Test Report: KVM_Linux_crio 20053

ee589ed5f2e38de21e277596fb8e32edfda5a06e:2024-12-05:37358

Test fail (32/315)

Order  Failed test  Duration (s)
36 TestAddons/parallel/Ingress 153.89
38 TestAddons/parallel/MetricsServer 302.46
47 TestAddons/StoppedEnableDisable 154.43
168 TestMultiControlPlane/serial/StopSecondaryNode 141.54
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 5.74
170 TestMultiControlPlane/serial/RestartSecondaryNode 6.39
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 6.22
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 382.82
175 TestMultiControlPlane/serial/StopCluster 142
235 TestMultiNode/serial/RestartKeepsNodes 334.78
237 TestMultiNode/serial/StopMultiNode 145.29
244 TestPreload 171.79
252 TestKubernetesUpgrade 395.61
289 TestPause/serial/SecondStartNoReconfiguration 77.23
322 TestStartStop/group/old-k8s-version/serial/FirstStart 289.55
343 TestStartStop/group/embed-certs/serial/Stop 139
347 TestStartStop/group/no-preload/serial/Stop 139.15
349 TestStartStop/group/default-k8s-diff-port/serial/Stop 138.99
350 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.42
352 TestStartStop/group/old-k8s-version/serial/DeployApp 0.53
353 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 97.78
354 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.42
355 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.42
360 TestStartStop/group/old-k8s-version/serial/SecondStart 704.37
361 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.47
362 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.5
363 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.53
364 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.76
365 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 474.99
366 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 403.21
367 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 336.81
368 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 162.28
TestAddons/parallel/Ingress (153.89s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-523528 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-523528 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-523528 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [37992d0a-3d60-4ceb-a462-c92a92f63360] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [37992d0a-3d60-4ceb-a462-c92a92f63360] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 11.003925698s
I1205 20:23:08.009487  300765 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-523528 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-523528 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.646109436s)

** stderr **
	ssh: Process exited with status 28
** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
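The failing step is the in-VM curl against the ingress controller: "ssh: Process exited with status 28" means the remote curl exited 28, curl's "operation timed out" error. A minimal manual reproduction of that check (a sketch only, assuming the addons-523528 profile from this run is still available and the test's working directory with its testdata manifests) uses the same commands the test invokes above:

	# Deploy the sample ingress and backend, then curl through the ingress from inside the VM.
	kubectl --context addons-523528 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
	kubectl --context addons-523528 replace --force -f testdata/nginx-ingress-v1.yaml
	kubectl --context addons-523528 replace --force -f testdata/nginx-pod-svc.yaml
	out/minikube-linux-amd64 -p addons-523528 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

Since the run=nginx pod became healthy within 11s, a timeout on the last command more likely points at ingress routing than at the backend pod itself.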
addons_test.go:286: (dbg) Run:  kubectl --context addons-523528 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-523528 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.217
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-523528 -n addons-523528
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-523528 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-523528 logs -n 25: (1.25898201s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                                                                       | minikube             | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC | 05 Dec 24 20:19 UTC |
	| delete  | -p download-only-565473                                                                     | download-only-565473 | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC | 05 Dec 24 20:19 UTC |
	| delete  | -p download-only-401320                                                                     | download-only-401320 | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC | 05 Dec 24 20:19 UTC |
	| delete  | -p download-only-565473                                                                     | download-only-565473 | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC | 05 Dec 24 20:19 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-326413 | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC |                     |
	|         | binary-mirror-326413                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39531                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-326413                                                                     | binary-mirror-326413 | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC | 05 Dec 24 20:19 UTC |
	| addons  | disable dashboard -p                                                                        | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC |                     |
	|         | addons-523528                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC |                     |
	|         | addons-523528                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-523528 --wait=true                                                                | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC | 05 Dec 24 20:21 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-523528 addons disable                                                                | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-523528 addons disable                                                                | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:22 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:22 UTC | 05 Dec 24 20:22 UTC |
	|         | -p addons-523528                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-523528 addons disable                                                                | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:22 UTC | 05 Dec 24 20:22 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-523528 addons disable                                                                | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:22 UTC | 05 Dec 24 20:22 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-523528 ip                                                                            | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:22 UTC | 05 Dec 24 20:22 UTC |
	| addons  | addons-523528 addons disable                                                                | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:22 UTC | 05 Dec 24 20:22 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-523528 addons                                                                        | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:22 UTC | 05 Dec 24 20:22 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-523528 addons                                                                        | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:22 UTC | 05 Dec 24 20:22 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-523528 ssh cat                                                                       | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:22 UTC | 05 Dec 24 20:22 UTC |
	|         | /opt/local-path-provisioner/pvc-24f2de26-a653-44d0-af2f-07e5589c431c_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-523528 addons disable                                                                | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:22 UTC | 05 Dec 24 20:22 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-523528 addons                                                                        | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:22 UTC | 05 Dec 24 20:23 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-523528 ssh curl -s                                                                   | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-523528 addons                                                                        | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-523528 addons                                                                        | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-523528 ip                                                                            | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:25 UTC |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:19:33
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:19:33.246721  301384 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:19:33.246895  301384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:19:33.246911  301384 out.go:358] Setting ErrFile to fd 2...
	I1205 20:19:33.246920  301384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:19:33.247530  301384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 20:19:33.248344  301384 out.go:352] Setting JSON to false
	I1205 20:19:33.249295  301384 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10921,"bootTime":1733419052,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:19:33.249416  301384 start.go:139] virtualization: kvm guest
	I1205 20:19:33.251428  301384 out.go:177] * [addons-523528] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:19:33.253108  301384 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 20:19:33.253112  301384 notify.go:220] Checking for updates...
	I1205 20:19:33.255698  301384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:19:33.256948  301384 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:19:33.258343  301384 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:19:33.259719  301384 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:19:33.261114  301384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:19:33.262778  301384 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:19:33.298479  301384 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 20:19:33.299905  301384 start.go:297] selected driver: kvm2
	I1205 20:19:33.299922  301384 start.go:901] validating driver "kvm2" against <nil>
	I1205 20:19:33.299937  301384 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:19:33.300810  301384 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:19:33.300904  301384 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:19:33.317692  301384 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:19:33.319078  301384 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 20:19:33.319637  301384 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:19:33.319691  301384 cni.go:84] Creating CNI manager for ""
	I1205 20:19:33.319957  301384 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:19:33.319987  301384 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 20:19:33.320088  301384 start.go:340] cluster config:
	{Name:addons-523528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-523528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:19:33.320252  301384 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:19:33.323007  301384 out.go:177] * Starting "addons-523528" primary control-plane node in "addons-523528" cluster
	I1205 20:19:33.324286  301384 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:19:33.324329  301384 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 20:19:33.324350  301384 cache.go:56] Caching tarball of preloaded images
	I1205 20:19:33.324452  301384 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:19:33.324464  301384 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:19:33.324776  301384 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/config.json ...
	I1205 20:19:33.324803  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/config.json: {Name:mkcf83816102e2d1597e39187ac57c2e822fd009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:19:33.324947  301384 start.go:360] acquireMachinesLock for addons-523528: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:19:33.324993  301384 start.go:364] duration metric: took 32.034µs to acquireMachinesLock for "addons-523528"
	I1205 20:19:33.325009  301384 start.go:93] Provisioning new machine with config: &{Name:addons-523528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:addons-523528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:19:33.325069  301384 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 20:19:33.326853  301384 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1205 20:19:33.327013  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:19:33.327042  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:19:33.342837  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36197
	I1205 20:19:33.343441  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:19:33.344100  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:19:33.344124  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:19:33.344563  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:19:33.344796  301384 main.go:141] libmachine: (addons-523528) Calling .GetMachineName
	I1205 20:19:33.344996  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:19:33.345197  301384 start.go:159] libmachine.API.Create for "addons-523528" (driver="kvm2")
	I1205 20:19:33.345228  301384 client.go:168] LocalClient.Create starting
	I1205 20:19:33.345277  301384 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem
	I1205 20:19:33.458356  301384 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem
	I1205 20:19:33.562438  301384 main.go:141] libmachine: Running pre-create checks...
	I1205 20:19:33.562468  301384 main.go:141] libmachine: (addons-523528) Calling .PreCreateCheck
	I1205 20:19:33.563032  301384 main.go:141] libmachine: (addons-523528) Calling .GetConfigRaw
	I1205 20:19:33.563519  301384 main.go:141] libmachine: Creating machine...
	I1205 20:19:33.563535  301384 main.go:141] libmachine: (addons-523528) Calling .Create
	I1205 20:19:33.563705  301384 main.go:141] libmachine: (addons-523528) Creating KVM machine...
	I1205 20:19:33.565245  301384 main.go:141] libmachine: (addons-523528) DBG | found existing default KVM network
	I1205 20:19:33.566208  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:33.566011  301406 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015cc0}
	I1205 20:19:33.566244  301384 main.go:141] libmachine: (addons-523528) DBG | created network xml: 
	I1205 20:19:33.566263  301384 main.go:141] libmachine: (addons-523528) DBG | <network>
	I1205 20:19:33.566272  301384 main.go:141] libmachine: (addons-523528) DBG |   <name>mk-addons-523528</name>
	I1205 20:19:33.566281  301384 main.go:141] libmachine: (addons-523528) DBG |   <dns enable='no'/>
	I1205 20:19:33.566287  301384 main.go:141] libmachine: (addons-523528) DBG |   
	I1205 20:19:33.566298  301384 main.go:141] libmachine: (addons-523528) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1205 20:19:33.566311  301384 main.go:141] libmachine: (addons-523528) DBG |     <dhcp>
	I1205 20:19:33.566321  301384 main.go:141] libmachine: (addons-523528) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1205 20:19:33.566329  301384 main.go:141] libmachine: (addons-523528) DBG |     </dhcp>
	I1205 20:19:33.566343  301384 main.go:141] libmachine: (addons-523528) DBG |   </ip>
	I1205 20:19:33.566356  301384 main.go:141] libmachine: (addons-523528) DBG |   
	I1205 20:19:33.566393  301384 main.go:141] libmachine: (addons-523528) DBG | </network>
	I1205 20:19:33.566417  301384 main.go:141] libmachine: (addons-523528) DBG | 
	I1205 20:19:33.571829  301384 main.go:141] libmachine: (addons-523528) DBG | trying to create private KVM network mk-addons-523528 192.168.39.0/24...
	I1205 20:19:33.642502  301384 main.go:141] libmachine: (addons-523528) DBG | private KVM network mk-addons-523528 192.168.39.0/24 created
	I1205 20:19:33.642591  301384 main.go:141] libmachine: (addons-523528) Setting up store path in /home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528 ...
	I1205 20:19:33.642622  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:33.642471  301406 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:19:33.642673  301384 main.go:141] libmachine: (addons-523528) Building disk image from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 20:19:33.642704  301384 main.go:141] libmachine: (addons-523528) Downloading /home/jenkins/minikube-integration/20053-293485/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 20:19:33.934614  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:33.934435  301406 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa...
	I1205 20:19:34.038956  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:34.038770  301406 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/addons-523528.rawdisk...
	I1205 20:19:34.038990  301384 main.go:141] libmachine: (addons-523528) DBG | Writing magic tar header
	I1205 20:19:34.039002  301384 main.go:141] libmachine: (addons-523528) DBG | Writing SSH key tar header
	I1205 20:19:34.039010  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:34.038905  301406 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528 ...
	I1205 20:19:34.039024  301384 main.go:141] libmachine: (addons-523528) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528
	I1205 20:19:34.039095  301384 main.go:141] libmachine: (addons-523528) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528 (perms=drwx------)
	I1205 20:19:34.039137  301384 main.go:141] libmachine: (addons-523528) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines (perms=drwxr-xr-x)
	I1205 20:19:34.039145  301384 main.go:141] libmachine: (addons-523528) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines
	I1205 20:19:34.039170  301384 main.go:141] libmachine: (addons-523528) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube (perms=drwxr-xr-x)
	I1205 20:19:34.039183  301384 main.go:141] libmachine: (addons-523528) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485 (perms=drwxrwxr-x)
	I1205 20:19:34.039192  301384 main.go:141] libmachine: (addons-523528) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:19:34.039206  301384 main.go:141] libmachine: (addons-523528) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485
	I1205 20:19:34.039214  301384 main.go:141] libmachine: (addons-523528) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 20:19:34.039223  301384 main.go:141] libmachine: (addons-523528) DBG | Checking permissions on dir: /home/jenkins
	I1205 20:19:34.039231  301384 main.go:141] libmachine: (addons-523528) DBG | Checking permissions on dir: /home
	I1205 20:19:34.039238  301384 main.go:141] libmachine: (addons-523528) DBG | Skipping /home - not owner
	I1205 20:19:34.039245  301384 main.go:141] libmachine: (addons-523528) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 20:19:34.039250  301384 main.go:141] libmachine: (addons-523528) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 20:19:34.039258  301384 main.go:141] libmachine: (addons-523528) Creating domain...
	I1205 20:19:34.040571  301384 main.go:141] libmachine: (addons-523528) define libvirt domain using xml: 
	I1205 20:19:34.040618  301384 main.go:141] libmachine: (addons-523528) <domain type='kvm'>
	I1205 20:19:34.040627  301384 main.go:141] libmachine: (addons-523528)   <name>addons-523528</name>
	I1205 20:19:34.040632  301384 main.go:141] libmachine: (addons-523528)   <memory unit='MiB'>4000</memory>
	I1205 20:19:34.040638  301384 main.go:141] libmachine: (addons-523528)   <vcpu>2</vcpu>
	I1205 20:19:34.040647  301384 main.go:141] libmachine: (addons-523528)   <features>
	I1205 20:19:34.040653  301384 main.go:141] libmachine: (addons-523528)     <acpi/>
	I1205 20:19:34.040660  301384 main.go:141] libmachine: (addons-523528)     <apic/>
	I1205 20:19:34.040666  301384 main.go:141] libmachine: (addons-523528)     <pae/>
	I1205 20:19:34.040670  301384 main.go:141] libmachine: (addons-523528)     
	I1205 20:19:34.040676  301384 main.go:141] libmachine: (addons-523528)   </features>
	I1205 20:19:34.040682  301384 main.go:141] libmachine: (addons-523528)   <cpu mode='host-passthrough'>
	I1205 20:19:34.040687  301384 main.go:141] libmachine: (addons-523528)   
	I1205 20:19:34.040698  301384 main.go:141] libmachine: (addons-523528)   </cpu>
	I1205 20:19:34.040704  301384 main.go:141] libmachine: (addons-523528)   <os>
	I1205 20:19:34.040711  301384 main.go:141] libmachine: (addons-523528)     <type>hvm</type>
	I1205 20:19:34.040721  301384 main.go:141] libmachine: (addons-523528)     <boot dev='cdrom'/>
	I1205 20:19:34.040729  301384 main.go:141] libmachine: (addons-523528)     <boot dev='hd'/>
	I1205 20:19:34.040735  301384 main.go:141] libmachine: (addons-523528)     <bootmenu enable='no'/>
	I1205 20:19:34.040742  301384 main.go:141] libmachine: (addons-523528)   </os>
	I1205 20:19:34.040748  301384 main.go:141] libmachine: (addons-523528)   <devices>
	I1205 20:19:34.040757  301384 main.go:141] libmachine: (addons-523528)     <disk type='file' device='cdrom'>
	I1205 20:19:34.040767  301384 main.go:141] libmachine: (addons-523528)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/boot2docker.iso'/>
	I1205 20:19:34.040779  301384 main.go:141] libmachine: (addons-523528)       <target dev='hdc' bus='scsi'/>
	I1205 20:19:34.040785  301384 main.go:141] libmachine: (addons-523528)       <readonly/>
	I1205 20:19:34.040794  301384 main.go:141] libmachine: (addons-523528)     </disk>
	I1205 20:19:34.040801  301384 main.go:141] libmachine: (addons-523528)     <disk type='file' device='disk'>
	I1205 20:19:34.040808  301384 main.go:141] libmachine: (addons-523528)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 20:19:34.040816  301384 main.go:141] libmachine: (addons-523528)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/addons-523528.rawdisk'/>
	I1205 20:19:34.040822  301384 main.go:141] libmachine: (addons-523528)       <target dev='hda' bus='virtio'/>
	I1205 20:19:34.040827  301384 main.go:141] libmachine: (addons-523528)     </disk>
	I1205 20:19:34.040832  301384 main.go:141] libmachine: (addons-523528)     <interface type='network'>
	I1205 20:19:34.040839  301384 main.go:141] libmachine: (addons-523528)       <source network='mk-addons-523528'/>
	I1205 20:19:34.040847  301384 main.go:141] libmachine: (addons-523528)       <model type='virtio'/>
	I1205 20:19:34.040852  301384 main.go:141] libmachine: (addons-523528)     </interface>
	I1205 20:19:34.040857  301384 main.go:141] libmachine: (addons-523528)     <interface type='network'>
	I1205 20:19:34.040864  301384 main.go:141] libmachine: (addons-523528)       <source network='default'/>
	I1205 20:19:34.040869  301384 main.go:141] libmachine: (addons-523528)       <model type='virtio'/>
	I1205 20:19:34.040877  301384 main.go:141] libmachine: (addons-523528)     </interface>
	I1205 20:19:34.040882  301384 main.go:141] libmachine: (addons-523528)     <serial type='pty'>
	I1205 20:19:34.040888  301384 main.go:141] libmachine: (addons-523528)       <target port='0'/>
	I1205 20:19:34.040893  301384 main.go:141] libmachine: (addons-523528)     </serial>
	I1205 20:19:34.040934  301384 main.go:141] libmachine: (addons-523528)     <console type='pty'>
	I1205 20:19:34.040952  301384 main.go:141] libmachine: (addons-523528)       <target type='serial' port='0'/>
	I1205 20:19:34.040959  301384 main.go:141] libmachine: (addons-523528)     </console>
	I1205 20:19:34.040965  301384 main.go:141] libmachine: (addons-523528)     <rng model='virtio'>
	I1205 20:19:34.040973  301384 main.go:141] libmachine: (addons-523528)       <backend model='random'>/dev/random</backend>
	I1205 20:19:34.040978  301384 main.go:141] libmachine: (addons-523528)     </rng>
	I1205 20:19:34.040984  301384 main.go:141] libmachine: (addons-523528)     
	I1205 20:19:34.040993  301384 main.go:141] libmachine: (addons-523528)     
	I1205 20:19:34.040998  301384 main.go:141] libmachine: (addons-523528)   </devices>
	I1205 20:19:34.041003  301384 main.go:141] libmachine: (addons-523528) </domain>
	I1205 20:19:34.041011  301384 main.go:141] libmachine: (addons-523528) 
	I1205 20:19:34.045612  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:0a:84:86 in network default
	I1205 20:19:34.046248  301384 main.go:141] libmachine: (addons-523528) Ensuring networks are active...
	I1205 20:19:34.046297  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:34.046991  301384 main.go:141] libmachine: (addons-523528) Ensuring network default is active
	I1205 20:19:34.047304  301384 main.go:141] libmachine: (addons-523528) Ensuring network mk-addons-523528 is active
	I1205 20:19:34.047857  301384 main.go:141] libmachine: (addons-523528) Getting domain xml...
	I1205 20:19:34.048710  301384 main.go:141] libmachine: (addons-523528) Creating domain...
	I1205 20:19:35.303598  301384 main.go:141] libmachine: (addons-523528) Waiting to get IP...
	I1205 20:19:35.304435  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:35.304927  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:35.304979  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:35.304915  301406 retry.go:31] will retry after 288.523272ms: waiting for machine to come up
	I1205 20:19:35.595694  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:35.596190  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:35.596226  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:35.596142  301406 retry.go:31] will retry after 260.471732ms: waiting for machine to come up
	I1205 20:19:35.858781  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:35.859323  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:35.859357  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:35.859261  301406 retry.go:31] will retry after 407.556596ms: waiting for machine to come up
	I1205 20:19:36.269223  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:36.269706  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:36.269731  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:36.269668  301406 retry.go:31] will retry after 375.887724ms: waiting for machine to come up
	I1205 20:19:36.647392  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:36.647749  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:36.647781  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:36.647693  301406 retry.go:31] will retry after 684.620456ms: waiting for machine to come up
	I1205 20:19:37.333667  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:37.334176  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:37.334201  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:37.334125  301406 retry.go:31] will retry after 925.442052ms: waiting for machine to come up
	I1205 20:19:38.261294  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:38.261731  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:38.261759  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:38.261690  301406 retry.go:31] will retry after 1.016520828s: waiting for machine to come up
	I1205 20:19:39.279596  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:39.280130  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:39.280166  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:39.280075  301406 retry.go:31] will retry after 1.34038701s: waiting for machine to come up
	I1205 20:19:40.623073  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:40.623631  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:40.623665  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:40.623550  301406 retry.go:31] will retry after 1.472535213s: waiting for machine to come up
	I1205 20:19:42.098424  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:42.098929  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:42.098959  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:42.098881  301406 retry.go:31] will retry after 1.790209374s: waiting for machine to come up
	I1205 20:19:43.891291  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:43.891867  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:43.891900  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:43.891804  301406 retry.go:31] will retry after 2.201804102s: waiting for machine to come up
	I1205 20:19:46.096364  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:46.096908  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:46.096933  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:46.096849  301406 retry.go:31] will retry after 2.743938954s: waiting for machine to come up
	I1205 20:19:48.842851  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:48.844025  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:48.844054  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:48.843955  301406 retry.go:31] will retry after 3.796103066s: waiting for machine to come up
	I1205 20:19:52.644983  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:52.645362  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:52.645388  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:52.645313  301406 retry.go:31] will retry after 4.704422991s: waiting for machine to come up
	I1205 20:19:57.354576  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:57.355137  301384 main.go:141] libmachine: (addons-523528) Found IP for machine: 192.168.39.217
	I1205 20:19:57.355160  301384 main.go:141] libmachine: (addons-523528) Reserving static IP address...
	I1205 20:19:57.355168  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has current primary IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:57.355501  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find host DHCP lease matching {name: "addons-523528", mac: "52:54:00:94:d3:2c", ip: "192.168.39.217"} in network mk-addons-523528
	I1205 20:19:57.447752  301384 main.go:141] libmachine: (addons-523528) DBG | Getting to WaitForSSH function...
	I1205 20:19:57.447801  301384 main.go:141] libmachine: (addons-523528) Reserved static IP address: 192.168.39.217
	I1205 20:19:57.447815  301384 main.go:141] libmachine: (addons-523528) Waiting for SSH to be available...
	I1205 20:19:57.450644  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:57.451038  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528
	I1205 20:19:57.451069  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find defined IP address of network mk-addons-523528 interface with MAC address 52:54:00:94:d3:2c
	I1205 20:19:57.451181  301384 main.go:141] libmachine: (addons-523528) DBG | Using SSH client type: external
	I1205 20:19:57.451217  301384 main.go:141] libmachine: (addons-523528) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa (-rw-------)
	I1205 20:19:57.451251  301384 main.go:141] libmachine: (addons-523528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:19:57.451274  301384 main.go:141] libmachine: (addons-523528) DBG | About to run SSH command:
	I1205 20:19:57.451294  301384 main.go:141] libmachine: (addons-523528) DBG | exit 0
	I1205 20:19:57.455367  301384 main.go:141] libmachine: (addons-523528) DBG | SSH cmd err, output: exit status 255: 
	I1205 20:19:57.455392  301384 main.go:141] libmachine: (addons-523528) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1205 20:19:57.455399  301384 main.go:141] libmachine: (addons-523528) DBG | command : exit 0
	I1205 20:19:57.455404  301384 main.go:141] libmachine: (addons-523528) DBG | err     : exit status 255
	I1205 20:19:57.455412  301384 main.go:141] libmachine: (addons-523528) DBG | output  : 
	I1205 20:20:00.457296  301384 main.go:141] libmachine: (addons-523528) DBG | Getting to WaitForSSH function...
	I1205 20:20:00.460357  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:00.460919  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:00.460956  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:00.461160  301384 main.go:141] libmachine: (addons-523528) DBG | Using SSH client type: external
	I1205 20:20:00.461181  301384 main.go:141] libmachine: (addons-523528) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa (-rw-------)
	I1205 20:20:00.461259  301384 main.go:141] libmachine: (addons-523528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:20:00.461294  301384 main.go:141] libmachine: (addons-523528) DBG | About to run SSH command:
	I1205 20:20:00.461344  301384 main.go:141] libmachine: (addons-523528) DBG | exit 0
	I1205 20:20:00.590288  301384 main.go:141] libmachine: (addons-523528) DBG | SSH cmd err, output: <nil>: 
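The successful probe above is the external SSH client that libmachine shells out to; with the options shown in the DBG lines it is roughly the following command (an illustrative sketch assembled from this run's log, with the key path and guest IP exactly as recorded):

    # probe the guest the way libmachine's WaitForSSH does (sketch)
    ssh -F /dev/null \
        -o ConnectionAttempts=3 -o ConnectTimeout=10 \
        -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet \
        -o PasswordAuthentication=no -o ServerAliveInterval=60 \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        -o IdentitiesOnly=yes \
        -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa \
        -p 22 docker@192.168.39.217 'exit 0'
    echo $?   # 255 while the guest is still booting (as at 20:19:57), 0 once sshd is up

The earlier attempt at 20:19:57 failed with exit status 255 because no DHCP lease had been handed out yet, so the target was an empty "docker@"; the retry three seconds later succeeds once 192.168.39.217 is known.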
	I1205 20:20:00.590641  301384 main.go:141] libmachine: (addons-523528) KVM machine creation complete!
	I1205 20:20:00.590987  301384 main.go:141] libmachine: (addons-523528) Calling .GetConfigRaw
	I1205 20:20:00.596577  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:00.596924  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:00.597140  301384 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 20:20:00.597159  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:00.598613  301384 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 20:20:00.598633  301384 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 20:20:00.598641  301384 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 20:20:00.598650  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:00.601112  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:00.601483  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:00.601512  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:00.601677  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:00.601857  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:00.602010  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:00.602165  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:00.602329  301384 main.go:141] libmachine: Using SSH client type: native
	I1205 20:20:00.602550  301384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I1205 20:20:00.602560  301384 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 20:20:00.713436  301384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:20:00.713464  301384 main.go:141] libmachine: Detecting the provisioner...
	I1205 20:20:00.713474  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:00.716766  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:00.717245  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:00.717285  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:00.717501  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:00.717762  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:00.717964  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:00.718114  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:00.718258  301384 main.go:141] libmachine: Using SSH client type: native
	I1205 20:20:00.718481  301384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I1205 20:20:00.718496  301384 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 20:20:00.834874  301384 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 20:20:00.834951  301384 main.go:141] libmachine: found compatible host: buildroot
	I1205 20:20:00.834958  301384 main.go:141] libmachine: Provisioning with buildroot...
	I1205 20:20:00.834967  301384 main.go:141] libmachine: (addons-523528) Calling .GetMachineName
	I1205 20:20:00.835251  301384 buildroot.go:166] provisioning hostname "addons-523528"
	I1205 20:20:00.835286  301384 main.go:141] libmachine: (addons-523528) Calling .GetMachineName
	I1205 20:20:00.835528  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:00.838610  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:00.839001  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:00.839036  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:00.839175  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:00.839388  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:00.839575  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:00.839753  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:00.839893  301384 main.go:141] libmachine: Using SSH client type: native
	I1205 20:20:00.840077  301384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I1205 20:20:00.840096  301384 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-523528 && echo "addons-523528" | sudo tee /etc/hostname
	I1205 20:20:00.967639  301384 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-523528
	
	I1205 20:20:00.967677  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:00.970568  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:00.970855  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:00.970885  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:00.971093  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:00.971338  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:00.971573  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:00.971714  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:00.971873  301384 main.go:141] libmachine: Using SSH client type: native
	I1205 20:20:00.972117  301384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I1205 20:20:00.972136  301384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-523528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-523528/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-523528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:20:01.094575  301384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:20:01.094613  301384 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 20:20:01.094673  301384 buildroot.go:174] setting up certificates
	I1205 20:20:01.094710  301384 provision.go:84] configureAuth start
	I1205 20:20:01.094730  301384 main.go:141] libmachine: (addons-523528) Calling .GetMachineName
	I1205 20:20:01.095100  301384 main.go:141] libmachine: (addons-523528) Calling .GetIP
	I1205 20:20:01.098450  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.098923  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:01.098949  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.099211  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:01.101814  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.102196  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:01.102230  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.102382  301384 provision.go:143] copyHostCerts
	I1205 20:20:01.102490  301384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 20:20:01.102645  301384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 20:20:01.102736  301384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 20:20:01.102820  301384 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.addons-523528 san=[127.0.0.1 192.168.39.217 addons-523528 localhost minikube]
	I1205 20:20:01.454061  301384 provision.go:177] copyRemoteCerts
	I1205 20:20:01.454146  301384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:20:01.454177  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:01.456887  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.457283  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:01.457324  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.457495  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:01.457770  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:01.458019  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:01.458249  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:01.544283  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 20:20:01.568008  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 20:20:01.591863  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:20:01.615378  301384 provision.go:87] duration metric: took 520.644427ms to configureAuth
	I1205 20:20:01.615420  301384 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:20:01.615616  301384 config.go:182] Loaded profile config "addons-523528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:20:01.615707  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:01.618585  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.619019  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:01.619056  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.619206  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:01.619439  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:01.619601  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:01.619751  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:01.619955  301384 main.go:141] libmachine: Using SSH client type: native
	I1205 20:20:01.620152  301384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I1205 20:20:01.620180  301384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:20:01.848966  301384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
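The restart above only has an effect if CRI-O actually reads /etc/sysconfig/crio.minikube; in the minikube ISO the crio unit is assumed to source that file as an environment file (an assumption, not visible in this log). A rough way to spot-check that the --insecure-registry flag survived the restart:

    # sketch: confirm the flag written to /etc/sysconfig/crio.minikube reached the daemon
    cat /etc/sysconfig/crio.minikube     # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    ps -o args= -C crio | tr ' ' '\n' | grep -A1 -- --insecure-registry   # only if the unit expands the variable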
	I1205 20:20:01.849000  301384 main.go:141] libmachine: Checking connection to Docker...
	I1205 20:20:01.849011  301384 main.go:141] libmachine: (addons-523528) Calling .GetURL
	I1205 20:20:01.850481  301384 main.go:141] libmachine: (addons-523528) DBG | Using libvirt version 6000000
	I1205 20:20:01.853191  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.853510  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:01.853549  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.853715  301384 main.go:141] libmachine: Docker is up and running!
	I1205 20:20:01.853729  301384 main.go:141] libmachine: Reticulating splines...
	I1205 20:20:01.853739  301384 client.go:171] duration metric: took 28.508501597s to LocalClient.Create
	I1205 20:20:01.853772  301384 start.go:167] duration metric: took 28.508575657s to libmachine.API.Create "addons-523528"
	I1205 20:20:01.853784  301384 start.go:293] postStartSetup for "addons-523528" (driver="kvm2")
	I1205 20:20:01.853795  301384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:20:01.853813  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:01.854094  301384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:20:01.854122  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:01.856663  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.857087  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:01.857120  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.857358  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:01.857579  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:01.857758  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:01.857894  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:01.944604  301384 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:20:01.949011  301384 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:20:01.949060  301384 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 20:20:01.949150  301384 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 20:20:01.949180  301384 start.go:296] duration metric: took 95.390319ms for postStartSetup
	I1205 20:20:01.949221  301384 main.go:141] libmachine: (addons-523528) Calling .GetConfigRaw
	I1205 20:20:01.949963  301384 main.go:141] libmachine: (addons-523528) Calling .GetIP
	I1205 20:20:01.952800  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.953152  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:01.953180  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.953443  301384 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/config.json ...
	I1205 20:20:01.953714  301384 start.go:128] duration metric: took 28.628629517s to createHost
	I1205 20:20:01.953760  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:01.956393  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.956782  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:01.956817  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.957005  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:01.957255  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:01.957403  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:01.957540  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:01.957694  301384 main.go:141] libmachine: Using SSH client type: native
	I1205 20:20:01.957861  301384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I1205 20:20:01.957879  301384 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:20:02.070771  301384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430002.051669102
	
	I1205 20:20:02.070800  301384 fix.go:216] guest clock: 1733430002.051669102
	I1205 20:20:02.070811  301384 fix.go:229] Guest: 2024-12-05 20:20:02.051669102 +0000 UTC Remote: 2024-12-05 20:20:01.953734015 +0000 UTC m=+28.748670592 (delta=97.935087ms)
	I1205 20:20:02.070847  301384 fix.go:200] guest clock delta is within tolerance: 97.935087ms
	I1205 20:20:02.070855  301384 start.go:83] releasing machines lock for "addons-523528", held for 28.745853584s
	I1205 20:20:02.070890  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:02.071210  301384 main.go:141] libmachine: (addons-523528) Calling .GetIP
	I1205 20:20:02.074491  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:02.074950  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:02.074989  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:02.075093  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:02.075698  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:02.075894  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:02.076014  301384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:20:02.076077  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:02.076164  301384 ssh_runner.go:195] Run: cat /version.json
	I1205 20:20:02.076195  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:02.078877  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:02.079281  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:02.079311  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:02.079430  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:02.079459  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:02.079735  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:02.079837  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:02.079862  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:02.079942  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:02.080064  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:02.080177  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:02.080212  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:02.080349  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:02.080506  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:02.158917  301384 ssh_runner.go:195] Run: systemctl --version
	I1205 20:20:02.184609  301384 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:20:02.347283  301384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:20:02.353263  301384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:20:02.353365  301384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:20:02.370200  301384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:20:02.370235  301384 start.go:495] detecting cgroup driver to use...
	I1205 20:20:02.370324  301384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:20:02.386928  301384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:20:02.401423  301384 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:20:02.401496  301384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:20:02.416460  301384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:20:02.430813  301384 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:20:02.546439  301384 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:20:02.682728  301384 docker.go:233] disabling docker service ...
	I1205 20:20:02.682808  301384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:20:02.697612  301384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:20:02.712529  301384 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:20:02.860566  301384 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:20:02.978121  301384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:20:02.992020  301384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:20:03.011713  301384 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:20:03.011796  301384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:20:03.022367  301384 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:20:03.022467  301384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:20:03.032974  301384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:20:03.043656  301384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:20:03.054139  301384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:20:03.065871  301384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:20:03.076197  301384 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:20:03.093547  301384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
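The sed edits between 20:20:03.011 and 20:20:03.093 pin the pause image, force the cgroupfs cgroup manager, move conmon into the pod cgroup, and open unprivileged low ports via default_sysctls. The resulting drop-in can be read back directly (sketch, same path as in the log):

    # sketch: read back the values the run just wrote into the CRI-O drop-in
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, per the commands above:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",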
	I1205 20:20:03.103722  301384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:20:03.112881  301384 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:20:03.112993  301384 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:20:03.125464  301384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
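The sysctl probe at 20:20:03.103 fails simply because br_netfilter is not loaded yet, so the run loads the module and then enables IPv4 forwarding by hand. Both bridge-CNI prerequisites can be verified on the guest afterwards (illustrative sketch):

    # sketch: verify the bridge-netfilter module and forwarding are in place for the bridge CNI
    lsmod | grep br_netfilter
    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # both normally report 1 at this point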
	I1205 20:20:03.134972  301384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:20:03.249798  301384 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:20:03.335041  301384 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:20:03.335152  301384 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:20:03.339814  301384 start.go:563] Will wait 60s for crictl version
	I1205 20:20:03.339902  301384 ssh_runner.go:195] Run: which crictl
	I1205 20:20:03.343676  301384 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:20:03.380554  301384 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:20:03.380672  301384 ssh_runner.go:195] Run: crio --version
	I1205 20:20:03.409215  301384 ssh_runner.go:195] Run: crio --version
	I1205 20:20:03.439015  301384 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:20:03.440399  301384 main.go:141] libmachine: (addons-523528) Calling .GetIP
	I1205 20:20:03.443263  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:03.443531  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:03.443562  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:03.443822  301384 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:20:03.447905  301384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
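The one-liner above rewrites /etc/hosts atomically: keep every line except a stale host.minikube.internal entry, append the libvirt gateway address for it, and copy the temp file back with sudo. Unrolled for readability (same logic, illustrative only):

    # sketch: what the /etc/hosts update above does, step by step
    {
      grep -v $'\thost.minikube.internal$' /etc/hosts    # drop any old entry
      echo $'192.168.39.1\thost.minikube.internal'       # re-add it, pointing at the host-side gateway
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts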
	I1205 20:20:03.460388  301384 kubeadm.go:883] updating cluster {Name:addons-523528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-523528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:20:03.460561  301384 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:20:03.460624  301384 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:20:03.491301  301384 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:20:03.491386  301384 ssh_runner.go:195] Run: which lz4
	I1205 20:20:03.495365  301384 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:20:03.499531  301384 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:20:03.499577  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 20:20:04.766466  301384 crio.go:462] duration metric: took 1.271160326s to copy over tarball
	I1205 20:20:04.766554  301384 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:20:06.930598  301384 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16400808s)
	I1205 20:20:06.930634  301384 crio.go:469] duration metric: took 2.164129344s to extract the tarball
	I1205 20:20:06.930645  301384 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:20:06.967895  301384 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:20:07.013541  301384 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:20:07.013570  301384 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:20:07.013580  301384 kubeadm.go:934] updating node { 192.168.39.217 8443 v1.31.2 crio true true} ...
	I1205 20:20:07.013702  301384 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-523528 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-523528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:20:07.013781  301384 ssh_runner.go:195] Run: crio config
	I1205 20:20:07.060987  301384 cni.go:84] Creating CNI manager for ""
	I1205 20:20:07.061014  301384 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:20:07.061029  301384 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:20:07.061054  301384 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-523528 NodeName:addons-523528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:20:07.061225  301384 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-523528"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.217"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:20:07.061299  301384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:20:07.071252  301384 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:20:07.071347  301384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:20:07.081004  301384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1205 20:20:07.098213  301384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:20:07.115425  301384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
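At this point the rendered kubeadm config shown above sits in /var/tmp/minikube/kubeadm.yaml.new (2293 bytes). The test itself does not do this, but when a later start fails at init the same file can be exercised against the pinned kubeadm binary with a dry run (sketch):

    # sketch: dry-run the generated config with the kubeadm binary minikube staged
    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init \
        --config /var/tmp/minikube/kubeadm.yaml.new --dry-run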
	I1205 20:20:07.133878  301384 ssh_runner.go:195] Run: grep 192.168.39.217	control-plane.minikube.internal$ /etc/hosts
	I1205 20:20:07.137805  301384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.217	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:20:07.150232  301384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:20:07.263153  301384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:20:07.279871  301384 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528 for IP: 192.168.39.217
	I1205 20:20:07.279909  301384 certs.go:194] generating shared ca certs ...
	I1205 20:20:07.279937  301384 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:07.280135  301384 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 20:20:07.395635  301384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt ...
	I1205 20:20:07.395667  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt: {Name:mk598ca4d7b2f2ba8ce81c3c8132e48b13537f78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:07.395862  301384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key ...
	I1205 20:20:07.395878  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key: {Name:mk83416aa7315c4e40f0f1eeff10d00de09bd0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:07.395960  301384 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 20:20:07.654379  301384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt ...
	I1205 20:20:07.654416  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt: {Name:mkc5cdb7edc0f3ac1fb912d4d8803c8e80c04ebd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:07.654603  301384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key ...
	I1205 20:20:07.654616  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key: {Name:mkce7d3edd856411f5d2ba3b813e7b9cfd75334b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:07.654691  301384 certs.go:256] generating profile certs ...
	I1205 20:20:07.654757  301384 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.key
	I1205 20:20:07.654774  301384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt with IP's: []
	I1205 20:20:07.727531  301384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt ...
	I1205 20:20:07.727565  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: {Name:mkcdb6845fb061759df75a93736df390b88fb800 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:07.727745  301384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.key ...
	I1205 20:20:07.727758  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.key: {Name:mk2e69a255c11c2698ff991f2164ea3226b1f8b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:07.727827  301384 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.key.5f22347a
	I1205 20:20:07.727843  301384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.crt.5f22347a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217]
	I1205 20:20:07.857159  301384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.crt.5f22347a ...
	I1205 20:20:07.857204  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.crt.5f22347a: {Name:mkfba0a63780c9d7cc7f608e78abcaa750f1d22f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:07.857418  301384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.key.5f22347a ...
	I1205 20:20:07.857435  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.key.5f22347a: {Name:mkcbe7ab3675156acb84d5a1d625e8d5861e03bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:07.857521  301384 certs.go:381] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.crt.5f22347a -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.crt
	I1205 20:20:07.857608  301384 certs.go:385] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.key.5f22347a -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.key
	I1205 20:20:07.857666  301384 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/proxy-client.key
	I1205 20:20:07.857688  301384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/proxy-client.crt with IP's: []
	I1205 20:20:08.186597  301384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/proxy-client.crt ...
	I1205 20:20:08.186647  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/proxy-client.crt: {Name:mk528f606870a34a4fe0369fd12aef887f8e944e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:08.186918  301384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/proxy-client.key ...
	I1205 20:20:08.186945  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/proxy-client.key: {Name:mk8021fb4afe372e79599f2055cd1222512cfb7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
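The apiserver certificate generated at 20:20:07.727 embeds the SANs 10.96.0.1, 127.0.0.1, 10.0.0.1 and 192.168.39.217. If a later TLS failure needs debugging, those SANs can be printed straight back out of the profile cert (sketch, path as in this log):

    # sketch: confirm the SANs baked into the generated apiserver certificate
    openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.crt \
        | grep -A1 'Subject Alternative Name'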
	I1205 20:20:08.187260  301384 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:20:08.187331  301384 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 20:20:08.187382  301384 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:20:08.187429  301384 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 20:20:08.188452  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:20:08.220004  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:20:08.251359  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:20:08.279067  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:20:08.303021  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 20:20:08.331083  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:20:08.355025  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:20:08.379971  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:20:08.405445  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:20:08.429467  301384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:20:08.446144  301384 ssh_runner.go:195] Run: openssl version
	I1205 20:20:08.453301  301384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:20:08.465812  301384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:20:08.470188  301384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:20:08.470257  301384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:20:08.475886  301384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
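The symlink name b5213941.0 is the OpenSSL subject-name hash of minikubeCA.pem, which is exactly what the openssl x509 -hash -noout call two lines earlier computed; OpenSSL looks CA certificates up in /etc/ssl/certs by that hash. The link can be checked for consistency like this (sketch):

    # sketch: the link name must equal the CA's subject hash for lookup to work
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # -> /etc/ssl/certs/minikubeCA.pem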
	I1205 20:20:08.487068  301384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:20:08.491434  301384 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 20:20:08.491495  301384 kubeadm.go:392] StartCluster: {Name:addons-523528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-523528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:20:08.491585  301384 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:20:08.491667  301384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:20:08.525643  301384 cri.go:89] found id: ""
	I1205 20:20:08.525750  301384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:20:08.536676  301384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:20:08.546447  301384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:20:08.556866  301384 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:20:08.556892  301384 kubeadm.go:157] found existing configuration files:
	
	I1205 20:20:08.556944  301384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:20:08.566009  301384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:20:08.566082  301384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:20:08.575970  301384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:20:08.585013  301384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:20:08.585103  301384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:20:08.594776  301384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:20:08.604052  301384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:20:08.604134  301384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:20:08.613849  301384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:20:08.623651  301384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:20:08.623716  301384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:20:08.633714  301384 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:20:08.679513  301384 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 20:20:08.679591  301384 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:20:08.772925  301384 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:20:08.773040  301384 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:20:08.773130  301384 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 20:20:08.783233  301384 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:20:08.870466  301384 out.go:235]   - Generating certificates and keys ...
	I1205 20:20:08.870629  301384 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:20:08.870720  301384 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:20:09.152354  301384 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 20:20:09.358827  301384 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 20:20:09.572333  301384 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 20:20:09.858133  301384 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 20:20:09.972048  301384 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 20:20:09.972188  301384 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-523528 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I1205 20:20:10.150273  301384 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 20:20:10.150475  301384 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-523528 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I1205 20:20:10.297718  301384 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 20:20:10.461418  301384 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 20:20:10.702437  301384 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 20:20:10.702552  301384 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:20:10.935886  301384 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:20:11.026397  301384 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:20:11.105329  301384 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:20:11.199772  301384 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:20:11.470328  301384 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:20:11.470791  301384 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:20:11.473121  301384 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:20:11.476103  301384 out.go:235]   - Booting up control plane ...
	I1205 20:20:11.476237  301384 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:20:11.476331  301384 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:20:11.476420  301384 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:20:11.491125  301384 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:20:11.498885  301384 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:20:11.498941  301384 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:20:11.631538  301384 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 20:20:11.631702  301384 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 20:20:12.133170  301384 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.156057ms
	I1205 20:20:12.133266  301384 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 20:20:17.133570  301384 kubeadm.go:310] [api-check] The API server is healthy after 5.002595818s
	I1205 20:20:17.146482  301384 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:20:17.166198  301384 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:20:17.198097  301384 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:20:17.198388  301384 kubeadm.go:310] [mark-control-plane] Marking the node addons-523528 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:20:17.211782  301384 kubeadm.go:310] [bootstrap-token] Using token: uojl9c.ccr1m56n9aagwo8s
	I1205 20:20:17.213476  301384 out.go:235]   - Configuring RBAC rules ...
	I1205 20:20:17.213649  301384 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:20:17.224888  301384 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:20:17.236418  301384 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:20:17.239866  301384 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:20:17.243989  301384 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:20:17.248377  301384 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:20:17.541238  301384 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:20:17.976752  301384 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 20:20:18.540875  301384 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 20:20:18.540908  301384 kubeadm.go:310] 
	I1205 20:20:18.541001  301384 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 20:20:18.541011  301384 kubeadm.go:310] 
	I1205 20:20:18.541147  301384 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 20:20:18.541161  301384 kubeadm.go:310] 
	I1205 20:20:18.541194  301384 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 20:20:18.541305  301384 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:20:18.541401  301384 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:20:18.541415  301384 kubeadm.go:310] 
	I1205 20:20:18.541487  301384 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 20:20:18.541495  301384 kubeadm.go:310] 
	I1205 20:20:18.541572  301384 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:20:18.541583  301384 kubeadm.go:310] 
	I1205 20:20:18.541666  301384 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 20:20:18.541775  301384 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:20:18.541866  301384 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:20:18.541875  301384 kubeadm.go:310] 
	I1205 20:20:18.542003  301384 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:20:18.542118  301384 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 20:20:18.542136  301384 kubeadm.go:310] 
	I1205 20:20:18.542244  301384 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uojl9c.ccr1m56n9aagwo8s \
	I1205 20:20:18.542375  301384 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 \
	I1205 20:20:18.542418  301384 kubeadm.go:310] 	--control-plane 
	I1205 20:20:18.542434  301384 kubeadm.go:310] 
	I1205 20:20:18.542539  301384 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:20:18.542552  301384 kubeadm.go:310] 
	I1205 20:20:18.542658  301384 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uojl9c.ccr1m56n9aagwo8s \
	I1205 20:20:18.542780  301384 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 
	I1205 20:20:18.543064  301384 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:20:18.543110  301384 cni.go:84] Creating CNI manager for ""
	I1205 20:20:18.543123  301384 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:20:18.544734  301384 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:20:18.546375  301384 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:20:18.558851  301384 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
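	(Note: the 496-byte conflist copied to /etc/cni/net.d/1-k8s.conflist above is not reproduced in this log. A minimal bridge CNI configuration of that kind typically looks like the following sketch; the field values are illustrative assumptions, not the exact file minikube wrote here.)
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "addIf": "true",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}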
	I1205 20:20:18.579242  301384 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:20:18.579314  301384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:20:18.579314  301384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-523528 minikube.k8s.io/updated_at=2024_12_05T20_20_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=addons-523528 minikube.k8s.io/primary=true
	I1205 20:20:18.611554  301384 ops.go:34] apiserver oom_adj: -16
	I1205 20:20:18.702225  301384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:20:19.202392  301384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:20:19.703359  301384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:20:20.202551  301384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:20:20.703209  301384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:20:21.203095  301384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:20:21.703342  301384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:20:22.202519  301384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:20:22.702733  301384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:20:22.814614  301384 kubeadm.go:1113] duration metric: took 4.235374312s to wait for elevateKubeSystemPrivileges
	I1205 20:20:22.814672  301384 kubeadm.go:394] duration metric: took 14.323182287s to StartCluster
	I1205 20:20:22.814698  301384 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:22.814852  301384 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:20:22.815376  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:22.815614  301384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:20:22.815666  301384 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:20:22.815725  301384 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1205 20:20:22.815870  301384 addons.go:69] Setting yakd=true in profile "addons-523528"
	I1205 20:20:22.815886  301384 addons.go:69] Setting inspektor-gadget=true in profile "addons-523528"
	I1205 20:20:22.815910  301384 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-523528"
	I1205 20:20:22.815912  301384 config.go:182] Loaded profile config "addons-523528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:20:22.815927  301384 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-523528"
	I1205 20:20:22.815943  301384 addons.go:69] Setting metrics-server=true in profile "addons-523528"
	I1205 20:20:22.815955  301384 addons.go:69] Setting cloud-spanner=true in profile "addons-523528"
	I1205 20:20:22.815956  301384 addons.go:69] Setting volumesnapshots=true in profile "addons-523528"
	I1205 20:20:22.815956  301384 addons.go:69] Setting volcano=true in profile "addons-523528"
	I1205 20:20:22.815967  301384 addons.go:234] Setting addon cloud-spanner=true in "addons-523528"
	I1205 20:20:22.815969  301384 addons.go:69] Setting gcp-auth=true in profile "addons-523528"
	I1205 20:20:22.815971  301384 addons.go:234] Setting addon volumesnapshots=true in "addons-523528"
	I1205 20:20:22.815977  301384 addons.go:234] Setting addon volcano=true in "addons-523528"
	I1205 20:20:22.815988  301384 mustload.go:65] Loading cluster: addons-523528
	I1205 20:20:22.815993  301384 addons.go:69] Setting ingress-dns=true in profile "addons-523528"
	I1205 20:20:22.816003  301384 addons.go:234] Setting addon ingress-dns=true in "addons-523528"
	I1205 20:20:22.816010  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.816018  301384 addons.go:69] Setting default-storageclass=true in profile "addons-523528"
	I1205 20:20:22.816038  301384 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-523528"
	I1205 20:20:22.816048  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.815918  301384 addons.go:234] Setting addon inspektor-gadget=true in "addons-523528"
	I1205 20:20:22.816158  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.816188  301384 config.go:182] Loaded profile config "addons-523528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:20:22.815931  301384 addons.go:69] Setting registry=true in profile "addons-523528"
	I1205 20:20:22.816245  301384 addons.go:234] Setting addon registry=true in "addons-523528"
	I1205 20:20:22.816288  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.816530  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.816552  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.816576  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.816575  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.816578  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.816613  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.816624  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.815981  301384 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-523528"
	I1205 20:20:22.816661  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.816697  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.816707  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.816019  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.816697  301384 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-523528"
	I1205 20:20:22.816933  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.817110  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.817187  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.816010  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.817327  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.817363  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.815946  301384 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-523528"
	I1205 20:20:22.817606  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.817747  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.817782  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.818008  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.818041  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.815898  301384 addons.go:234] Setting addon yakd=true in "addons-523528"
	I1205 20:20:22.818174  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.815914  301384 addons.go:69] Setting storage-provisioner=true in profile "addons-523528"
	I1205 20:20:22.818426  301384 addons.go:234] Setting addon storage-provisioner=true in "addons-523528"
	I1205 20:20:22.818462  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.818521  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.818583  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.815932  301384 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-523528"
	I1205 20:20:22.818815  301384 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-523528"
	I1205 20:20:22.818848  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.818853  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.818886  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.815947  301384 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-523528"
	I1205 20:20:22.815962  301384 addons.go:234] Setting addon metrics-server=true in "addons-523528"
	I1205 20:20:22.819464  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.819520  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.819555  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.819672  301384 out.go:177] * Verifying Kubernetes components...
	I1205 20:20:22.815971  301384 addons.go:69] Setting ingress=true in profile "addons-523528"
	I1205 20:20:22.819861  301384 addons.go:234] Setting addon ingress=true in "addons-523528"
	I1205 20:20:22.819914  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.821406  301384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:20:22.816653  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.821553  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.839165  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39241
	I1205 20:20:22.839237  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34873
	I1205 20:20:22.839807  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.839895  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.840418  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.840439  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.840523  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.840540  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.840982  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.841631  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.841681  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.845241  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43389
	I1205 20:20:22.845318  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35685
	I1205 20:20:22.845355  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.845406  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36573
	I1205 20:20:22.845620  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.847738  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.852654  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33317
	I1205 20:20:22.854304  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40501
	I1205 20:20:22.854431  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.854484  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.855142  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.855195  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.855257  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.855270  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.855289  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.855311  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.855376  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.855380  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45975
	I1205 20:20:22.855803  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.855841  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.856146  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.856268  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.856357  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.856357  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.856416  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.856431  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.856421  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.856564  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.856584  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.856881  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.856900  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.857036  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.857064  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.857076  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.857109  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.857142  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.857261  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.857576  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.857597  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.857583  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.857663  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.857777  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.857803  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.857973  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.858024  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.858975  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.859013  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.859080  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.880075  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43821
	I1205 20:20:22.880778  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.881522  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.881557  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.882029  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.882640  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.882698  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.893034  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39861
	I1205 20:20:22.893560  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.894207  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.894241  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.894709  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.894896  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.904447  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40583
	I1205 20:20:22.906420  301384 addons.go:234] Setting addon default-storageclass=true in "addons-523528"
	I1205 20:20:22.906479  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.906949  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.906993  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.908485  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38287
	I1205 20:20:22.909441  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.909486  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.910323  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.910373  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.911668  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36137
	I1205 20:20:22.911791  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44585
	I1205 20:20:22.912028  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44197
	I1205 20:20:22.912107  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41857
	I1205 20:20:22.912186  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42413
	I1205 20:20:22.912307  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45779
	I1205 20:20:22.912311  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.912941  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.913078  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.913140  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.913161  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.913161  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.913250  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.913491  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.913510  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.914155  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.914166  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.914181  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.914185  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.914213  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.914326  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.914359  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.914380  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.914332  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.914396  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.914911  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.914929  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.914988  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.915036  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.915088  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.915341  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.915390  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.915724  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I1205 20:20:22.915861  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.916264  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38735
	I1205 20:20:22.916603  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.916619  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.916660  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.916699  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.916873  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.916912  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.917247  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.917522  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.917612  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.917674  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.917687  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.918788  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.918865  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.918878  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.918921  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.918938  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.919260  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.919399  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.919450  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.919476  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.920224  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.920294  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.920580  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.920616  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.920628  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.920752  301384 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1205 20:20:22.922080  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.922191  301384 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1205 20:20:22.922211  301384 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1205 20:20:22.922246  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:22.922928  301384 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1205 20:20:22.923256  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.923519  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.923813  301384 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-523528"
	I1205 20:20:22.923866  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.924319  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.924389  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.924912  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.924982  301384 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 20:20:22.924998  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1205 20:20:22.925042  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:22.926241  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.926311  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.926622  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.926704  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.928290  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:22.928321  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.928422  301384 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1205 20:20:22.928623  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:22.928854  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:22.929055  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:22.929248  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:22.929953  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.930701  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:22.930735  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.930998  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:22.931144  301384 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1205 20:20:22.931241  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:22.931460  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:22.931675  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:22.933852  301384 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1205 20:20:22.935120  301384 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1205 20:20:22.936364  301384 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1205 20:20:22.937746  301384 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1205 20:20:22.938685  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42509
	I1205 20:20:22.939274  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.939908  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.939940  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.940139  301384 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1205 20:20:22.940393  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.940604  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.940690  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I1205 20:20:22.941085  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.942096  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.942125  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.942436  301384 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1205 20:20:22.942634  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.942859  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.943208  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.943447  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:22.943461  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:22.944050  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:22.944063  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:22.944076  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:22.944086  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:22.944096  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:22.944126  301384 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1205 20:20:22.944139  301384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1205 20:20:22.944158  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:22.944332  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:22.944367  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	W1205 20:20:22.944493  301384 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1205 20:20:22.945295  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.946808  301384 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1205 20:20:22.948052  301384 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1205 20:20:22.948072  301384 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1205 20:20:22.948105  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:22.949939  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.950440  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:22.950465  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.950713  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:22.950917  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:22.951107  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:22.951296  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:22.953217  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.953671  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:22.953694  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.954029  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:22.954268  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:22.954469  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:22.954668  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:22.962077  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37371
	I1205 20:20:22.962974  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.963669  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.963692  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.964153  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.964398  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.965563  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46489
	I1205 20:20:22.965820  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35847
	I1205 20:20:22.966471  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.966758  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.966872  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.967791  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.967811  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.968143  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.968669  301384 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1205 20:20:22.968731  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.968754  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.969073  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.969092  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.969172  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40223
	I1205 20:20:22.969499  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.969751  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.969821  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41731
	I1205 20:20:22.969886  301384 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1205 20:20:22.969939  301384 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1205 20:20:22.969951  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.969965  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:22.970338  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.970985  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.971005  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.971093  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39991
	I1205 20:20:22.971169  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.971185  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.971591  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.971667  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.971946  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.972173  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.972191  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.972373  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.972424  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.972508  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.972580  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.972925  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.973493  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.973651  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.974640  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:22.974664  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.974874  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:22.975254  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:22.975351  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.975402  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.975440  301384 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1205 20:20:22.975592  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:22.975905  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:22.977401  301384 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1205 20:20:22.977458  301384 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1205 20:20:22.977594  301384 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 20:20:22.977609  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1205 20:20:22.977628  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:22.979848  301384 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1205 20:20:22.979866  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1205 20:20:22.979890  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:22.981492  301384 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 20:20:22.982435  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.983007  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:22.983032  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.983254  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:22.983601  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:22.983775  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:22.983975  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:22.984446  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.984471  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:22.984492  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.984751  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:22.984924  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:22.984959  301384 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 20:20:22.985102  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:22.985262  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:22.985572  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40211
	I1205 20:20:22.985942  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.986503  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.986522  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.986913  301384 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 20:20:22.986936  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1205 20:20:22.986953  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:22.987798  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.988017  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.989922  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.991016  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.991503  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:22.991536  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.991702  301384 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1205 20:20:22.991737  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:22.991960  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:22.992134  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:22.992291  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:22.993098  301384 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 20:20:22.993128  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1205 20:20:22.993149  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:22.995857  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39243
	I1205 20:20:22.996413  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.996783  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.996931  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.996945  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.997239  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:22.997259  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.997483  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:22.997546  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.997706  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:22.997764  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.997847  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:22.998002  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:22.999677  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.999923  301384 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:20:22.999940  301384 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:20:22.999957  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:23.002963  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:23.003462  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:23.003492  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:23.003712  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:23.003945  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:23.004128  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34251
	I1205 20:20:23.004376  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:23.004589  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:23.004945  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:23.005700  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:23.005721  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:23.006189  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:23.006390  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:23.006794  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36773
	I1205 20:20:23.007213  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:23.007900  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:23.007919  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:23.008252  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:23.008478  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:23.008904  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:23.010508  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:23.010913  301384 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1205 20:20:23.012705  301384 out.go:177]   - Using image docker.io/registry:2.8.3
	I1205 20:20:23.013096  301384 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:20:23.014027  301384 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1205 20:20:23.014055  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1205 20:20:23.014080  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:23.014888  301384 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:20:23.014910  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:20:23.014934  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:23.018430  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:23.018523  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:23.019046  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:23.019063  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:23.019082  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:23.019087  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:23.019236  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:23.019312  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:23.019583  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:23.019589  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:23.019755  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:23.019762  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:23.019901  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:23.019921  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:23.022886  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34803
	I1205 20:20:23.023392  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:23.023931  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:23.023954  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:23.024253  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:23.024531  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:23.026053  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43755
	I1205 20:20:23.026343  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:23.026685  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:23.027175  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:23.027197  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:23.027572  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:23.027895  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:23.028234  301384 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1205 20:20:23.029574  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:23.029621  301384 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:20:23.029642  301384 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:20:23.029663  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:23.031218  301384 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1205 20:20:23.032470  301384 out.go:177]   - Using image docker.io/busybox:stable
	I1205 20:20:23.032873  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:23.033319  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:23.033353  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:23.033463  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:23.033659  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:23.033780  301384 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 20:20:23.033803  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1205 20:20:23.033816  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:23.033825  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:23.034004  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	W1205 20:20:23.034783  301384 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48478->192.168.39.217:22: read: connection reset by peer
	I1205 20:20:23.034818  301384 retry.go:31] will retry after 231.002032ms: ssh: handshake failed: read tcp 192.168.39.1:48478->192.168.39.217:22: read: connection reset by peer
	I1205 20:20:23.036751  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:23.037167  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:23.037197  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:23.037364  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:23.037534  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:23.037701  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:23.037837  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:23.265035  301384 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1205 20:20:23.265064  301384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1205 20:20:23.333632  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 20:20:23.336562  301384 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1205 20:20:23.336599  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1205 20:20:23.370849  301384 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1205 20:20:23.370883  301384 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1205 20:20:23.377223  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 20:20:23.408318  301384 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1205 20:20:23.408369  301384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1205 20:20:23.409630  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1205 20:20:23.414987  301384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:20:23.415052  301384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 20:20:23.486741  301384 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1205 20:20:23.486780  301384 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1205 20:20:23.489959  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 20:20:23.492162  301384 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1205 20:20:23.492192  301384 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1205 20:20:23.495117  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 20:20:23.497936  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1205 20:20:23.534803  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:20:23.550646  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 20:20:23.577245  301384 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1205 20:20:23.577287  301384 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1205 20:20:23.587264  301384 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1205 20:20:23.587306  301384 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1205 20:20:23.592626  301384 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1205 20:20:23.592656  301384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1205 20:20:23.609863  301384 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:20:23.609894  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1205 20:20:23.612246  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:20:23.679258  301384 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1205 20:20:23.679289  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1205 20:20:23.742520  301384 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1205 20:20:23.742553  301384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1205 20:20:23.752818  301384 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1205 20:20:23.752852  301384 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1205 20:20:23.763910  301384 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:20:23.763944  301384 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:20:23.801895  301384 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1205 20:20:23.801944  301384 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1205 20:20:23.896594  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1205 20:20:23.925700  301384 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1205 20:20:23.925741  301384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1205 20:20:23.995700  301384 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1205 20:20:23.995743  301384 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1205 20:20:24.101367  301384 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:20:24.101400  301384 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:20:24.164927  301384 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 20:20:24.164967  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1205 20:20:24.173745  301384 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1205 20:20:24.173785  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1205 20:20:24.192180  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 20:20:24.228446  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1205 20:20:24.411356  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:20:24.416054  301384 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1205 20:20:24.416087  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1205 20:20:24.853520  301384 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1205 20:20:24.853549  301384 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1205 20:20:25.047231  301384 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1205 20:20:25.047264  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1205 20:20:25.410167  301384 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1205 20:20:25.410199  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1205 20:20:25.653124  301384 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 20:20:25.653168  301384 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1205 20:20:26.040044  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 20:20:26.580253  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.2465758s)
	I1205 20:20:26.580268  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.203005071s)
	I1205 20:20:26.580318  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:26.580331  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:26.580344  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:26.580361  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:26.580665  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:26.580681  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:26.580785  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:26.580720  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:26.580836  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:26.580847  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:26.580861  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:26.580849  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:26.580903  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:26.580683  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:26.581082  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:26.581093  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:26.582711  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:26.582724  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:26.582745  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:27.658891  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.249208421s)
	I1205 20:20:27.658931  301384 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.243899975s)
	I1205 20:20:27.658970  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:27.658985  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:27.659011  301384 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.243930624s)
	I1205 20:20:27.659044  301384 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1205 20:20:27.659416  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:27.659449  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:27.659458  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:27.659467  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:27.659479  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:27.659799  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:27.659815  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:27.660098  301384 node_ready.go:35] waiting up to 6m0s for node "addons-523528" to be "Ready" ...
	I1205 20:20:27.667124  301384 node_ready.go:49] node "addons-523528" has status "Ready":"True"
	I1205 20:20:27.667154  301384 node_ready.go:38] duration metric: took 7.030782ms for node "addons-523528" to be "Ready" ...
	I1205 20:20:27.667168  301384 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:20:27.747733  301384 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-lqd4k" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:28.172957  301384 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-523528" context rescaled to 1 replicas
	I1205 20:20:29.938833  301384 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1205 20:20:29.938880  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:29.942562  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:29.942666  301384 pod_ready.go:103] pod "amd-gpu-device-plugin-lqd4k" in "kube-system" namespace has status "Ready":"False"
	I1205 20:20:29.943037  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:29.943066  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:29.943285  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:29.943502  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:29.943676  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:29.943841  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:30.492289  301384 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1205 20:20:30.588032  301384 addons.go:234] Setting addon gcp-auth=true in "addons-523528"
	I1205 20:20:30.588107  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:30.588525  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:30.588563  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:30.606388  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33915
	I1205 20:20:30.606974  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:30.607475  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:30.607506  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:30.607873  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:30.608365  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:30.608397  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:30.625629  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42801
	I1205 20:20:30.626322  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:30.626921  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:30.626945  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:30.627428  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:30.627636  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:30.629463  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:30.629730  301384 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1205 20:20:30.629756  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:30.632712  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:30.633148  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:30.633185  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:30.633405  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:30.633586  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:30.633736  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:30.633892  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:31.748453  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.258444545s)
	I1205 20:20:31.748488  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.253338676s)
	I1205 20:20:31.748510  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.748522  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.748533  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.748547  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.748590  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.250619477s)
	I1205 20:20:31.748624  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.213792079s)
	I1205 20:20:31.748632  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.748645  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.748662  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.197989723s)
	I1205 20:20:31.748688  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.748705  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.748650  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.748723  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.748734  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.852113907s)
	I1205 20:20:31.748749  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.748704  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.136436798s)
	I1205 20:20:31.748760  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.748773  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.748781  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.748856  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.55663326s)
	W1205 20:20:31.748907  301384 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 20:20:31.748933  301384 retry.go:31] will retry after 133.460987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 20:20:31.749006  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.520519539s)
	I1205 20:20:31.749037  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.749049  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.749151  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.337757788s)
	I1205 20:20:31.749174  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.749184  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.749345  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.749368  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.749381  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.749389  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.749493  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.749516  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.749522  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.749529  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.749535  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.749618  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.749634  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.749662  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.749667  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.749674  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.749680  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.750001  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.750016  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.750026  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.750034  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.750046  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.750060  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.750080  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.750087  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.750100  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.750106  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.750107  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.750114  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.750117  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.750120  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.750155  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.750170  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.750188  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.750193  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.750200  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.750205  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.750338  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.750366  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.750373  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.750380  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.750386  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.750472  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.750473  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.750484  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.750493  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.750496  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.750499  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.750503  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.750524  301384 addons.go:475] Verifying addon ingress=true in "addons-523528"
	I1205 20:20:31.750014  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.751444  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.751447  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.751466  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.751472  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.751495  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.751502  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.753125  301384 out.go:177] * Verifying ingress addon...
	I1205 20:20:31.753562  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.753596  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.753603  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.753716  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.753727  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.753735  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.753742  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.753837  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.753865  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.753872  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.753925  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.753939  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.754901  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.754935  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.754943  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.754953  301384 addons.go:475] Verifying addon metrics-server=true in "addons-523528"
	I1205 20:20:31.755470  301384 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1205 20:20:31.753952  301384 addons.go:475] Verifying addon registry=true in "addons-523528"
	I1205 20:20:31.756243  301384 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-523528 service yakd-dashboard -n yakd-dashboard
	
	I1205 20:20:31.757231  301384 out.go:177] * Verifying registry addon...
	I1205 20:20:31.759795  301384 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1205 20:20:31.770545  301384 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1205 20:20:31.770572  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:31.783747  301384 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 20:20:31.783774  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:31.784823  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.784841  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.785175  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.785194  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	W1205 20:20:31.785295  301384 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1205 20:20:31.788514  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.788547  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.788844  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.788894  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.788913  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.883226  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 20:20:32.268112  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:32.278150  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:32.284379  301384 pod_ready.go:103] pod "amd-gpu-device-plugin-lqd4k" in "kube-system" namespace has status "Ready":"False"
	I1205 20:20:32.861233  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:32.863528  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:33.279441  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:33.279687  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:33.519151  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.479017617s)
	I1205 20:20:33.519212  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:33.519230  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:33.519255  301384 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.889494058s)
	I1205 20:20:33.519665  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:33.519690  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:33.519700  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:33.519709  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:33.519711  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:33.519995  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:33.520030  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:33.520041  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:33.520073  301384 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-523528"
	I1205 20:20:33.520905  301384 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 20:20:33.521757  301384 out.go:177] * Verifying csi-hostpath-driver addon...
	I1205 20:20:33.523464  301384 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1205 20:20:33.524313  301384 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1205 20:20:33.524768  301384 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1205 20:20:33.524786  301384 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1205 20:20:33.548901  301384 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 20:20:33.548937  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:33.616173  301384 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1205 20:20:33.616204  301384 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1205 20:20:33.721687  301384 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 20:20:33.721714  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1205 20:20:33.767230  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:33.769575  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:33.810018  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 20:20:34.014481  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.131162493s)
	I1205 20:20:34.014565  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:34.014586  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:34.014928  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:34.014951  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:34.014963  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:34.014972  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:34.014985  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:34.015266  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:34.015296  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:34.030753  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:34.254434  301384 pod_ready.go:93] pod "amd-gpu-device-plugin-lqd4k" in "kube-system" namespace has status "Ready":"True"
	I1205 20:20:34.254463  301384 pod_ready.go:82] duration metric: took 6.50668149s for pod "amd-gpu-device-plugin-lqd4k" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:34.254476  301384 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6zvjr" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:34.262618  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:34.265128  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:34.530430  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:34.781930  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:34.782123  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:34.800916  301384 pod_ready.go:98] pod "coredns-7c65d6cfc9-6zvjr" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-05 20:20:34 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-05 20:20:23 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-05 20:20:23 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-05 20:20:23 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-05 20:20:22 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.217 HostIPs:[{IP:192.168.39.217}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-12-05 20:20:23 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-12-05 20:20:28 +0000 UTC,FinishedAt:2024-12-05 20:20:34 +0000 UTC,ContainerID:cri-o://9d674da8c582831cce5163df7e5c092b415123aafe7c624d3fa3ccec406cc83a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://9d674da8c582831cce5163df7e5c092b415123aafe7c624d3fa3ccec406cc83a Started:0xc002b0e1f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002a56920} {Name:kube-api-access-lhh9d MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002a56930}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1205 20:20:34.800950  301384 pod_ready.go:82] duration metric: took 546.466009ms for pod "coredns-7c65d6cfc9-6zvjr" in "kube-system" namespace to be "Ready" ...
	E1205 20:20:34.800966  301384 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-6zvjr" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-05 20:20:34 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-05 20:20:23 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-05 20:20:23 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-05 20:20:23 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-05 20:20:22 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.217 HostIPs:[{IP:192.168.39.217}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-12-05 20:20:23 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-12-05 20:20:28 +0000 UTC,FinishedAt:2024-12-05 20:20:34 +0000 UTC,ContainerID:cri-o://9d674da8c582831cce5163df7e5c092b415123aafe7c624d3fa3ccec406cc83a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://9d674da8c582831cce5163df7e5c092b415123aafe7c624d3fa3ccec406cc83a Started:0xc002b0e1f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002a56920} {Name:kube-api-access-lhh9d MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002a56930}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1205 20:20:34.800979  301384 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gdmlk" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:34.828197  301384 pod_ready.go:93] pod "coredns-7c65d6cfc9-gdmlk" in "kube-system" namespace has status "Ready":"True"
	I1205 20:20:34.828239  301384 pod_ready.go:82] duration metric: took 27.249622ms for pod "coredns-7c65d6cfc9-gdmlk" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:34.828258  301384 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-523528" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:34.845233  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.035154079s)
	I1205 20:20:34.845303  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:34.845346  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:34.845706  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:34.845730  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:34.845745  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:34.845754  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:34.845754  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:34.846046  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:34.846111  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:34.846139  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:34.848495  301384 addons.go:475] Verifying addon gcp-auth=true in "addons-523528"
	I1205 20:20:34.850422  301384 out.go:177] * Verifying gcp-auth addon...
	I1205 20:20:34.852543  301384 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1205 20:20:34.855921  301384 pod_ready.go:93] pod "etcd-addons-523528" in "kube-system" namespace has status "Ready":"True"
	I1205 20:20:34.855958  301384 pod_ready.go:82] duration metric: took 27.691377ms for pod "etcd-addons-523528" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:34.855975  301384 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-523528" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:34.870708  301384 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1205 20:20:34.870742  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:34.898924  301384 pod_ready.go:93] pod "kube-apiserver-addons-523528" in "kube-system" namespace has status "Ready":"True"
	I1205 20:20:34.898969  301384 pod_ready.go:82] duration metric: took 42.984063ms for pod "kube-apiserver-addons-523528" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:34.898988  301384 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-523528" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:35.030372  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:35.053182  301384 pod_ready.go:93] pod "kube-controller-manager-addons-523528" in "kube-system" namespace has status "Ready":"True"
	I1205 20:20:35.053207  301384 pod_ready.go:82] duration metric: took 154.209666ms for pod "kube-controller-manager-addons-523528" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:35.053221  301384 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8xsvp" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:35.261705  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:35.264347  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:35.360120  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:35.451970  301384 pod_ready.go:93] pod "kube-proxy-8xsvp" in "kube-system" namespace has status "Ready":"True"
	I1205 20:20:35.451999  301384 pod_ready.go:82] duration metric: took 398.771201ms for pod "kube-proxy-8xsvp" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:35.452013  301384 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-523528" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:35.529979  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:35.759884  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:35.763522  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:35.852104  301384 pod_ready.go:93] pod "kube-scheduler-addons-523528" in "kube-system" namespace has status "Ready":"True"
	I1205 20:20:35.852138  301384 pod_ready.go:82] duration metric: took 400.115802ms for pod "kube-scheduler-addons-523528" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:35.852152  301384 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-sglbw" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:35.855297  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:36.029561  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:36.260409  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:36.264734  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:36.356965  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:36.528640  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:36.761057  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:36.763513  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:36.857338  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:37.030059  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:37.260256  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:37.263328  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:37.356684  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:37.528487  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:37.760263  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:37.763062  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:37.855818  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:37.858386  301384 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sglbw" in "kube-system" namespace has status "Ready":"False"
	I1205 20:20:38.029924  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:38.260323  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:38.263073  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:38.357689  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:38.529154  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:38.762136  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:38.763959  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:38.856670  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:39.032530  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:39.259806  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:39.263484  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:39.357363  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:39.530057  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:39.760297  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:39.763227  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:39.856740  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:39.859837  301384 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sglbw" in "kube-system" namespace has status "Ready":"False"
	I1205 20:20:40.028786  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:40.259825  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:40.262657  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:40.357076  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:40.529965  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:40.761640  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:40.763310  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:40.866993  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:41.029353  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:41.270034  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:41.270199  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:41.357604  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:41.529306  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:41.760554  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:41.763820  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:41.858383  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:42.028357  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:42.259389  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:42.263572  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:42.361242  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:42.367817  301384 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sglbw" in "kube-system" namespace has status "Ready":"False"
	I1205 20:20:42.528993  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:42.761582  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:42.769290  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:42.856603  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:43.030926  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:43.261784  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:43.264335  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:43.357395  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:43.530218  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:43.761629  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:43.764481  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:43.858158  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:44.029705  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:44.259690  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:44.262996  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:44.356450  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:44.528807  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:44.760384  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:44.763632  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:44.856765  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:44.858659  301384 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sglbw" in "kube-system" namespace has status "Ready":"False"
	I1205 20:20:45.028998  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:45.261107  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:45.263102  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:45.357564  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:45.528997  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:45.765819  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:45.766106  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:45.856267  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:46.030119  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:46.260808  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:46.264086  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:46.356138  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:46.529743  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:46.760677  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:46.763344  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:46.858044  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:46.859794  301384 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sglbw" in "kube-system" namespace has status "Ready":"False"
	I1205 20:20:47.265148  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:47.265235  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:47.270246  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:47.370103  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:47.530949  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:47.764802  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:47.766657  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:47.859913  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:48.029049  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:48.259696  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:48.263791  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:48.356922  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:48.757790  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:48.759868  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:48.763988  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:48.856282  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:49.028784  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:49.260517  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:49.263650  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:49.362308  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:49.364680  301384 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sglbw" in "kube-system" namespace has status "Ready":"False"
	I1205 20:20:49.530645  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:49.760576  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:49.763894  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:49.858976  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:50.030000  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:50.261462  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:50.264696  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:50.356455  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:50.529095  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:50.809464  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:50.810412  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:50.856208  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:51.029176  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:51.259979  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:51.263026  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:51.357017  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:51.529130  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:51.759888  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:51.762886  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:51.856819  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:51.859926  301384 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sglbw" in "kube-system" namespace has status "Ready":"False"
	I1205 20:20:52.028366  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:52.669185  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:52.669734  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:52.670510  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:52.673426  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:52.760082  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:52.763258  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:52.859929  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:53.031079  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:53.260190  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:53.262892  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:53.359838  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:53.529468  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:53.760831  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:53.763099  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:53.856238  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:53.858867  301384 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-sglbw" in "kube-system" namespace has status "Ready":"True"
	I1205 20:20:53.858894  301384 pod_ready.go:82] duration metric: took 18.006733716s for pod "nvidia-device-plugin-daemonset-sglbw" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:53.858913  301384 pod_ready.go:39] duration metric: took 26.191731772s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:20:53.858934  301384 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:20:53.858996  301384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:20:53.876428  301384 api_server.go:72] duration metric: took 31.060707865s to wait for apiserver process to appear ...
	I1205 20:20:53.876459  301384 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:20:53.876486  301384 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I1205 20:20:53.882114  301384 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I1205 20:20:53.883268  301384 api_server.go:141] control plane version: v1.31.2
	I1205 20:20:53.883317  301384 api_server.go:131] duration metric: took 6.851237ms to wait for apiserver health ...
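	(The healthz probe logged above is a plain HTTPS GET against the apiserver endpoint; roughly the same check can be repeated from the host with curl. The certificate paths below are the usual minikube defaults for this profile, assumed here rather than taken from the log; the endpoint is the one shown above.)

		curl --cacert ~/.minikube/ca.crt --cert ~/.minikube/profiles/addons-523528/client.crt --key ~/.minikube/profiles/addons-523528/client.key https://192.168.39.217:8443/healthz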
	I1205 20:20:53.883327  301384 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:20:53.891447  301384 system_pods.go:59] 18 kube-system pods found
	I1205 20:20:53.891485  301384 system_pods.go:61] "amd-gpu-device-plugin-lqd4k" [f46b3c00-0342-4d3b-9da8-6ee596f1cf6d] Running
	I1205 20:20:53.891490  301384 system_pods.go:61] "coredns-7c65d6cfc9-gdmlk" [35f95488-64f0-48f3-ab99-31fd21a11d75] Running
	I1205 20:20:53.891497  301384 system_pods.go:61] "csi-hostpath-attacher-0" [84675000-bf37-46d9-ab6a-6e1cb4781e25] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 20:20:53.891503  301384 system_pods.go:61] "csi-hostpath-resizer-0" [a18ef2ee-6053-4bee-a9e0-8ed83cc2e964] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1205 20:20:53.891515  301384 system_pods.go:61] "csi-hostpathplugin-nr8m4" [a3e89b1e-6a83-4dd9-a487-29437e9207a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 20:20:53.891521  301384 system_pods.go:61] "etcd-addons-523528" [f39a6fce-120f-4e27-9c83-9449df5e8bb2] Running
	I1205 20:20:53.891525  301384 system_pods.go:61] "kube-apiserver-addons-523528" [fca0c214-9dd9-4258-8c17-24e277f7a7ea] Running
	I1205 20:20:53.891528  301384 system_pods.go:61] "kube-controller-manager-addons-523528" [78d2b799-2b90-4804-a492-db458a02fc3f] Running
	I1205 20:20:53.891532  301384 system_pods.go:61] "kube-ingress-dns-minikube" [bfef9808-b7b4-4319-ad26-b776fb27fc60] Running
	I1205 20:20:53.891536  301384 system_pods.go:61] "kube-proxy-8xsvp" [f3eb3bb7-a01c-4223-8a16-2a0ebe48726e] Running
	I1205 20:20:53.891540  301384 system_pods.go:61] "kube-scheduler-addons-523528" [8ba95eac-83e9-4a8b-bc3a-73ec04e33a78] Running
	I1205 20:20:53.891545  301384 system_pods.go:61] "metrics-server-84c5f94fbc-9sfj2" [4fb71d12-56fb-4616-bee4-29859c9f2a05] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:20:53.891548  301384 system_pods.go:61] "nvidia-device-plugin-daemonset-sglbw" [0360c661-774c-46ac-a3df-fd26eb882587] Running
	I1205 20:20:53.891553  301384 system_pods.go:61] "registry-66c9cd494c-6p9nr" [911c9fc9-5e67-4b4f-846e-2ad1cdc944c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 20:20:53.891563  301384 system_pods.go:61] "registry-proxy-zpfrw" [d071b7d1-01c6-4449-98a5-0e329f71db8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 20:20:53.891569  301384 system_pods.go:61] "snapshot-controller-56fcc65765-6gsm9" [92fa94c9-4e18-4cf8-82d5-9302d0d0ec4d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 20:20:53.891577  301384 system_pods.go:61] "snapshot-controller-56fcc65765-jpbk8" [aa2eda09-0153-4349-8efe-c65537dbe04d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 20:20:53.891581  301384 system_pods.go:61] "storage-provisioner" [6f2b9e6f-6263-4a11-b2bf-725c25ab3f00] Running
	I1205 20:20:53.891590  301384 system_pods.go:74] duration metric: took 8.257905ms to wait for pod list to return data ...
	I1205 20:20:53.891601  301384 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:20:53.894395  301384 default_sa.go:45] found service account: "default"
	I1205 20:20:53.894428  301384 default_sa.go:55] duration metric: took 2.816032ms for default service account to be created ...
	I1205 20:20:53.894441  301384 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:20:53.902455  301384 system_pods.go:86] 18 kube-system pods found
	I1205 20:20:53.902493  301384 system_pods.go:89] "amd-gpu-device-plugin-lqd4k" [f46b3c00-0342-4d3b-9da8-6ee596f1cf6d] Running
	I1205 20:20:53.902502  301384 system_pods.go:89] "coredns-7c65d6cfc9-gdmlk" [35f95488-64f0-48f3-ab99-31fd21a11d75] Running
	I1205 20:20:53.902511  301384 system_pods.go:89] "csi-hostpath-attacher-0" [84675000-bf37-46d9-ab6a-6e1cb4781e25] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 20:20:53.902518  301384 system_pods.go:89] "csi-hostpath-resizer-0" [a18ef2ee-6053-4bee-a9e0-8ed83cc2e964] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1205 20:20:53.902525  301384 system_pods.go:89] "csi-hostpathplugin-nr8m4" [a3e89b1e-6a83-4dd9-a487-29437e9207a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 20:20:53.902530  301384 system_pods.go:89] "etcd-addons-523528" [f39a6fce-120f-4e27-9c83-9449df5e8bb2] Running
	I1205 20:20:53.902535  301384 system_pods.go:89] "kube-apiserver-addons-523528" [fca0c214-9dd9-4258-8c17-24e277f7a7ea] Running
	I1205 20:20:53.902539  301384 system_pods.go:89] "kube-controller-manager-addons-523528" [78d2b799-2b90-4804-a492-db458a02fc3f] Running
	I1205 20:20:53.902549  301384 system_pods.go:89] "kube-ingress-dns-minikube" [bfef9808-b7b4-4319-ad26-b776fb27fc60] Running
	I1205 20:20:53.902553  301384 system_pods.go:89] "kube-proxy-8xsvp" [f3eb3bb7-a01c-4223-8a16-2a0ebe48726e] Running
	I1205 20:20:53.902557  301384 system_pods.go:89] "kube-scheduler-addons-523528" [8ba95eac-83e9-4a8b-bc3a-73ec04e33a78] Running
	I1205 20:20:53.902562  301384 system_pods.go:89] "metrics-server-84c5f94fbc-9sfj2" [4fb71d12-56fb-4616-bee4-29859c9f2a05] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:20:53.902567  301384 system_pods.go:89] "nvidia-device-plugin-daemonset-sglbw" [0360c661-774c-46ac-a3df-fd26eb882587] Running
	I1205 20:20:53.902572  301384 system_pods.go:89] "registry-66c9cd494c-6p9nr" [911c9fc9-5e67-4b4f-846e-2ad1cdc944c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 20:20:53.902577  301384 system_pods.go:89] "registry-proxy-zpfrw" [d071b7d1-01c6-4449-98a5-0e329f71db8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 20:20:53.902583  301384 system_pods.go:89] "snapshot-controller-56fcc65765-6gsm9" [92fa94c9-4e18-4cf8-82d5-9302d0d0ec4d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 20:20:53.902589  301384 system_pods.go:89] "snapshot-controller-56fcc65765-jpbk8" [aa2eda09-0153-4349-8efe-c65537dbe04d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 20:20:53.902593  301384 system_pods.go:89] "storage-provisioner" [6f2b9e6f-6263-4a11-b2bf-725c25ab3f00] Running
	I1205 20:20:53.902602  301384 system_pods.go:126] duration metric: took 8.154884ms to wait for k8s-apps to be running ...
	I1205 20:20:53.902613  301384 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:20:53.902663  301384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:20:53.918116  301384 system_svc.go:56] duration metric: took 15.490461ms WaitForService to wait for kubelet
	I1205 20:20:53.918150  301384 kubeadm.go:582] duration metric: took 31.102437332s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:20:53.918172  301384 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:20:53.921623  301384 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:20:53.921681  301384 node_conditions.go:123] node cpu capacity is 2
	I1205 20:20:53.921700  301384 node_conditions.go:105] duration metric: took 3.522361ms to run NodePressure ...
	I1205 20:20:53.921718  301384 start.go:241] waiting for startup goroutines ...
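	(From here the log settles into the kapi.go polling loops, one per addon, each waiting for the pods behind a label selector to report Ready. Roughly the same readiness checks can be reproduced by hand with kubectl wait, using the selectors and namespaces that appear in the kapi.go:75 lines above; the timeout below is an arbitrary choice, not the value the test uses.)

		kubectl --context addons-523528 -n ingress-nginx wait --for=condition=Ready pod -l app.kubernetes.io/name=ingress-nginx --timeout=5m
		kubectl --context addons-523528 -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=registry --timeout=5m
		kubectl --context addons-523528 -n kube-system wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=csi-hostpath-driver --timeout=5m
		kubectl --context addons-523528 -n gcp-auth wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=gcp-auth --timeout=5m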
	I1205 20:20:54.029644  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:54.260164  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:54.263061  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:54.355807  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:54.529526  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:54.760060  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:54.763186  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:54.856207  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:55.029331  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:55.259480  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:55.264036  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:55.357437  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:55.529543  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:55.759924  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:55.762657  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:55.856457  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:56.030131  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:56.260146  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:56.263422  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:56.356287  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:56.529047  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:56.759708  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:56.762874  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:56.856782  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:57.029562  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:57.259888  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:57.263835  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:57.358414  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:57.530883  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:57.762236  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:57.764838  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:57.855988  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:58.029412  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:58.260710  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:58.264157  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:58.356624  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:58.529983  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:58.768139  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:58.768313  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:58.856647  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:59.029699  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:59.260905  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:59.263598  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:59.360181  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:59.529437  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:59.761065  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:59.763389  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:59.857314  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:00.029770  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:00.260470  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:00.263697  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:00.356605  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:00.529444  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:00.759979  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:00.763229  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:00.856099  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:01.029561  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:01.260400  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:01.263228  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:01.360733  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:01.528709  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:01.761950  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:01.764628  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:01.857430  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:02.030671  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:02.259928  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:02.263273  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:02.356232  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:02.529706  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:02.760962  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:02.764091  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:02.856511  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:03.029702  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:03.260604  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:03.263106  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:03.356876  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:03.529981  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:03.760484  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:03.763922  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:03.857201  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:04.029850  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:04.259913  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:04.263355  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:04.356250  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:04.529700  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:04.761088  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:04.763286  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:04.856727  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:05.028943  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:05.262932  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:05.264793  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:05.356739  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:05.529249  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:05.760609  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:05.764026  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:05.857324  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:06.033205  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:06.259181  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:06.263621  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:06.357191  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:06.529552  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:06.760458  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:06.763571  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:06.856538  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:07.030517  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:07.261446  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:07.263671  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:07.357744  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:07.529562  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:07.760151  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:07.763185  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:07.856561  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:08.029202  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:08.259760  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:08.263166  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:08.356242  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:08.534029  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:08.761228  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:08.763264  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:08.856874  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:09.029133  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:09.261019  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:09.264640  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:09.356411  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:09.530660  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:09.760286  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:09.763275  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:09.856332  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:10.029588  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:10.261501  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:10.264665  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:10.358045  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:10.528893  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:10.761357  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:10.763023  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:10.855875  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:11.030212  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:11.259553  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:11.263097  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:11.356057  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:11.529422  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:11.760425  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:11.763811  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:11.856560  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:12.030437  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:12.260390  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:12.263965  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:12.356707  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:12.529028  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:12.759528  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:12.764143  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:12.856698  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:13.029405  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:13.260505  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:13.263736  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:13.356584  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:13.709602  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:13.761367  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:13.765119  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:13.856941  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:14.029549  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:14.260350  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:14.263692  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:14.356471  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:14.529750  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:14.760298  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:14.763017  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:14.855672  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:15.054838  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:15.261839  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:15.264456  301384 kapi.go:107] duration metric: took 43.504658099s to wait for kubernetes.io/minikube-addons=registry ...
	I1205 20:21:15.356144  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:15.529331  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:15.759913  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:15.860349  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:16.030574  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:16.260938  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:16.357545  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:16.530846  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:16.760652  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:16.856373  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:17.030373  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:17.260361  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:17.355938  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:17.531662  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:17.760985  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:17.856033  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:18.029760  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:18.260331  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:18.355561  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:18.529204  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:18.759738  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:18.855767  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:19.029690  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:19.259345  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:19.356812  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:19.529329  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:20.144827  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:20.145458  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:20.145867  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:20.260407  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:20.356816  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:20.529118  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:20.760628  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:20.864461  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:21.029782  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:21.260445  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:21.363347  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:21.529991  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:21.760086  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:21.856874  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:22.029172  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:22.261687  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:22.358570  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:22.528751  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:22.761826  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:22.857545  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:23.032518  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:23.260623  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:23.359845  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:23.534466  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:23.759411  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:23.855676  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:24.028302  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:24.260934  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:24.356477  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:24.530941  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:24.761399  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:24.856614  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:25.030215  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:25.260231  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:25.356752  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:25.529233  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:25.765006  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:25.857337  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:26.033018  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:26.261210  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:26.356811  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:26.532496  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:26.761894  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:26.856627  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:27.029067  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:27.260555  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:27.357239  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:27.530586  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:27.760564  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:27.856761  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:28.033294  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:28.260468  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:28.361650  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:28.532653  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:28.760231  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:28.857048  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:29.029104  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:29.260114  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:29.357028  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:29.530513  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:29.760841  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:29.856132  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:30.029448  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:30.260004  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:30.355775  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:30.528768  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:30.760219  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:30.857044  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:31.033122  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:31.262953  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:31.356540  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:31.530161  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:31.759933  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:31.856331  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:32.053927  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:32.264574  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:32.363823  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:32.529308  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:32.760430  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:32.855606  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:33.029752  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:33.259470  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:33.356738  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:33.530883  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:33.760996  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:33.857040  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:34.029634  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:34.263030  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:34.357009  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:34.529408  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:34.759559  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:34.855690  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:35.028596  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:35.259944  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:35.356638  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:35.528770  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:35.760288  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:35.857327  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:36.032569  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:36.260070  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:36.356931  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:36.528839  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:36.761051  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:36.856803  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:37.028842  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:37.260565  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:37.356671  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:37.534654  301384 kapi.go:107] duration metric: took 1m4.010332572s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1205 20:21:37.761421  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:37.857166  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:38.262785  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:38.356328  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:38.760651  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:38.857475  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:39.260155  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:39.367484  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:39.908404  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:39.908725  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:40.261219  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:40.360465  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:40.760501  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:40.856149  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:41.260334  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:41.356085  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:41.760983  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:41.857127  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:42.260903  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:42.356967  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:42.761358  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:42.860826  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:43.260694  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:43.356224  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:43.762746  301384 kapi.go:107] duration metric: took 1m12.007274266s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1205 20:21:43.859528  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:44.359035  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:44.856330  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:45.356709  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:45.856711  301384 kapi.go:107] duration metric: took 1m11.004161155s to wait for kubernetes.io/minikube-addons=gcp-auth ...
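
	[editor's note] The repeated kapi.go:96 lines above are a label-selector poll loop: minikube lists the pods matching each addon's selector and re-checks on roughly a half-second cadence until they are ready, then logs the "duration metric" line. Below is a minimal client-go sketch of that pattern; the namespace, selector, timeout, and poll interval are illustrative assumptions, not minikube's actual kapi.go implementation.

	// Sketch only: poll pods by label selector until all are Running.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitForPodsBySelector(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(context.TODO(), metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			allRunning := len(pods.Items) > 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					allRunning = false
					// Mirrors the "waiting for pod ..., current state: ..." log lines above.
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
				}
			}
			if allRunning {
				return nil
			}
			time.Sleep(500 * time.Millisecond) // roughly the cadence visible in the timestamps above
		}
		return fmt.Errorf("timed out waiting for pods matching %q", selector)
	}

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		// Selector taken from the log; namespace and timeout are assumptions.
		if err := waitForPodsBySelector(cs, "kube-system", "kubernetes.io/minikube-addons=gcp-auth", 10*time.Minute); err != nil {
			panic(err)
		}
	}
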
	I1205 20:21:45.858506  301384 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-523528 cluster.
	I1205 20:21:45.860004  301384 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1205 20:21:45.861351  301384 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
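
	[editor's note] As the output above states, pods carrying the `gcp-auth-skip-secret` label key are skipped by the gcp-auth credential mount. A hedged client-go sketch of creating such a pod follows; only the label key comes from the log, while the pod name, image, and namespace are hypothetical.

	// Sketch only: create a pod that opts out of the gcp-auth credential mount.
	package main

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		pod := &corev1.Pod{
			ObjectMeta: metav1.ObjectMeta{
				Name: "no-gcp-creds", // hypothetical name
				Labels: map[string]string{
					// Per the message above, this label key tells gcp-auth to leave the pod alone.
					"gcp-auth-skip-secret": "true",
				},
			},
			Spec: corev1.PodSpec{
				Containers: []corev1.Container{{
					Name:    "app",
					Image:   "busybox", // hypothetical image
					Command: []string{"sleep", "3600"},
				}},
			},
		}
		if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}
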
	I1205 20:21:45.862757  301384 out.go:177] * Enabled addons: ingress-dns, amd-gpu-device-plugin, inspektor-gadget, cloud-spanner, storage-provisioner, nvidia-device-plugin, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1205 20:21:45.864013  301384 addons.go:510] duration metric: took 1m23.048296201s for enable addons: enabled=[ingress-dns amd-gpu-device-plugin inspektor-gadget cloud-spanner storage-provisioner nvidia-device-plugin metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1205 20:21:45.864087  301384 start.go:246] waiting for cluster config update ...
	I1205 20:21:45.864113  301384 start.go:255] writing updated cluster config ...
	I1205 20:21:45.864420  301384 ssh_runner.go:195] Run: rm -f paused
	I1205 20:21:45.919061  301384 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:21:45.920777  301384 out.go:177] * Done! kubectl is now configured to use "addons-523528" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.125458052Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9e48e728-414b-4824-85f7-df765572be8e name=/runtime.v1.RuntimeService/Version
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.126487841Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=527d7826-20de-4e29-8c0a-5ec05faaeeef name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.127776116Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430320127743567,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595908,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=527d7826-20de-4e29-8c0a-5ec05faaeeef name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.128302664Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1b7153c5-d5ca-40ed-ac47-b78b6d913b32 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.128364324Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1b7153c5-d5ca-40ed-ac47-b78b6d913b32 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.128709737Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:675ee915a4bfc85afc963e8156d0a4d068ff6887960ecb7df9d05b344b10e750,PodSandboxId:b3375052980f12fe1634dd2f0354c08eb05e9cac559d0113d5460f627c470aa5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733430181181630226,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 37992d0a-3d60-4ceb-a462-c92a92f63360,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:567826f740b5ee62d5188f899459d74f97aec3d530b841fdbf3b6f2ccaa6324b,PodSandboxId:59491a34c5cc8d4b925acf4534c7239073f623ef0952d98f92c645c5e9712ee8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733430111877146952,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cfa655ea-794b-4c47-b060-9aaf959e839a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db211624f4b11328328f61da9e9a5e92352c8b87dbfb0ed31fbd4fe8b0ce3e59,PodSandboxId:002f47f258553dcd316b722f4e52a2b6d72feab68c876b95ce9268ed821356fa,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733430102677346305,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-cwc99,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cb76a1f1-a84f-4fe4-874a-116a294be8d8,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1d0dc2039b30511564c4cf80c93ec9f7daaf9b0b8dc147b966fdbb9e4ca6521c,PodSandboxId:04474daefa52e843802b4cd00f4e2b459f08412e05ed115f45812e39c3549d2b,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1733430083383506585,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2gff4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f8f92c1a-9322-4234-a976-29db667b0322,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465499bcf5573f091de9a68b2ab1aa1ba122170d9c84f3db29542fdc2f6276f9,PodSandboxId:1e3351e4f9141432c65748dfbb8994e2244b9746ed874c99b33c318fa43c9402,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733430082952082267,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-vm87g,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e39eb3a6-034d-44d6-a880-4b5b5a62e8fd,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9ebe589c0b2854fa84fb65906f0deb260a216e5b28a533d8f825f0554830d5,PodSandboxId:7401d4bb9b521622f4455697db4fdfdc24612775773b96777d0b1a1db8398431,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733430070130758768,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-9sfj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb71d12-56fb-4616-bee4-29859c9f2a05,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebc8ffb53db3218d4d39183dbd41a16531e8865cd53d828bc55a9a65aa457c2,PodSandboxId:cf2b0a84bd0b70895b5138e9b1cfbd920a1175e663f7639c297168db716f3aab,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provision
er@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733430068410730042,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-9w5dg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 67bc1df5-2b14-4874-ba56-8cf3a599f3d1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49db24ed41125f67067e34568bce7fdd097ed327887b43910f0029fd0d5d0117,PodSandboxId:8bb347e9109c42edc9dd1b2383de21f7f54d5e6a6b42a3033340ae3d80d16077,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attemp
t:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733430040848603862,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfef9808-b7b4-4319-ad26-b776fb27fc60,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f79e153973b681dc2d9cb74d8bc6cb02cb39acb5004c8eb838a9744dba01edb4,P
odSandboxId:743e9f09175b53d11b4319875153a9cdb44331633b30944cfe48078d54a83626,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733430033026285269,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lqd4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f46b3c00-0342-4d3b-9da8-6ee596f1cf6d,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15fee7cca3939ab2f32dfccfbc
824c4223242541c860fcdb515e1397b8f81676,PodSandboxId:a031650e9ba6969e3d5db4c602e060f790342360f03937d14bdff1091abe2cdf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430029525554547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f2b9e6f-6263-4a11-b2bf-725c25ab3f00,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ba7113ce2d756917b20d04479f17e3cf0c2d
17dee1df17134e3031aad25734,PodSandboxId:560cea4ce979fa604131d7990ed38119aebd1c4a69f29c3c52ac983c97169724,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430027481797422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gdmlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f95488-64f0-48f3-ab99-31fd21a11d75,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828f72fb056dc8937d52e12190d1420a8425139744c68cb3abcf59ea569478f1,PodSandboxId:52d1f5e64034bf24e64b84119e0f74d48f2ea4c86f6ea6603d94051f21372eab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430024810313431,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xsvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3eb3bb7-a01c-4223-8a16-2a0ebe48726e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f971c7ed91d5aae89370cbedd072c2cff4765102eba00408557cb2da44fb8f,PodSandboxId:07109a4c0f1406e58819cf3fc9f22c33929c311caee75abea0b7088996f9b8d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430012858437647,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc604cc9b9baafaf80f6f5ed62cf5e32,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb3908ffdd5169d6aae507f5dc32a282ad251245ec7f6a3d751677c994276a01,PodSandboxId:b50b25de86f7b83c66fcf8d1669361fcdbc4493fc2eb9a979d52c2c7756ece02,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430012876781980,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2944741b4be85bb5a81c2bb9eaf1dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b4009ae66cf12d1b3bfd59c1995d7b0113021ea40e054a5da3dfc44cf2e5e7c,PodSandboxId:db1f5053a207c54d9add1891321a10e25b1be0d369ed6c9e1121327dd78f55bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430012849459914,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7057127c531b22b7ea450e76f9d507df,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a19ace4d51866c451612ddffc3a6b8ebc2545d5f95a99d149c6668e91e81dcc,PodSandboxId:3e634d2057ad72ab00442bb8067deede175795590c61e2e29660a7de660d00a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430012808003204,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdf56ee58a34f9031aece6babca8cf3c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuberne
tes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1b7153c5-d5ca-40ed-ac47-b78b6d913b32 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.135349143Z" level=debug msg="Content-Type from manifest GET is \"application/vnd.docker.distribution.manifest.v2+json\"" file="docker/docker_client.go:964"
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.135531254Z" level=debug msg="reference \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]docker.io/kicbase/echo-server:1.0\" does not resolve to an image ID" file="storage/storage_reference.go:149"
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.136525963Z" level=debug msg="Using registries.d directory /etc/containers/registries.d" file="docker/registries_d.go:80"
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.136599900Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\"" file="docker/docker_image_src.go:87"
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.136641590Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /run/containers/0/auth.json" file="config/config.go:846"
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.136671219Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.config/containers/auth.json" file="config/config.go:846"
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.136698430Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.docker/config.json" file="config/config.go:846"
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.136726722Z" level=debug msg="No credentials matching docker.io/kicbase/echo-server found in /root/.dockercfg" file="config/config.go:846"
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.136749715Z" level=debug msg="No credentials for docker.io/kicbase/echo-server found" file="config/config.go:272"
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.136780553Z" level=debug msg=" No signature storage configuration found for docker.io/kicbase/echo-server:1.0, using built-in default file:///var/lib/containers/sigstore" file="docker/registries_d.go:176"
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.136814887Z" level=debug msg="Looking for TLS certificates and private keys in /etc/docker/certs.d/docker.io" file="tlsclientconfig/tlsclientconfig.go:20"
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.136867872Z" level=debug msg="GET https://registry-1.docker.io/v2/" file="docker/docker_client.go:631"
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.162497345Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d57177cc-a73d-49e7-8708-8ceeedb6d9cf name=/runtime.v1.RuntimeService/Version
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.162573276Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d57177cc-a73d-49e7-8708-8ceeedb6d9cf name=/runtime.v1.RuntimeService/Version
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.163571413Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b649ae65-9fca-4190-80b4-5678c3c362c2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.165200331Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430320165132455,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595908,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b649ae65-9fca-4190-80b4-5678c3c362c2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.165828004Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=85f601f6-c374-48cf-9ce4-c18831ac1f60 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.165914306Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=85f601f6-c374-48cf-9ce4-c18831ac1f60 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:25:20 addons-523528 crio[666]: time="2024-12-05 20:25:20.166287231Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:675ee915a4bfc85afc963e8156d0a4d068ff6887960ecb7df9d05b344b10e750,PodSandboxId:b3375052980f12fe1634dd2f0354c08eb05e9cac559d0113d5460f627c470aa5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733430181181630226,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 37992d0a-3d60-4ceb-a462-c92a92f63360,},Annotations:map[string]string{io.kubernetes.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],
io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:567826f740b5ee62d5188f899459d74f97aec3d530b841fdbf3b6f2ccaa6324b,PodSandboxId:59491a34c5cc8d4b925acf4534c7239073f623ef0952d98f92c645c5e9712ee8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733430111877146952,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cfa655ea-794b-4c47-b060-9aaf959e839a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.con
tainer.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db211624f4b11328328f61da9e9a5e92352c8b87dbfb0ed31fbd4fe8b0ce3e59,PodSandboxId:002f47f258553dcd316b722f4e52a2b6d72feab68c876b95ce9268ed821356fa,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ee44bc2368033ba6147d84fb376356de1e40e4778c20dd8b4817bd1636121ddf,State:CONTAINER_RUNNING,CreatedAt:1733430102677346305,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-5f85ff4588-cwc99,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: cb76a1f1-a84f-4fe4-874a-116a294be8d8,},Annotations:map[string]string{io.kubernetes.
container.hash: 4e8eee94,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:1d0dc2039b30511564c4cf80c93ec9f7daaf9b0b8dc147b966fdbb9e4ca6521c,PodSandboxId:04474daefa52e843802b4cd00f4e2b459f08412e05ed115f45812e39c3549d2b,Metadata:&ContainerMetadata{Name:patch,Attempt:1,},Image:&ImageSpec{Image:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb,St
ate:CONTAINER_EXITED,CreatedAt:1733430083383506585,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-2gff4,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f8f92c1a-9322-4234-a976-29db667b0322,},Annotations:map[string]string{io.kubernetes.container.hash: 3f610496,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:465499bcf5573f091de9a68b2ab1aa1ba122170d9c84f3db29542fdc2f6276f9,PodSandboxId:1e3351e4f9141432c65748dfbb8994e2244b9746ed874c99b33c318fa43c9402,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a62eeff05ba5194cac31b3f61806552
90afa3ed3f2573bcd2aaff319416951eb,State:CONTAINER_EXITED,CreatedAt:1733430082952082267,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-vm87g,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: e39eb3a6-034d-44d6-a880-4b5b5a62e8fd,},Annotations:map[string]string{io.kubernetes.container.hash: fe18a2bf,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9ebe589c0b2854fa84fb65906f0deb260a216e5b28a533d8f825f0554830d5,PodSandboxId:7401d4bb9b521622f4455697db4fdfdc24612775773b96777d0b1a1db8398431,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,}
,ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733430070130758768,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-9sfj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb71d12-56fb-4616-bee4-29859c9f2a05,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebc8ffb53db3218d4d39183dbd41a16531e8865cd53d828bc55a9a65aa457c2,PodSandboxId:cf2b0a84bd0b70895b5138e9b1cfbd920a1175e663f7639c297168db716f3aab,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provision
er@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733430068410730042,Labels:map[string]string{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-9w5dg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 67bc1df5-2b14-4874-ba56-8cf3a599f3d1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:49db24ed41125f67067e34568bce7fdd097ed327887b43910f0029fd0d5d0117,PodSandboxId:8bb347e9109c42edc9dd1b2383de21f7f54d5e6a6b42a3033340ae3d80d16077,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attemp
t:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:30dd67412fdea30479de8d5d9bf760870308d24d911c59ea1f1757f04c33cc29,State:CONTAINER_RUNNING,CreatedAt:1733430040848603862,Labels:map[string]string{io.kubernetes.container.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bfef9808-b7b4-4319-ad26-b776fb27fc60,},Annotations:map[string]string{io.kubernetes.container.hash: 8778d474,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f79e153973b681dc2d9cb74d8bc6cb02cb39acb5004c8eb838a9744dba01edb4,P
odSandboxId:743e9f09175b53d11b4319875153a9cdb44331633b30944cfe48078d54a83626,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_RUNNING,CreatedAt:1733430033026285269,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lqd4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f46b3c00-0342-4d3b-9da8-6ee596f1cf6d,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15fee7cca3939ab2f32dfccfbc
824c4223242541c860fcdb515e1397b8f81676,PodSandboxId:a031650e9ba6969e3d5db4c602e060f790342360f03937d14bdff1091abe2cdf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430029525554547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f2b9e6f-6263-4a11-b2bf-725c25ab3f00,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ba7113ce2d756917b20d04479f17e3cf0c2d
17dee1df17134e3031aad25734,PodSandboxId:560cea4ce979fa604131d7990ed38119aebd1c4a69f29c3c52ac983c97169724,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430027481797422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gdmlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f95488-64f0-48f3-ab99-31fd21a11d75,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828f72fb056dc8937d52e12190d1420a8425139744c68cb3abcf59ea569478f1,PodSandboxId:52d1f5e64034bf24e64b84119e0f74d48f2ea4c86f6ea6603d94051f21372eab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430024810313431,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xsvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3eb3bb7-a01c-4223-8a16-2a0ebe48726e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/terminatio
n-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f971c7ed91d5aae89370cbedd072c2cff4765102eba00408557cb2da44fb8f,PodSandboxId:07109a4c0f1406e58819cf3fc9f22c33929c311caee75abea0b7088996f9b8d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430012858437647,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc604cc9b9baafaf80f6f5ed62cf5e32,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernet
es.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb3908ffdd5169d6aae507f5dc32a282ad251245ec7f6a3d751677c994276a01,PodSandboxId:b50b25de86f7b83c66fcf8d1669361fcdbc4493fc2eb9a979d52c2c7756ece02,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430012876781980,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2944741b4be85bb5a81c2bb9eaf1dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,i
o.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b4009ae66cf12d1b3bfd59c1995d7b0113021ea40e054a5da3dfc44cf2e5e7c,PodSandboxId:db1f5053a207c54d9add1891321a10e25b1be0d369ed6c9e1121327dd78f55bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430012849459914,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7057127c531b22b7ea450e76f9d507df,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.cont
ainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a19ace4d51866c451612ddffc3a6b8ebc2545d5f95a99d149c6668e91e81dcc,PodSandboxId:3e634d2057ad72ab00442bb8067deede175795590c61e2e29660a7de660d00a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430012808003204,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdf56ee58a34f9031aece6babca8cf3c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kuberne
tes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=85f601f6-c374-48cf-9ce4-c18831ac1f60 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	675ee915a4bfc       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                              2 minutes ago       Running             nginx                     0                   b3375052980f1       nginx
	567826f740b5e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          3 minutes ago       Running             busybox                   0                   59491a34c5cc8       busybox
	db211624f4b11       registry.k8s.io/ingress-nginx/controller@sha256:62b61c42ec8dd877b85c0aa24c4744ce44d274bc16cc5d2364edfe67964ba55b             3 minutes ago       Running             controller                0                   002f47f258553       ingress-nginx-controller-5f85ff4588-cwc99
	1d0dc2039b305       a62eeff05ba5194cac31b3f6180655290afa3ed3f2573bcd2aaff319416951eb                                                             3 minutes ago       Exited              patch                     1                   04474daefa52e       ingress-nginx-admission-patch-2gff4
	465499bcf5573       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:a9f03b34a3cbfbb26d103a14046ab2c5130a80c3d69d526ff8063d2b37b9fd3f   3 minutes ago       Exited              create                    0                   1e3351e4f9141       ingress-nginx-admission-create-vm87g
	9f9ebe589c0b2       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        4 minutes ago       Running             metrics-server            0                   7401d4bb9b521       metrics-server-84c5f94fbc-9sfj2
	4ebc8ffb53db3       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             4 minutes ago       Running             local-path-provisioner    0                   cf2b0a84bd0b7       local-path-provisioner-86d989889c-9w5dg
	49db24ed41125       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             4 minutes ago       Running             minikube-ingress-dns      0                   8bb347e9109c4       kube-ingress-dns-minikube
	f79e153973b68       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                     4 minutes ago       Running             amd-gpu-device-plugin     0                   743e9f09175b5       amd-gpu-device-plugin-lqd4k
	15fee7cca3939       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   a031650e9ba69       storage-provisioner
	50ba7113ce2d7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             4 minutes ago       Running             coredns                   0                   560cea4ce979f       coredns-7c65d6cfc9-gdmlk
	828f72fb056dc       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                             4 minutes ago       Running             kube-proxy                0                   52d1f5e64034b       kube-proxy-8xsvp
	eb3908ffdd516       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                             5 minutes ago       Running             kube-controller-manager   0                   b50b25de86f7b       kube-controller-manager-addons-523528
	11f971c7ed91d       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                             5 minutes ago       Running             kube-scheduler            0                   07109a4c0f140       kube-scheduler-addons-523528
	4b4009ae66cf1       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                             5 minutes ago       Running             kube-apiserver            0                   db1f5053a207c       kube-apiserver-addons-523528
	5a19ace4d5186       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             5 minutes ago       Running             etcd                      0                   3e634d2057ad7       etcd-addons-523528
	
	
	==> coredns [50ba7113ce2d756917b20d04479f17e3cf0c2d17dee1df17134e3031aad25734] <==
	[INFO] 10.244.0.8:51032 - 16133 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 109 false 1232" NXDOMAIN qr,aa,rd 179 0.000124468s
	[INFO] 10.244.0.8:51032 - 50526 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000142161s
	[INFO] 10.244.0.8:51032 - 36965 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 85 false 1232" NXDOMAIN qr,aa,rd 167 0.000467534s
	[INFO] 10.244.0.8:51032 - 7151 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.00013111s
	[INFO] 10.244.0.8:51032 - 3550 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 81 false 1232" NXDOMAIN qr,aa,rd 163 0.000073161s
	[INFO] 10.244.0.8:51032 - 51237 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000095952s
	[INFO] 10.244.0.8:51032 - 41345 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000075104s
	[INFO] 10.244.0.8:54793 - 40182 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000130037s
	[INFO] 10.244.0.8:54793 - 39895 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000036384s
	[INFO] 10.244.0.8:42760 - 437 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000061352s
	[INFO] 10.244.0.8:42760 - 161 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000145013s
	[INFO] 10.244.0.8:43748 - 37781 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00010146s
	[INFO] 10.244.0.8:43748 - 37540 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000116377s
	[INFO] 10.244.0.8:50269 - 53608 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00008227s
	[INFO] 10.244.0.8:50269 - 53799 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000164186s
	[INFO] 10.244.0.23:42353 - 6139 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000320244s
	[INFO] 10.244.0.23:56750 - 19909 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000133373s
	[INFO] 10.244.0.23:45651 - 38067 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000121493s
	[INFO] 10.244.0.23:37801 - 40855 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000059558s
	[INFO] 10.244.0.23:51887 - 50259 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000095321s
	[INFO] 10.244.0.23:59778 - 6547 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000059274s
	[INFO] 10.244.0.23:58229 - 7943 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.00125471s
	[INFO] 10.244.0.23:46215 - 55024 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001341824s
	[INFO] 10.244.0.26:48073 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00035668s
	[INFO] 10.244.0.26:44843 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000213795s
	
	
	==> describe nodes <==
	Name:               addons-523528
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-523528
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=addons-523528
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T20_20_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-523528
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:20:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-523528
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:25:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:23:21 +0000   Thu, 05 Dec 2024 20:20:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:23:21 +0000   Thu, 05 Dec 2024 20:20:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:23:21 +0000   Thu, 05 Dec 2024 20:20:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:23:21 +0000   Thu, 05 Dec 2024 20:20:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    addons-523528
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dab8c2ee7ea4d5285a568609f97c654
	  System UUID:                0dab8c2e-e7ea-4d52-85a5-68609f97c654
	  Boot ID:                    149d5c20-f5be-44d7-ae12-e0ccca1b452d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m32s
	  default                     hello-world-app-55bf9c44b4-h8h8x             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-cwc99    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         4m49s
	  kube-system                 amd-gpu-device-plugin-lqd4k                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 coredns-7c65d6cfc9-gdmlk                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m58s
	  kube-system                 etcd-addons-523528                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         5m3s
	  kube-system                 kube-apiserver-addons-523528                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-controller-manager-addons-523528        200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m3s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-proxy-8xsvp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-scheduler-addons-523528                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 metrics-server-84c5f94fbc-9sfj2              100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         4m52s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m53s
	  local-path-storage          local-path-provisioner-86d989889c-9w5dg      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             460Mi (12%)  170Mi (4%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m53s  kube-proxy       
	  Normal  Starting                 5m3s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  5m3s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m3s   kubelet          Node addons-523528 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m3s   kubelet          Node addons-523528 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m3s   kubelet          Node addons-523528 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m2s   kubelet          Node addons-523528 status is now: NodeReady
	  Normal  RegisteredNode           4m59s  node-controller  Node addons-523528 event: Registered Node addons-523528 in Controller
	
	
	==> dmesg <==
	[  +5.989552] systemd-fstab-generator[1197]: Ignoring "noauto" option for root device
	[  +0.080552] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.286069] systemd-fstab-generator[1328]: Ignoring "noauto" option for root device
	[  +0.145916] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.003237] kauditd_printk_skb: 114 callbacks suppressed
	[  +5.010823] kauditd_printk_skb: 125 callbacks suppressed
	[  +7.768168] kauditd_printk_skb: 93 callbacks suppressed
	[Dec 5 20:21] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.119294] kauditd_printk_skb: 32 callbacks suppressed
	[  +9.756877] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.340119] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.418979] kauditd_printk_skb: 34 callbacks suppressed
	[  +7.547031] kauditd_printk_skb: 24 callbacks suppressed
	[  +6.193425] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.654015] kauditd_printk_skb: 9 callbacks suppressed
	[Dec 5 20:22] kauditd_printk_skb: 2 callbacks suppressed
	[ +16.666791] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.109732] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.088388] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.290445] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.244076] kauditd_printk_skb: 29 callbacks suppressed
	[Dec 5 20:23] kauditd_printk_skb: 47 callbacks suppressed
	[  +8.736961] kauditd_printk_skb: 6 callbacks suppressed
	[ +19.460946] kauditd_printk_skb: 15 callbacks suppressed
	[Dec 5 20:25] kauditd_printk_skb: 49 callbacks suppressed
	
	
	==> etcd [5a19ace4d51866c451612ddffc3a6b8ebc2545d5f95a99d149c6668e91e81dcc] <==
	{"level":"info","ts":"2024-12-05T20:21:39.895706Z","caller":"traceutil/trace.go:171","msg":"trace[393603296] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1086; }","duration":"147.069582ms","start":"2024-12-05T20:21:39.748629Z","end":"2024-12-05T20:21:39.895698Z","steps":["trace[393603296] 'agreement among raft nodes before linearized reading'  (duration: 147.037287ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:22:07.673418Z","caller":"traceutil/trace.go:171","msg":"trace[1542733329] linearizableReadLoop","detail":"{readStateIndex:1283; appliedIndex:1282; }","duration":"241.380976ms","start":"2024-12-05T20:22:07.432024Z","end":"2024-12-05T20:22:07.673405Z","steps":["trace[1542733329] 'read index received'  (duration: 241.255748ms)","trace[1542733329] 'applied index is now lower than readState.Index'  (duration: 124.778µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-05T20:22:07.673688Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.071748ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"warn","ts":"2024-12-05T20:22:07.673728Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.709041ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gcp-auth\" ","response":"range_response_count:1 size:2279"}
	{"level":"info","ts":"2024-12-05T20:22:07.673775Z","caller":"traceutil/trace.go:171","msg":"trace[1624501571] range","detail":"{range_begin:/registry/namespaces/gcp-auth; range_end:; response_count:1; response_revision:1241; }","duration":"241.761469ms","start":"2024-12-05T20:22:07.432005Z","end":"2024-12-05T20:22:07.673767Z","steps":["trace[1624501571] 'agreement among raft nodes before linearized reading'  (duration: 241.654879ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:22:07.673750Z","caller":"traceutil/trace.go:171","msg":"trace[882510960] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:1241; }","duration":"196.151172ms","start":"2024-12-05T20:22:07.477590Z","end":"2024-12-05T20:22:07.673741Z","steps":["trace[882510960] 'agreement among raft nodes before linearized reading'  (duration: 196.046676ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:22:07.673915Z","caller":"traceutil/trace.go:171","msg":"trace[1487310495] transaction","detail":"{read_only:false; response_revision:1241; number_of_response:1; }","duration":"265.015945ms","start":"2024-12-05T20:22:07.408893Z","end":"2024-12-05T20:22:07.673908Z","steps":["trace[1487310495] 'process raft request'  (duration: 264.428323ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:22:07.674054Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.579171ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:22:07.674092Z","caller":"traceutil/trace.go:171","msg":"trace[1748478931] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1241; }","duration":"178.619186ms","start":"2024-12-05T20:22:07.495466Z","end":"2024-12-05T20:22:07.674086Z","steps":["trace[1748478931] 'agreement among raft nodes before linearized reading'  (duration: 178.569019ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:23:06.400849Z","caller":"traceutil/trace.go:171","msg":"trace[110845164] transaction","detail":"{read_only:false; response_revision:1576; number_of_response:1; }","duration":"135.278484ms","start":"2024-12-05T20:23:06.265553Z","end":"2024-12-05T20:23:06.400832Z","steps":["trace[110845164] 'process raft request'  (duration: 135.185293ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:23:09.190488Z","caller":"traceutil/trace.go:171","msg":"trace[593260288] linearizableReadLoop","detail":"{readStateIndex:1640; appliedIndex:1639; }","duration":"321.809997ms","start":"2024-12-05T20:23:08.868666Z","end":"2024-12-05T20:23:09.190476Z","steps":["trace[593260288] 'read index received'  (duration: 321.68409ms)","trace[593260288] 'applied index is now lower than readState.Index'  (duration: 125.509µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-05T20:23:09.190846Z","caller":"traceutil/trace.go:171","msg":"trace[1514010002] transaction","detail":"{read_only:false; response_revision:1582; number_of_response:1; }","duration":"437.101484ms","start":"2024-12-05T20:23:08.753731Z","end":"2024-12-05T20:23:09.190833Z","steps":["trace[1514010002] 'process raft request'  (duration: 436.659429ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:23:09.190978Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T20:23:08.753715Z","time spent":"437.197697ms","remote":"127.0.0.1:34012","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1577 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-12-05T20:23:09.191139Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"322.468837ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:23:09.191233Z","caller":"traceutil/trace.go:171","msg":"trace[257734438] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; response_count:0; response_revision:1582; }","duration":"322.560998ms","start":"2024-12-05T20:23:08.868661Z","end":"2024-12-05T20:23:09.191222Z","steps":["trace[257734438] 'agreement among raft nodes before linearized reading'  (duration: 322.451886ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:23:09.191298Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T20:23:08.868629Z","time spent":"322.638454ms","remote":"127.0.0.1:33982","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":29,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true "}
	{"level":"warn","ts":"2024-12-05T20:23:09.191532Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.045882ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-12-05T20:23:09.191923Z","caller":"traceutil/trace.go:171","msg":"trace[607424640] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1582; }","duration":"300.437316ms","start":"2024-12-05T20:23:08.891478Z","end":"2024-12-05T20:23:09.191916Z","steps":["trace[607424640] 'agreement among raft nodes before linearized reading'  (duration: 299.992802ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:23:09.191975Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T20:23:08.891447Z","time spent":"300.520087ms","remote":"127.0.0.1:34126","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":522,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" "}
	{"level":"info","ts":"2024-12-05T20:23:35.604283Z","caller":"traceutil/trace.go:171","msg":"trace[1626204079] linearizableReadLoop","detail":"{readStateIndex:1845; appliedIndex:1844; }","duration":"274.451344ms","start":"2024-12-05T20:23:35.329817Z","end":"2024-12-05T20:23:35.604268Z","steps":["trace[1626204079] 'read index received'  (duration: 274.284281ms)","trace[1626204079] 'applied index is now lower than readState.Index'  (duration: 166.598µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-05T20:23:35.604477Z","caller":"traceutil/trace.go:171","msg":"trace[198632349] transaction","detail":"{read_only:false; response_revision:1777; number_of_response:1; }","duration":"287.487357ms","start":"2024-12-05T20:23:35.316981Z","end":"2024-12-05T20:23:35.604469Z","steps":["trace[198632349] 'process raft request'  (duration: 287.134736ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:23:35.604511Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.58891ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:23:35.605289Z","caller":"traceutil/trace.go:171","msg":"trace[1270703048] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1777; }","duration":"110.358928ms","start":"2024-12-05T20:23:35.494899Z","end":"2024-12-05T20:23:35.605258Z","steps":["trace[1270703048] 'agreement among raft nodes before linearized reading'  (duration: 109.575966ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:23:35.604555Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"274.736987ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/external-provisioner-cfg\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:23:35.605675Z","caller":"traceutil/trace.go:171","msg":"trace[1953490924] range","detail":"{range_begin:/registry/roles/kube-system/external-provisioner-cfg; range_end:; response_count:0; response_revision:1777; }","duration":"275.8544ms","start":"2024-12-05T20:23:35.329812Z","end":"2024-12-05T20:23:35.605666Z","steps":["trace[1953490924] 'agreement among raft nodes before linearized reading'  (duration: 274.72327ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:25:20 up 5 min,  0 users,  load average: 0.43, 0.84, 0.47
	Linux addons-523528 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4b4009ae66cf12d1b3bfd59c1995d7b0113021ea40e054a5da3dfc44cf2e5e7c] <==
	 > logger="UnhandledError"
	E1205 20:22:19.345827       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.114.167:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.114.167:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.114.167:443: connect: connection refused" logger="UnhandledError"
	E1205 20:22:19.350957       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.114.167:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.114.167:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.114.167:443: connect: connection refused" logger="UnhandledError"
	I1205 20:22:19.413227       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1205 20:22:30.247025       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.110.72"}
	I1205 20:22:56.788276       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1205 20:22:56.984468       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.3.228"}
	I1205 20:23:00.470818       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1205 20:23:01.505369       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1205 20:23:16.495663       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1205 20:23:31.928712       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 20:23:31.928772       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 20:23:31.969887       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 20:23:31.969947       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 20:23:31.984730       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 20:23:31.986954       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 20:23:32.025956       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 20:23:32.026023       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E1205 20:23:32.839522       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	W1205 20:23:32.969785       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	E1205 20:23:32.983794       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	W1205 20:23:33.027573       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1205 20:23:33.051517       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E1205 20:23:33.058968       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	I1205 20:25:18.988982       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.0.75"}
	
	
	==> kube-controller-manager [eb3908ffdd5169d6aae507f5dc32a282ad251245ec7f6a3d751677c994276a01] <==
	E1205 20:23:51.521382       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1205 20:23:52.104296       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I1205 20:23:52.104404       1 shared_informer.go:320] Caches are synced for resource quota
	I1205 20:23:52.728641       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I1205 20:23:52.728748       1 shared_informer.go:320] Caches are synced for garbage collector
	W1205 20:24:07.569085       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:24:07.569259       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:24:11.504425       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:24:11.504476       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:24:13.142367       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:24:13.142486       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:24:22.373346       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:24:22.373453       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:24:49.008274       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:24:49.008320       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:24:51.972457       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:24:51.972507       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:24:52.304344       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:24:52.304478       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1205 20:25:18.800979       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="36.873956ms"
	I1205 20:25:18.810935       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.905934ms"
	I1205 20:25:18.811015       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="31.666µs"
	I1205 20:25:18.815921       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="30.366µs"
	W1205 20:25:19.837314       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:25:19.837366       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [828f72fb056dc8937d52e12190d1420a8425139744c68cb3abcf59ea569478f1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 20:20:27.091850       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 20:20:27.236026       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.217"]
	E1205 20:20:27.236136       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 20:20:27.339910       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 20:20:27.339957       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:20:27.339984       1 server_linux.go:169] "Using iptables Proxier"
	I1205 20:20:27.343597       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 20:20:27.343884       1 server.go:483] "Version info" version="v1.31.2"
	I1205 20:20:27.343917       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:20:27.345312       1 config.go:199] "Starting service config controller"
	I1205 20:20:27.345350       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 20:20:27.345473       1 config.go:105] "Starting endpoint slice config controller"
	I1205 20:20:27.345501       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 20:20:27.348927       1 config.go:328] "Starting node config controller"
	I1205 20:20:27.348956       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 20:20:27.453807       1 shared_informer.go:320] Caches are synced for node config
	I1205 20:20:27.453880       1 shared_informer.go:320] Caches are synced for service config
	I1205 20:20:27.453903       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [11f971c7ed91d5aae89370cbedd072c2cff4765102eba00408557cb2da44fb8f] <==
	W1205 20:20:15.073233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:20:15.073266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:20:15.920564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 20:20:15.920654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 20:20:16.011926       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 20:20:16.011957       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:20:16.024618       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 20:20:16.024676       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:20:16.033879       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 20:20:16.033932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:20:16.098249       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 20:20:16.098373       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:20:16.158231       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 20:20:16.158313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:20:16.279480       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 20:20:16.279535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:20:16.333568       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 20:20:16.333630       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:20:16.431626       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:20:16.431681       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:20:16.446856       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 20:20:16.447312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 20:20:16.509932       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 20:20:16.510031       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1205 20:20:18.662076       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 20:25:18 addons-523528 kubelet[1204]: E1205 20:25:18.280329    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430318279802278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595908,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: E1205 20:25:18.280409    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430318279802278,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:595908,},InodesUsed:&UInt64Value{Value:201,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: E1205 20:25:18.790505    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a3e89b1e-6a83-4dd9-a487-29437e9207a2" containerName="liveness-probe"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: E1205 20:25:18.790550    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="aa2eda09-0153-4349-8efe-c65537dbe04d" containerName="volume-snapshot-controller"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: E1205 20:25:18.790559    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a3e89b1e-6a83-4dd9-a487-29437e9207a2" containerName="csi-provisioner"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: E1205 20:25:18.790566    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="84675000-bf37-46d9-ab6a-6e1cb4781e25" containerName="csi-attacher"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: E1205 20:25:18.790572    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f887431d-704e-424a-bd9f-3d74ed3aaca0" containerName="task-pv-container"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: E1205 20:25:18.790578    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a18ef2ee-6053-4bee-a9e0-8ed83cc2e964" containerName="csi-resizer"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: E1205 20:25:18.790584    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a3e89b1e-6a83-4dd9-a487-29437e9207a2" containerName="csi-snapshotter"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: E1205 20:25:18.790590    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="92fa94c9-4e18-4cf8-82d5-9302d0d0ec4d" containerName="volume-snapshot-controller"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: E1205 20:25:18.790597    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a3e89b1e-6a83-4dd9-a487-29437e9207a2" containerName="csi-external-health-monitor-controller"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: E1205 20:25:18.790604    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a3e89b1e-6a83-4dd9-a487-29437e9207a2" containerName="node-driver-registrar"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: E1205 20:25:18.790609    1204 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a3e89b1e-6a83-4dd9-a487-29437e9207a2" containerName="hostpath"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: I1205 20:25:18.790636    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3e89b1e-6a83-4dd9-a487-29437e9207a2" containerName="csi-external-health-monitor-controller"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: I1205 20:25:18.790643    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3e89b1e-6a83-4dd9-a487-29437e9207a2" containerName="hostpath"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: I1205 20:25:18.790649    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3e89b1e-6a83-4dd9-a487-29437e9207a2" containerName="csi-snapshotter"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: I1205 20:25:18.790654    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="f887431d-704e-424a-bd9f-3d74ed3aaca0" containerName="task-pv-container"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: I1205 20:25:18.790658    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa2eda09-0153-4349-8efe-c65537dbe04d" containerName="volume-snapshot-controller"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: I1205 20:25:18.790663    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3e89b1e-6a83-4dd9-a487-29437e9207a2" containerName="liveness-probe"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: I1205 20:25:18.790668    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3e89b1e-6a83-4dd9-a487-29437e9207a2" containerName="node-driver-registrar"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: I1205 20:25:18.790673    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="92fa94c9-4e18-4cf8-82d5-9302d0d0ec4d" containerName="volume-snapshot-controller"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: I1205 20:25:18.790677    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="84675000-bf37-46d9-ab6a-6e1cb4781e25" containerName="csi-attacher"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: I1205 20:25:18.790683    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3e89b1e-6a83-4dd9-a487-29437e9207a2" containerName="csi-provisioner"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: I1205 20:25:18.790687    1204 memory_manager.go:354] "RemoveStaleState removing state" podUID="a18ef2ee-6053-4bee-a9e0-8ed83cc2e964" containerName="csi-resizer"
	Dec 05 20:25:18 addons-523528 kubelet[1204]: I1205 20:25:18.892958    1204 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgf8k\" (UniqueName: \"kubernetes.io/projected/f9eaf75f-7393-4e0d-82ea-086ee7529f08-kube-api-access-rgf8k\") pod \"hello-world-app-55bf9c44b4-h8h8x\" (UID: \"f9eaf75f-7393-4e0d-82ea-086ee7529f08\") " pod="default/hello-world-app-55bf9c44b4-h8h8x"
	
	
	==> storage-provisioner [15fee7cca3939ab2f32dfccfbc824c4223242541c860fcdb515e1397b8f81676] <==
	I1205 20:20:30.791655       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 20:20:30.810493       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 20:20:30.810550       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 20:20:30.823276       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 20:20:30.823490       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-523528_bddcb9ae-604f-4650-940f-4ccd1fc44160!
	I1205 20:20:30.827816       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d97f055a-5510-42e7-b263-b69c7caf62f3", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-523528_bddcb9ae-604f-4650-940f-4ccd1fc44160 became leader
	I1205 20:20:30.926259       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-523528_bddcb9ae-604f-4650-940f-4ccd1fc44160!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-523528 -n addons-523528
helpers_test.go:261: (dbg) Run:  kubectl --context addons-523528 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: hello-world-app-55bf9c44b4-h8h8x ingress-nginx-admission-create-vm87g ingress-nginx-admission-patch-2gff4
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-523528 describe pod hello-world-app-55bf9c44b4-h8h8x ingress-nginx-admission-create-vm87g ingress-nginx-admission-patch-2gff4
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-523528 describe pod hello-world-app-55bf9c44b4-h8h8x ingress-nginx-admission-create-vm87g ingress-nginx-admission-patch-2gff4: exit status 1 (86.888647ms)

                                                
                                                
-- stdout --
	Name:             hello-world-app-55bf9c44b4-h8h8x
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-523528/192.168.39.217
	Start Time:       Thu, 05 Dec 2024 20:25:18 +0000
	Labels:           app=hello-world-app
	                  pod-template-hash=55bf9c44b4
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-world-app-55bf9c44b4
	Containers:
	  hello-world-app:
	    Container ID:   
	    Image:          docker.io/kicbase/echo-server:1.0
	    Image ID:       
	    Port:           8080/TCP
	    Host Port:      0/TCP
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rgf8k (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-rgf8k:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  3s    default-scheduler  Successfully assigned default/hello-world-app-55bf9c44b4-h8h8x to addons-523528
	  Normal  Pulling    2s    kubelet            Pulling image "docker.io/kicbase/echo-server:1.0"

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-vm87g" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-2gff4" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-523528 describe pod hello-world-app-55bf9c44b4-h8h8x ingress-nginx-admission-create-vm87g ingress-nginx-admission-patch-2gff4: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-523528 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-523528 addons disable ingress-dns --alsologtostderr -v=1: (1.26360501s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-523528 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-523528 addons disable ingress --alsologtostderr -v=1: (7.738138289s)
--- FAIL: TestAddons/parallel/Ingress (153.89s)
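
The step that times out here is the reachability check against the nginx ingress; the corresponding `ssh curl` entry in the audit table later in this report records a start time but no end time. A minimal sketch of the same check run by hand, assuming the addons-523528 profile is still up and the ingress addon has not yet been disabled:

	# ask the in-VM ingress controller for the test vhost; a hang here mirrors the test timeout
	out/minikube-linux-amd64 -p addons-523528 ssh -- curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'
	# confirm the controller pod and its service are actually present (namespace assumed to be the addon default)
	kubectl --context addons-523528 -n ingress-nginx get pods,svc
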

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (302.46s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 3.228223ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-9sfj2" [4fb71d12-56fb-4616-bee4-29859c9f2a05] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00750377s
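
The readiness wait above succeeds; it can be reproduced directly with a label-selector wait (a sketch only, assuming the same context and namespace):

	# wait for the metrics-server pod by label, as the suite does
	kubectl --context addons-523528 -n kube-system wait --for=condition=Ready pod -l k8s-app=metrics-server --timeout=6m
	# the suite's own wait passed in ~6s; the failures below come from querying metrics afterwards
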
addons_test.go:402: (dbg) Run:  kubectl --context addons-523528 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-523528 top pods -n kube-system: exit status 1 (83.069927ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-lqd4k, age: 2m11.534731657s

                                                
                                                
** /stderr **
I1205 20:22:35.537503  300765 retry.go:31] will retry after 3.651388604s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-523528 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-523528 top pods -n kube-system: exit status 1 (70.373216ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-lqd4k, age: 2m15.257656202s

                                                
                                                
** /stderr **
I1205 20:22:39.260285  300765 retry.go:31] will retry after 4.044384987s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-523528 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-523528 top pods -n kube-system: exit status 1 (74.567252ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-lqd4k, age: 2m19.37773433s

                                                
                                                
** /stderr **
I1205 20:22:43.380152  300765 retry.go:31] will retry after 6.845834323s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-523528 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-523528 top pods -n kube-system: exit status 1 (73.329409ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-lqd4k, age: 2m26.297556948s

                                                
                                                
** /stderr **
I1205 20:22:50.300184  300765 retry.go:31] will retry after 13.826868659s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-523528 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-523528 top pods -n kube-system: exit status 1 (70.768223ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-lqd4k, age: 2m40.196264977s

                                                
                                                
** /stderr **
I1205 20:23:04.199030  300765 retry.go:31] will retry after 17.930012004s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-523528 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-523528 top pods -n kube-system: exit status 1 (74.994661ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-lqd4k, age: 2m58.202457304s

                                                
                                                
** /stderr **
I1205 20:23:22.205375  300765 retry.go:31] will retry after 17.54758429s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-523528 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-523528 top pods -n kube-system: exit status 1 (68.602927ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-lqd4k, age: 3m15.819277228s

                                                
                                                
** /stderr **
I1205 20:23:39.822197  300765 retry.go:31] will retry after 25.60819242s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-523528 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-523528 top pods -n kube-system: exit status 1 (69.520423ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-lqd4k, age: 3m41.502510533s

                                                
                                                
** /stderr **
I1205 20:24:05.505026  300765 retry.go:31] will retry after 1m12.701446135s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-523528 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-523528 top pods -n kube-system: exit status 1 (68.107ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-lqd4k, age: 4m54.272263466s

                                                
                                                
** /stderr **
I1205 20:25:18.275398  300765 retry.go:31] will retry after 1m7.187128302s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-523528 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-523528 top pods -n kube-system: exit status 1 (68.56004ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-lqd4k, age: 6m1.533875893s

                                                
                                                
** /stderr **
I1205 20:26:25.536933  300765 retry.go:31] will retry after 1m3.723962704s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-523528 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-523528 top pods -n kube-system: exit status 1 (71.073464ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/amd-gpu-device-plugin-lqd4k, age: 7m5.32960814s

                                                
                                                
** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
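
Every `kubectl top pods` attempt for the remainder of the test window returns the same "Metrics not available" error, so the metrics API never serves data even though the metrics-server pod is Running. A sketch of follow-up checks against the same profile (the APIService name is the conventional one registered by metrics-server, and the deployment name is inferred from the pod name above, not taken from this log):

	# repeat the failing query
	kubectl --context addons-523528 top pods -n kube-system
	# see whether the metrics API has been registered and reports Available
	kubectl --context addons-523528 get apiservices v1beta1.metrics.k8s.io
	# inspect metrics-server itself for scrape or TLS errors
	kubectl --context addons-523528 -n kube-system logs deploy/metrics-server
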
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-523528 -n addons-523528
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-523528 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-523528 logs -n 25: (1.145143781s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-401320                                                                     | download-only-401320 | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC | 05 Dec 24 20:19 UTC |
	| delete  | -p download-only-565473                                                                     | download-only-565473 | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC | 05 Dec 24 20:19 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-326413 | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC |                     |
	|         | binary-mirror-326413                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:39531                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-326413                                                                     | binary-mirror-326413 | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC | 05 Dec 24 20:19 UTC |
	| addons  | disable dashboard -p                                                                        | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC |                     |
	|         | addons-523528                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC |                     |
	|         | addons-523528                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-523528 --wait=true                                                                | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC | 05 Dec 24 20:21 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                      |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	| addons  | addons-523528 addons disable                                                                | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:21 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	| addons  | addons-523528 addons disable                                                                | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:21 UTC | 05 Dec 24 20:22 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:22 UTC | 05 Dec 24 20:22 UTC |
	|         | -p addons-523528                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-523528 addons disable                                                                | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:22 UTC | 05 Dec 24 20:22 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                      |         |         |                     |                     |
	| addons  | addons-523528 addons disable                                                                | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:22 UTC | 05 Dec 24 20:22 UTC |
	|         | headlamp --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-523528 ip                                                                            | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:22 UTC | 05 Dec 24 20:22 UTC |
	| addons  | addons-523528 addons disable                                                                | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:22 UTC | 05 Dec 24 20:22 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-523528 addons                                                                        | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:22 UTC | 05 Dec 24 20:22 UTC |
	|         | disable nvidia-device-plugin                                                                |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-523528 addons                                                                        | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:22 UTC | 05 Dec 24 20:22 UTC |
	|         | disable cloud-spanner                                                                       |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-523528 ssh cat                                                                       | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:22 UTC | 05 Dec 24 20:22 UTC |
	|         | /opt/local-path-provisioner/pvc-24f2de26-a653-44d0-af2f-07e5589c431c_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-523528 addons disable                                                                | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:22 UTC | 05 Dec 24 20:22 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-523528 addons                                                                        | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:22 UTC | 05 Dec 24 20:23 UTC |
	|         | disable inspektor-gadget                                                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-523528 ssh curl -s                                                                   | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-523528 addons                                                                        | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-523528 addons                                                                        | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:23 UTC | 05 Dec 24 20:23 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-523528 ip                                                                            | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:25 UTC |
	| addons  | addons-523528 addons disable                                                                | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:25 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-523528 addons disable                                                                | addons-523528        | jenkins | v1.34.0 | 05 Dec 24 20:25 UTC | 05 Dec 24 20:25 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:19:33
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:19:33.246721  301384 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:19:33.246895  301384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:19:33.246911  301384 out.go:358] Setting ErrFile to fd 2...
	I1205 20:19:33.246920  301384 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:19:33.247530  301384 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 20:19:33.248344  301384 out.go:352] Setting JSON to false
	I1205 20:19:33.249295  301384 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10921,"bootTime":1733419052,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:19:33.249416  301384 start.go:139] virtualization: kvm guest
	I1205 20:19:33.251428  301384 out.go:177] * [addons-523528] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:19:33.253108  301384 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 20:19:33.253112  301384 notify.go:220] Checking for updates...
	I1205 20:19:33.255698  301384 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:19:33.256948  301384 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:19:33.258343  301384 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:19:33.259719  301384 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:19:33.261114  301384 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:19:33.262778  301384 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:19:33.298479  301384 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 20:19:33.299905  301384 start.go:297] selected driver: kvm2
	I1205 20:19:33.299922  301384 start.go:901] validating driver "kvm2" against <nil>
	I1205 20:19:33.299937  301384 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:19:33.300810  301384 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:19:33.300904  301384 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:19:33.317692  301384 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:19:33.319078  301384 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 20:19:33.319637  301384 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:19:33.319691  301384 cni.go:84] Creating CNI manager for ""
	I1205 20:19:33.319957  301384 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:19:33.319987  301384 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 20:19:33.320088  301384 start.go:340] cluster config:
	{Name:addons-523528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-523528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:19:33.320252  301384 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:19:33.323007  301384 out.go:177] * Starting "addons-523528" primary control-plane node in "addons-523528" cluster
	I1205 20:19:33.324286  301384 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:19:33.324329  301384 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 20:19:33.324350  301384 cache.go:56] Caching tarball of preloaded images
	I1205 20:19:33.324452  301384 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:19:33.324464  301384 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
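The "Found local preload ... in cache, skipping download" lines above come from minikube's preload cache check: if a preloaded image tarball for the requested Kubernetes version and runtime already exists locally, the download is skipped. A minimal sketch of that check (the directory layout and file-name pattern here are assumptions taken from the paths in the log, not minikube's actual code):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadExists reports whether the cached preload tarball for the given
    // Kubernetes version and runtime is already on disk.
    func preloadExists(minikubeHome, k8sVersion, runtime string) (string, bool) {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
        if runtime != "crio" {
            name = fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
        }
        path := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
        _, err := os.Stat(path)
        return path, err == nil
    }

    func main() {
        if p, ok := preloadExists(os.Getenv("MINIKUBE_HOME"), "v1.31.2", "crio"); ok {
            fmt.Println("found local preload:", p)
        } else {
            fmt.Println("no local preload, would download")
        }
    }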
	I1205 20:19:33.324776  301384 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/config.json ...
	I1205 20:19:33.324803  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/config.json: {Name:mkcf83816102e2d1597e39187ac57c2e822fd009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:19:33.324947  301384 start.go:360] acquireMachinesLock for addons-523528: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:19:33.324993  301384 start.go:364] duration metric: took 32.034µs to acquireMachinesLock for "addons-523528"
	I1205 20:19:33.325009  301384 start.go:93] Provisioning new machine with config: &{Name:addons-523528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-523528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:19:33.325069  301384 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 20:19:33.326853  301384 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I1205 20:19:33.327013  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:19:33.327042  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:19:33.342837  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36197
	I1205 20:19:33.343441  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:19:33.344100  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:19:33.344124  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:19:33.344563  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:19:33.344796  301384 main.go:141] libmachine: (addons-523528) Calling .GetMachineName
	I1205 20:19:33.344996  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:19:33.345197  301384 start.go:159] libmachine.API.Create for "addons-523528" (driver="kvm2")
	I1205 20:19:33.345228  301384 client.go:168] LocalClient.Create starting
	I1205 20:19:33.345277  301384 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem
	I1205 20:19:33.458356  301384 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem
	I1205 20:19:33.562438  301384 main.go:141] libmachine: Running pre-create checks...
	I1205 20:19:33.562468  301384 main.go:141] libmachine: (addons-523528) Calling .PreCreateCheck
	I1205 20:19:33.563032  301384 main.go:141] libmachine: (addons-523528) Calling .GetConfigRaw
	I1205 20:19:33.563519  301384 main.go:141] libmachine: Creating machine...
	I1205 20:19:33.563535  301384 main.go:141] libmachine: (addons-523528) Calling .Create
	I1205 20:19:33.563705  301384 main.go:141] libmachine: (addons-523528) Creating KVM machine...
	I1205 20:19:33.565245  301384 main.go:141] libmachine: (addons-523528) DBG | found existing default KVM network
	I1205 20:19:33.566208  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:33.566011  301406 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015cc0}
	I1205 20:19:33.566244  301384 main.go:141] libmachine: (addons-523528) DBG | created network xml: 
	I1205 20:19:33.566263  301384 main.go:141] libmachine: (addons-523528) DBG | <network>
	I1205 20:19:33.566272  301384 main.go:141] libmachine: (addons-523528) DBG |   <name>mk-addons-523528</name>
	I1205 20:19:33.566281  301384 main.go:141] libmachine: (addons-523528) DBG |   <dns enable='no'/>
	I1205 20:19:33.566287  301384 main.go:141] libmachine: (addons-523528) DBG |   
	I1205 20:19:33.566298  301384 main.go:141] libmachine: (addons-523528) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1205 20:19:33.566311  301384 main.go:141] libmachine: (addons-523528) DBG |     <dhcp>
	I1205 20:19:33.566321  301384 main.go:141] libmachine: (addons-523528) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1205 20:19:33.566329  301384 main.go:141] libmachine: (addons-523528) DBG |     </dhcp>
	I1205 20:19:33.566343  301384 main.go:141] libmachine: (addons-523528) DBG |   </ip>
	I1205 20:19:33.566356  301384 main.go:141] libmachine: (addons-523528) DBG |   
	I1205 20:19:33.566393  301384 main.go:141] libmachine: (addons-523528) DBG | </network>
	I1205 20:19:33.566417  301384 main.go:141] libmachine: (addons-523528) DBG | 
	I1205 20:19:33.571829  301384 main.go:141] libmachine: (addons-523528) DBG | trying to create private KVM network mk-addons-523528 192.168.39.0/24...
	I1205 20:19:33.642502  301384 main.go:141] libmachine: (addons-523528) DBG | private KVM network mk-addons-523528 192.168.39.0/24 created
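The network XML printed above is handed to libvirt, which defines and starts the private "mk-addons-523528" network whose DHCP range later hands the VM its IP. As a rough, standalone illustration (using the libvirt Go bindings directly, not the kvm2 driver's own code path), defining and starting such a network looks like this:

    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    const networkXML = `<network>
      <name>mk-addons-523528</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Define the persistent network from the XML, then start it -- roughly
        // what "trying to create private KVM network mk-addons-523528" covers.
        net, err := conn.NetworkDefineXML(networkXML)
        if err != nil {
            log.Fatal(err)
        }
        defer net.Free()

        if err := net.Create(); err != nil {
            log.Fatal(err)
        }
        log.Println("private network mk-addons-523528 is active")
    }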
	I1205 20:19:33.642591  301384 main.go:141] libmachine: (addons-523528) Setting up store path in /home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528 ...
	I1205 20:19:33.642622  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:33.642471  301406 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:19:33.642673  301384 main.go:141] libmachine: (addons-523528) Building disk image from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 20:19:33.642704  301384 main.go:141] libmachine: (addons-523528) Downloading /home/jenkins/minikube-integration/20053-293485/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 20:19:33.934614  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:33.934435  301406 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa...
	I1205 20:19:34.038956  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:34.038770  301406 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/addons-523528.rawdisk...
	I1205 20:19:34.038990  301384 main.go:141] libmachine: (addons-523528) DBG | Writing magic tar header
	I1205 20:19:34.039002  301384 main.go:141] libmachine: (addons-523528) DBG | Writing SSH key tar header
	I1205 20:19:34.039010  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:34.038905  301406 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528 ...
	I1205 20:19:34.039024  301384 main.go:141] libmachine: (addons-523528) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528
	I1205 20:19:34.039095  301384 main.go:141] libmachine: (addons-523528) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528 (perms=drwx------)
	I1205 20:19:34.039137  301384 main.go:141] libmachine: (addons-523528) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines (perms=drwxr-xr-x)
	I1205 20:19:34.039145  301384 main.go:141] libmachine: (addons-523528) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines
	I1205 20:19:34.039170  301384 main.go:141] libmachine: (addons-523528) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube (perms=drwxr-xr-x)
	I1205 20:19:34.039183  301384 main.go:141] libmachine: (addons-523528) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485 (perms=drwxrwxr-x)
	I1205 20:19:34.039192  301384 main.go:141] libmachine: (addons-523528) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:19:34.039206  301384 main.go:141] libmachine: (addons-523528) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485
	I1205 20:19:34.039214  301384 main.go:141] libmachine: (addons-523528) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 20:19:34.039223  301384 main.go:141] libmachine: (addons-523528) DBG | Checking permissions on dir: /home/jenkins
	I1205 20:19:34.039231  301384 main.go:141] libmachine: (addons-523528) DBG | Checking permissions on dir: /home
	I1205 20:19:34.039238  301384 main.go:141] libmachine: (addons-523528) DBG | Skipping /home - not owner
	I1205 20:19:34.039245  301384 main.go:141] libmachine: (addons-523528) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 20:19:34.039250  301384 main.go:141] libmachine: (addons-523528) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
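The "Fixing permissions" / "Setting executable bit set on ..." lines walk upward from the machine directory toward /home, making each directory traversable and skipping directories it cannot change ("Skipping /home - not owner"). A small sketch of that walk, with an illustrative path rather than the driver's real logic:

    package main

    import (
        "log"
        "os"
        "path/filepath"
    )

    // ensureTraversable walks from dir up to stopAt, adding the owner-execute
    // bit where it is missing and logging a skip when chmod is not permitted.
    func ensureTraversable(dir, stopAt string) error {
        for {
            info, err := os.Stat(dir)
            if err != nil {
                return err
            }
            if info.Mode().Perm()&0o100 == 0 {
                if err := os.Chmod(dir, info.Mode().Perm()|0o100); err != nil {
                    log.Printf("Skipping %s - %v", dir, err)
                } else {
                    log.Printf("Setting executable bit set on %s", dir)
                }
            }
            if dir == stopAt || dir == string(filepath.Separator) {
                return nil
            }
            dir = filepath.Dir(dir)
        }
    }

    func main() {
        machineDir := os.ExpandEnv("$HOME/.minikube/machines/addons-523528") // illustrative path
        if err := ensureTraversable(machineDir, "/home"); err != nil {
            log.Fatal(err)
        }
    }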
	I1205 20:19:34.039258  301384 main.go:141] libmachine: (addons-523528) Creating domain...
	I1205 20:19:34.040571  301384 main.go:141] libmachine: (addons-523528) define libvirt domain using xml: 
	I1205 20:19:34.040618  301384 main.go:141] libmachine: (addons-523528) <domain type='kvm'>
	I1205 20:19:34.040627  301384 main.go:141] libmachine: (addons-523528)   <name>addons-523528</name>
	I1205 20:19:34.040632  301384 main.go:141] libmachine: (addons-523528)   <memory unit='MiB'>4000</memory>
	I1205 20:19:34.040638  301384 main.go:141] libmachine: (addons-523528)   <vcpu>2</vcpu>
	I1205 20:19:34.040647  301384 main.go:141] libmachine: (addons-523528)   <features>
	I1205 20:19:34.040653  301384 main.go:141] libmachine: (addons-523528)     <acpi/>
	I1205 20:19:34.040660  301384 main.go:141] libmachine: (addons-523528)     <apic/>
	I1205 20:19:34.040666  301384 main.go:141] libmachine: (addons-523528)     <pae/>
	I1205 20:19:34.040670  301384 main.go:141] libmachine: (addons-523528)     
	I1205 20:19:34.040676  301384 main.go:141] libmachine: (addons-523528)   </features>
	I1205 20:19:34.040682  301384 main.go:141] libmachine: (addons-523528)   <cpu mode='host-passthrough'>
	I1205 20:19:34.040687  301384 main.go:141] libmachine: (addons-523528)   
	I1205 20:19:34.040698  301384 main.go:141] libmachine: (addons-523528)   </cpu>
	I1205 20:19:34.040704  301384 main.go:141] libmachine: (addons-523528)   <os>
	I1205 20:19:34.040711  301384 main.go:141] libmachine: (addons-523528)     <type>hvm</type>
	I1205 20:19:34.040721  301384 main.go:141] libmachine: (addons-523528)     <boot dev='cdrom'/>
	I1205 20:19:34.040729  301384 main.go:141] libmachine: (addons-523528)     <boot dev='hd'/>
	I1205 20:19:34.040735  301384 main.go:141] libmachine: (addons-523528)     <bootmenu enable='no'/>
	I1205 20:19:34.040742  301384 main.go:141] libmachine: (addons-523528)   </os>
	I1205 20:19:34.040748  301384 main.go:141] libmachine: (addons-523528)   <devices>
	I1205 20:19:34.040757  301384 main.go:141] libmachine: (addons-523528)     <disk type='file' device='cdrom'>
	I1205 20:19:34.040767  301384 main.go:141] libmachine: (addons-523528)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/boot2docker.iso'/>
	I1205 20:19:34.040779  301384 main.go:141] libmachine: (addons-523528)       <target dev='hdc' bus='scsi'/>
	I1205 20:19:34.040785  301384 main.go:141] libmachine: (addons-523528)       <readonly/>
	I1205 20:19:34.040794  301384 main.go:141] libmachine: (addons-523528)     </disk>
	I1205 20:19:34.040801  301384 main.go:141] libmachine: (addons-523528)     <disk type='file' device='disk'>
	I1205 20:19:34.040808  301384 main.go:141] libmachine: (addons-523528)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 20:19:34.040816  301384 main.go:141] libmachine: (addons-523528)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/addons-523528.rawdisk'/>
	I1205 20:19:34.040822  301384 main.go:141] libmachine: (addons-523528)       <target dev='hda' bus='virtio'/>
	I1205 20:19:34.040827  301384 main.go:141] libmachine: (addons-523528)     </disk>
	I1205 20:19:34.040832  301384 main.go:141] libmachine: (addons-523528)     <interface type='network'>
	I1205 20:19:34.040839  301384 main.go:141] libmachine: (addons-523528)       <source network='mk-addons-523528'/>
	I1205 20:19:34.040847  301384 main.go:141] libmachine: (addons-523528)       <model type='virtio'/>
	I1205 20:19:34.040852  301384 main.go:141] libmachine: (addons-523528)     </interface>
	I1205 20:19:34.040857  301384 main.go:141] libmachine: (addons-523528)     <interface type='network'>
	I1205 20:19:34.040864  301384 main.go:141] libmachine: (addons-523528)       <source network='default'/>
	I1205 20:19:34.040869  301384 main.go:141] libmachine: (addons-523528)       <model type='virtio'/>
	I1205 20:19:34.040877  301384 main.go:141] libmachine: (addons-523528)     </interface>
	I1205 20:19:34.040882  301384 main.go:141] libmachine: (addons-523528)     <serial type='pty'>
	I1205 20:19:34.040888  301384 main.go:141] libmachine: (addons-523528)       <target port='0'/>
	I1205 20:19:34.040893  301384 main.go:141] libmachine: (addons-523528)     </serial>
	I1205 20:19:34.040934  301384 main.go:141] libmachine: (addons-523528)     <console type='pty'>
	I1205 20:19:34.040952  301384 main.go:141] libmachine: (addons-523528)       <target type='serial' port='0'/>
	I1205 20:19:34.040959  301384 main.go:141] libmachine: (addons-523528)     </console>
	I1205 20:19:34.040965  301384 main.go:141] libmachine: (addons-523528)     <rng model='virtio'>
	I1205 20:19:34.040973  301384 main.go:141] libmachine: (addons-523528)       <backend model='random'>/dev/random</backend>
	I1205 20:19:34.040978  301384 main.go:141] libmachine: (addons-523528)     </rng>
	I1205 20:19:34.040984  301384 main.go:141] libmachine: (addons-523528)     
	I1205 20:19:34.040993  301384 main.go:141] libmachine: (addons-523528)     
	I1205 20:19:34.040998  301384 main.go:141] libmachine: (addons-523528)   </devices>
	I1205 20:19:34.041003  301384 main.go:141] libmachine: (addons-523528) </domain>
	I1205 20:19:34.041011  301384 main.go:141] libmachine: (addons-523528) 
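The domain XML above ("define libvirt domain using xml" followed by "Creating domain...") is likewise defined and then started through libvirt. A minimal sketch using the libvirt Go bindings, reading the XML from a local file standing in for the definition printed above (this is an illustration of the libvirt calls involved, not the kvm2 driver's implementation):

    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt"
    )

    func main() {
        // domain.xml holds a <domain type='kvm'> definition like the one above.
        xml, err := os.ReadFile("domain.xml")
        if err != nil {
            log.Fatal(err)
        }

        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Define the persistent domain, then boot it.
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
        log.Println("domain addons-523528 started, waiting for a DHCP lease")
    }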
	I1205 20:19:34.045612  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:0a:84:86 in network default
	I1205 20:19:34.046248  301384 main.go:141] libmachine: (addons-523528) Ensuring networks are active...
	I1205 20:19:34.046297  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:34.046991  301384 main.go:141] libmachine: (addons-523528) Ensuring network default is active
	I1205 20:19:34.047304  301384 main.go:141] libmachine: (addons-523528) Ensuring network mk-addons-523528 is active
	I1205 20:19:34.047857  301384 main.go:141] libmachine: (addons-523528) Getting domain xml...
	I1205 20:19:34.048710  301384 main.go:141] libmachine: (addons-523528) Creating domain...
	I1205 20:19:35.303598  301384 main.go:141] libmachine: (addons-523528) Waiting to get IP...
	I1205 20:19:35.304435  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:35.304927  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:35.304979  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:35.304915  301406 retry.go:31] will retry after 288.523272ms: waiting for machine to come up
	I1205 20:19:35.595694  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:35.596190  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:35.596226  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:35.596142  301406 retry.go:31] will retry after 260.471732ms: waiting for machine to come up
	I1205 20:19:35.858781  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:35.859323  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:35.859357  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:35.859261  301406 retry.go:31] will retry after 407.556596ms: waiting for machine to come up
	I1205 20:19:36.269223  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:36.269706  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:36.269731  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:36.269668  301406 retry.go:31] will retry after 375.887724ms: waiting for machine to come up
	I1205 20:19:36.647392  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:36.647749  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:36.647781  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:36.647693  301406 retry.go:31] will retry after 684.620456ms: waiting for machine to come up
	I1205 20:19:37.333667  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:37.334176  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:37.334201  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:37.334125  301406 retry.go:31] will retry after 925.442052ms: waiting for machine to come up
	I1205 20:19:38.261294  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:38.261731  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:38.261759  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:38.261690  301406 retry.go:31] will retry after 1.016520828s: waiting for machine to come up
	I1205 20:19:39.279596  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:39.280130  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:39.280166  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:39.280075  301406 retry.go:31] will retry after 1.34038701s: waiting for machine to come up
	I1205 20:19:40.623073  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:40.623631  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:40.623665  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:40.623550  301406 retry.go:31] will retry after 1.472535213s: waiting for machine to come up
	I1205 20:19:42.098424  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:42.098929  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:42.098959  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:42.098881  301406 retry.go:31] will retry after 1.790209374s: waiting for machine to come up
	I1205 20:19:43.891291  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:43.891867  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:43.891900  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:43.891804  301406 retry.go:31] will retry after 2.201804102s: waiting for machine to come up
	I1205 20:19:46.096364  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:46.096908  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:46.096933  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:46.096849  301406 retry.go:31] will retry after 2.743938954s: waiting for machine to come up
	I1205 20:19:48.842851  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:48.844025  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:48.844054  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:48.843955  301406 retry.go:31] will retry after 3.796103066s: waiting for machine to come up
	I1205 20:19:52.644983  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:52.645362  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find current IP address of domain addons-523528 in network mk-addons-523528
	I1205 20:19:52.645388  301384 main.go:141] libmachine: (addons-523528) DBG | I1205 20:19:52.645313  301406 retry.go:31] will retry after 4.704422991s: waiting for machine to come up
	I1205 20:19:57.354576  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:57.355137  301384 main.go:141] libmachine: (addons-523528) Found IP for machine: 192.168.39.217
	I1205 20:19:57.355160  301384 main.go:141] libmachine: (addons-523528) Reserving static IP address...
	I1205 20:19:57.355168  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has current primary IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:57.355501  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find host DHCP lease matching {name: "addons-523528", mac: "52:54:00:94:d3:2c", ip: "192.168.39.217"} in network mk-addons-523528
	I1205 20:19:57.447752  301384 main.go:141] libmachine: (addons-523528) DBG | Getting to WaitForSSH function...
	I1205 20:19:57.447801  301384 main.go:141] libmachine: (addons-523528) Reserved static IP address: 192.168.39.217
	I1205 20:19:57.447815  301384 main.go:141] libmachine: (addons-523528) Waiting for SSH to be available...
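The long run of "unable to find current IP address ... will retry after ..." lines is a polling loop with a growing, jittered delay: the driver keeps looking up the domain's MAC address in the network's DHCP leases until an IP appears. A minimal sketch of that pattern, with lookupLease as a placeholder for the real lease query:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    var errNoLease = errors.New("unable to find current IP address")

    // lookupLease stands in for querying the libvirt DHCP leases for the
    // domain's MAC address; it is a placeholder for this sketch.
    func lookupLease(mac string) (string, error) {
        return "", errNoLease
    }

    // waitForIP polls with a growing, jittered delay, echoing the
    // "will retry after 288ms / 1.34s / 4.7s ..." progression in the log.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupLease(mac); err == nil {
                return ip, nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            delay = delay * 3 / 2
        }
        return "", fmt.Errorf("timed out waiting for IP of %s", mac)
    }

    func main() {
        ip, err := waitForIP("52:54:00:94:d3:2c", 5*time.Second)
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("Found IP for machine:", ip)
    }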
	I1205 20:19:57.450644  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:19:57.451038  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528
	I1205 20:19:57.451069  301384 main.go:141] libmachine: (addons-523528) DBG | unable to find defined IP address of network mk-addons-523528 interface with MAC address 52:54:00:94:d3:2c
	I1205 20:19:57.451181  301384 main.go:141] libmachine: (addons-523528) DBG | Using SSH client type: external
	I1205 20:19:57.451217  301384 main.go:141] libmachine: (addons-523528) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa (-rw-------)
	I1205 20:19:57.451251  301384 main.go:141] libmachine: (addons-523528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:19:57.451274  301384 main.go:141] libmachine: (addons-523528) DBG | About to run SSH command:
	I1205 20:19:57.451294  301384 main.go:141] libmachine: (addons-523528) DBG | exit 0
	I1205 20:19:57.455367  301384 main.go:141] libmachine: (addons-523528) DBG | SSH cmd err, output: exit status 255: 
	I1205 20:19:57.455392  301384 main.go:141] libmachine: (addons-523528) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1205 20:19:57.455399  301384 main.go:141] libmachine: (addons-523528) DBG | command : exit 0
	I1205 20:19:57.455404  301384 main.go:141] libmachine: (addons-523528) DBG | err     : exit status 255
	I1205 20:19:57.455412  301384 main.go:141] libmachine: (addons-523528) DBG | output  : 
	I1205 20:20:00.457296  301384 main.go:141] libmachine: (addons-523528) DBG | Getting to WaitForSSH function...
	I1205 20:20:00.460357  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:00.460919  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:00.460956  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:00.461160  301384 main.go:141] libmachine: (addons-523528) DBG | Using SSH client type: external
	I1205 20:20:00.461181  301384 main.go:141] libmachine: (addons-523528) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa (-rw-------)
	I1205 20:20:00.461259  301384 main.go:141] libmachine: (addons-523528) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:20:00.461294  301384 main.go:141] libmachine: (addons-523528) DBG | About to run SSH command:
	I1205 20:20:00.461344  301384 main.go:141] libmachine: (addons-523528) DBG | exit 0
	I1205 20:20:00.590288  301384 main.go:141] libmachine: (addons-523528) DBG | SSH cmd err, output: <nil>: 
	I1205 20:20:00.590641  301384 main.go:141] libmachine: (addons-523528) KVM machine creation complete!
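The WaitForSSH phase above ("Using SSH client type: external", "About to run SSH command: exit 0") shells out to the system ssh binary with the options shown and treats a successful "exit 0" as proof the guest is reachable; the first attempt fails with exit status 255 because sshd is not up yet, and a later retry succeeds. A sketch of building that probe with os/exec (the exact flag set here is copied loosely from the log, not guaranteed to match minikube's):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshExitZero runs `exit 0` on the guest through the external ssh client.
    // A nil error means the guest accepted the connection and ran the command.
    func sshExitZero(ip, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectionAttempts=3",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            "docker@" + ip,
            "exit 0",
        }
        out, err := exec.Command("ssh", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("ssh not ready yet: %v (output: %s)", err, out)
        }
        return nil
    }

    func main() {
        err := sshExitZero("192.168.39.217", "/path/to/machines/addons-523528/id_rsa")
        fmt.Println("reachable:", err == nil)
    }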
	I1205 20:20:00.590987  301384 main.go:141] libmachine: (addons-523528) Calling .GetConfigRaw
	I1205 20:20:00.596577  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:00.596924  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:00.597140  301384 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 20:20:00.597159  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:00.598613  301384 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 20:20:00.598633  301384 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 20:20:00.598641  301384 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 20:20:00.598650  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:00.601112  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:00.601483  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:00.601512  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:00.601677  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:00.601857  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:00.602010  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:00.602165  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:00.602329  301384 main.go:141] libmachine: Using SSH client type: native
	I1205 20:20:00.602550  301384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I1205 20:20:00.602560  301384 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 20:20:00.713436  301384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:20:00.713464  301384 main.go:141] libmachine: Detecting the provisioner...
	I1205 20:20:00.713474  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:00.716766  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:00.717245  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:00.717285  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:00.717501  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:00.717762  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:00.717964  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:00.718114  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:00.718258  301384 main.go:141] libmachine: Using SSH client type: native
	I1205 20:20:00.718481  301384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I1205 20:20:00.718496  301384 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 20:20:00.834874  301384 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 20:20:00.834951  301384 main.go:141] libmachine: found compatible host: buildroot
	I1205 20:20:00.834958  301384 main.go:141] libmachine: Provisioning with buildroot...
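"Detecting the provisioner" runs `cat /etc/os-release` over SSH and maps the result (here NAME=Buildroot) to a compatible provisioner. A small sketch of parsing that key=value output and choosing a provisioner name; the mapping beyond buildroot is illustrative only:

    package main

    import (
        "bufio"
        "fmt"
        "strings"
    )

    // detectProvisioner picks a provisioner from `cat /etc/os-release` output,
    // the way the log maps NAME=Buildroot to "found compatible host: buildroot".
    func detectProvisioner(osRelease string) string {
        fields := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(osRelease))
        for sc.Scan() {
            line := strings.TrimSpace(sc.Text())
            if k, v, ok := strings.Cut(line, "="); ok {
                fields[k] = strings.Trim(v, `"`)
            }
        }
        switch strings.ToLower(fields["ID"]) {
        case "buildroot":
            return "buildroot"
        case "ubuntu", "debian":
            return "systemd-based"
        default:
            return "unknown"
        }
    }

    func main() {
        sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
        fmt.Println(detectProvisioner(sample))
    }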
	I1205 20:20:00.834967  301384 main.go:141] libmachine: (addons-523528) Calling .GetMachineName
	I1205 20:20:00.835251  301384 buildroot.go:166] provisioning hostname "addons-523528"
	I1205 20:20:00.835286  301384 main.go:141] libmachine: (addons-523528) Calling .GetMachineName
	I1205 20:20:00.835528  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:00.838610  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:00.839001  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:00.839036  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:00.839175  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:00.839388  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:00.839575  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:00.839753  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:00.839893  301384 main.go:141] libmachine: Using SSH client type: native
	I1205 20:20:00.840077  301384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I1205 20:20:00.840096  301384 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-523528 && echo "addons-523528" | sudo tee /etc/hostname
	I1205 20:20:00.967639  301384 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-523528
	
	I1205 20:20:00.967677  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:00.970568  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:00.970855  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:00.970885  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:00.971093  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:00.971338  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:00.971573  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:00.971714  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:00.971873  301384 main.go:141] libmachine: Using SSH client type: native
	I1205 20:20:00.972117  301384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I1205 20:20:00.972136  301384 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-523528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-523528/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-523528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:20:01.094575  301384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:20:01.094613  301384 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 20:20:01.094673  301384 buildroot.go:174] setting up certificates
	I1205 20:20:01.094710  301384 provision.go:84] configureAuth start
	I1205 20:20:01.094730  301384 main.go:141] libmachine: (addons-523528) Calling .GetMachineName
	I1205 20:20:01.095100  301384 main.go:141] libmachine: (addons-523528) Calling .GetIP
	I1205 20:20:01.098450  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.098923  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:01.098949  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.099211  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:01.101814  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.102196  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:01.102230  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.102382  301384 provision.go:143] copyHostCerts
	I1205 20:20:01.102490  301384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 20:20:01.102645  301384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 20:20:01.102736  301384 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 20:20:01.102820  301384 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.addons-523528 san=[127.0.0.1 192.168.39.217 addons-523528 localhost minikube]
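The "generating server cert ... san=[...]" step issues a TLS server certificate whose subject alternative names cover the loopback address, the VM's IP, the hostname, and the generic names shown in the log. A compact sketch with crypto/x509; the SAN list and expiration are taken from the log, but the real code signs with the CA key from certs/ca-key.pem, whereas this sketch self-signs to stay short:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        dnsNames := []string{"addons-523528", "localhost", "minikube"}
        ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.217")}

        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }

        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.addons-523528"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     dnsNames,
            IPAddresses:  ips,
        }

        // Self-signed for brevity; a CA-signed cert would pass the parsed CA
        // certificate and CA private key instead of tmpl and key here.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }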
	I1205 20:20:01.454061  301384 provision.go:177] copyRemoteCerts
	I1205 20:20:01.454146  301384 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:20:01.454177  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:01.456887  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.457283  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:01.457324  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.457495  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:01.457770  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:01.458019  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:01.458249  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:01.544283  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 20:20:01.568008  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 20:20:01.591863  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:20:01.615378  301384 provision.go:87] duration metric: took 520.644427ms to configureAuth
	I1205 20:20:01.615420  301384 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:20:01.615616  301384 config.go:182] Loaded profile config "addons-523528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:20:01.615707  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:01.618585  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.619019  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:01.619056  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.619206  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:01.619439  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:01.619601  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:01.619751  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:01.619955  301384 main.go:141] libmachine: Using SSH client type: native
	I1205 20:20:01.620152  301384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I1205 20:20:01.620180  301384 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:20:01.848966  301384 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:20:01.849000  301384 main.go:141] libmachine: Checking connection to Docker...
	I1205 20:20:01.849011  301384 main.go:141] libmachine: (addons-523528) Calling .GetURL
	I1205 20:20:01.850481  301384 main.go:141] libmachine: (addons-523528) DBG | Using libvirt version 6000000
	I1205 20:20:01.853191  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.853510  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:01.853549  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.853715  301384 main.go:141] libmachine: Docker is up and running!
	I1205 20:20:01.853729  301384 main.go:141] libmachine: Reticulating splines...
	I1205 20:20:01.853739  301384 client.go:171] duration metric: took 28.508501597s to LocalClient.Create
	I1205 20:20:01.853772  301384 start.go:167] duration metric: took 28.508575657s to libmachine.API.Create "addons-523528"
	I1205 20:20:01.853784  301384 start.go:293] postStartSetup for "addons-523528" (driver="kvm2")
	I1205 20:20:01.853795  301384 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:20:01.853813  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:01.854094  301384 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:20:01.854122  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:01.856663  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.857087  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:01.857120  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.857358  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:01.857579  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:01.857758  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:01.857894  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:01.944604  301384 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:20:01.949011  301384 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:20:01.949060  301384 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 20:20:01.949150  301384 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 20:20:01.949180  301384 start.go:296] duration metric: took 95.390319ms for postStartSetup
	I1205 20:20:01.949221  301384 main.go:141] libmachine: (addons-523528) Calling .GetConfigRaw
	I1205 20:20:01.949963  301384 main.go:141] libmachine: (addons-523528) Calling .GetIP
	I1205 20:20:01.952800  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.953152  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:01.953180  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.953443  301384 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/config.json ...
	I1205 20:20:01.953714  301384 start.go:128] duration metric: took 28.628629517s to createHost
	I1205 20:20:01.953760  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:01.956393  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.956782  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:01.956817  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:01.957005  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:01.957255  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:01.957403  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:01.957540  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:01.957694  301384 main.go:141] libmachine: Using SSH client type: native
	I1205 20:20:01.957861  301384 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I1205 20:20:01.957879  301384 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:20:02.070771  301384 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430002.051669102
	
	I1205 20:20:02.070800  301384 fix.go:216] guest clock: 1733430002.051669102
	I1205 20:20:02.070811  301384 fix.go:229] Guest: 2024-12-05 20:20:02.051669102 +0000 UTC Remote: 2024-12-05 20:20:01.953734015 +0000 UTC m=+28.748670592 (delta=97.935087ms)
	I1205 20:20:02.070847  301384 fix.go:200] guest clock delta is within tolerance: 97.935087ms
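The fix.go lines above compare the guest clock (read over SSH with `date +%s.%N`) against the host clock and accept the drift if it is small; here the delta is about 98ms. A sketch of that comparison, using the values from the log and an assumed 1s tolerance:

    package main

    import (
        "fmt"
        "math"
        "strconv"
        "time"
    )

    // clockDelta parses the guest's `date +%s.%N` output and returns how far
    // the guest clock is from the given host timestamp.
    func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestOut, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(host), nil
    }

    func main() {
        host := time.Date(2024, 12, 5, 20, 20, 1, 953734015, time.UTC)
        delta, err := clockDelta("1733430002.051669102", host)
        if err != nil {
            panic(err)
        }
        within := math.Abs(float64(delta)) < float64(time.Second)
        fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, within)
    }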
	I1205 20:20:02.070855  301384 start.go:83] releasing machines lock for "addons-523528", held for 28.745853584s
	I1205 20:20:02.070890  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:02.071210  301384 main.go:141] libmachine: (addons-523528) Calling .GetIP
	I1205 20:20:02.074491  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:02.074950  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:02.074989  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:02.075093  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:02.075698  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:02.075894  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:02.076014  301384 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:20:02.076077  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:02.076164  301384 ssh_runner.go:195] Run: cat /version.json
	I1205 20:20:02.076195  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:02.078877  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:02.079281  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:02.079311  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:02.079430  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:02.079459  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:02.079735  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:02.079837  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:02.079862  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:02.079942  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:02.080064  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:02.080177  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:02.080212  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:02.080349  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:02.080506  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:02.158917  301384 ssh_runner.go:195] Run: systemctl --version
	I1205 20:20:02.184609  301384 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:20:02.347283  301384 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:20:02.353263  301384 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:20:02.353365  301384 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:20:02.370200  301384 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:20:02.370235  301384 start.go:495] detecting cgroup driver to use...
	I1205 20:20:02.370324  301384 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:20:02.386928  301384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:20:02.401423  301384 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:20:02.401496  301384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:20:02.416460  301384 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:20:02.430813  301384 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:20:02.546439  301384 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:20:02.682728  301384 docker.go:233] disabling docker service ...
	I1205 20:20:02.682808  301384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:20:02.697612  301384 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:20:02.712529  301384 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:20:02.860566  301384 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:20:02.978121  301384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:20:02.992020  301384 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:20:03.011713  301384 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:20:03.011796  301384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:20:03.022367  301384 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:20:03.022467  301384 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:20:03.032974  301384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:20:03.043656  301384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:20:03.054139  301384 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:20:03.065871  301384 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:20:03.076197  301384 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:20:03.093547  301384 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
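
The sed invocations above edit the CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image, switch the cgroup manager to cgroupfs, and open unprivileged ports via default_sysctls. A rough Go sketch of the first two rewrites (same file and values as in the log; illustrative only):

// Sketch: edit the CRI-O drop-in the way the sed commands above do (illustrative only).
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	// Pin the pause image and switch the cgroup manager, as in the log.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, data, 0o644); err != nil {
		log.Fatal(err)
	}
}
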
	I1205 20:20:03.103722  301384 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:20:03.112881  301384 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:20:03.112993  301384 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:20:03.125464  301384 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
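
The fallback above handles a guest where the br_netfilter module is not yet loaded: the sysctl probe fails, so the module is loaded and IPv4 forwarding is enabled directly. A small Go sketch of that fallback (paths and module name taken from the log):

// Sketch of the fallback above: load br_netfilter if the bridge sysctl is missing,
// then enable IPv4 forwarding (paths and module name taken from the log).
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// The sysctl key is absent, so the br_netfilter module is not loaded yet.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			log.Fatalf("modprobe br_netfilter: %v: %s", err, out)
		}
	}
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
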
	I1205 20:20:03.134972  301384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:20:03.249798  301384 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:20:03.335041  301384 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:20:03.335152  301384 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:20:03.339814  301384 start.go:563] Will wait 60s for crictl version
	I1205 20:20:03.339902  301384 ssh_runner.go:195] Run: which crictl
	I1205 20:20:03.343676  301384 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:20:03.380554  301384 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:20:03.380672  301384 ssh_runner.go:195] Run: crio --version
	I1205 20:20:03.409215  301384 ssh_runner.go:195] Run: crio --version
	I1205 20:20:03.439015  301384 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:20:03.440399  301384 main.go:141] libmachine: (addons-523528) Calling .GetIP
	I1205 20:20:03.443263  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:03.443531  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:03.443562  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:03.443822  301384 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:20:03.447905  301384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:20:03.460388  301384 kubeadm.go:883] updating cluster {Name:addons-523528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-523528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:20:03.460561  301384 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:20:03.460624  301384 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:20:03.491301  301384 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:20:03.491386  301384 ssh_runner.go:195] Run: which lz4
	I1205 20:20:03.495365  301384 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:20:03.499531  301384 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:20:03.499577  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 20:20:04.766466  301384 crio.go:462] duration metric: took 1.271160326s to copy over tarball
	I1205 20:20:04.766554  301384 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:20:06.930598  301384 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16400808s)
	I1205 20:20:06.930634  301384 crio.go:469] duration metric: took 2.164129344s to extract the tarball
	I1205 20:20:06.930645  301384 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:20:06.967895  301384 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:20:07.013541  301384 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:20:07.013570  301384 cache_images.go:84] Images are preloaded, skipping loading
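
To recap the preload flow above: the cached image tarball is copied into the guest, unpacked into /var with lz4, removed, and the image list is re-checked. A minimal Go sketch of the unpack step (tar flags copied from the log; assumes the tarball has already been copied over):

// Sketch of the unpack step above: extract the preloaded image tarball into /var
// (tar flags copied from the log), then remove the tarball as the log does.
package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	cmd := exec.Command("tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extract preload: %v: %s", err, out)
	}
	_ = os.Remove(tarball)
}
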
	I1205 20:20:07.013580  301384 kubeadm.go:934] updating node { 192.168.39.217 8443 v1.31.2 crio true true} ...
	I1205 20:20:07.013702  301384 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-523528 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-523528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:20:07.013781  301384 ssh_runner.go:195] Run: crio config
	I1205 20:20:07.060987  301384 cni.go:84] Creating CNI manager for ""
	I1205 20:20:07.061014  301384 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:20:07.061029  301384 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:20:07.061054  301384 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-523528 NodeName:addons-523528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:20:07.061225  301384 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-523528"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.217"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:20:07.061299  301384 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:20:07.071252  301384 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:20:07.071347  301384 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 20:20:07.081004  301384 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1205 20:20:07.098213  301384 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:20:07.115425  301384 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
	I1205 20:20:07.133878  301384 ssh_runner.go:195] Run: grep 192.168.39.217	control-plane.minikube.internal$ /etc/hosts
	I1205 20:20:07.137805  301384 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.217	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
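
The one-liner above makes the control-plane hostname resolvable inside the guest by rewriting /etc/hosts idempotently. A Go sketch of the same rewrite (hostname and IP taken from the log):

// Sketch of the /etc/hosts rewrite above: drop any stale control-plane entry and
// append the current mapping (hostname and IP taken from the log).
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const entry = "192.168.39.217\t" + host
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
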
	I1205 20:20:07.150232  301384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:20:07.263153  301384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:20:07.279871  301384 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528 for IP: 192.168.39.217
	I1205 20:20:07.279909  301384 certs.go:194] generating shared ca certs ...
	I1205 20:20:07.279937  301384 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:07.280135  301384 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 20:20:07.395635  301384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt ...
	I1205 20:20:07.395667  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt: {Name:mk598ca4d7b2f2ba8ce81c3c8132e48b13537f78 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:07.395862  301384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key ...
	I1205 20:20:07.395878  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key: {Name:mk83416aa7315c4e40f0f1eeff10d00de09bd0de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:07.395960  301384 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 20:20:07.654379  301384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt ...
	I1205 20:20:07.654416  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt: {Name:mkc5cdb7edc0f3ac1fb912d4d8803c8e80c04ebd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:07.654603  301384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key ...
	I1205 20:20:07.654616  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key: {Name:mkce7d3edd856411f5d2ba3b813e7b9cfd75334b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:07.654691  301384 certs.go:256] generating profile certs ...
	I1205 20:20:07.654757  301384 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.key
	I1205 20:20:07.654774  301384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt with IP's: []
	I1205 20:20:07.727531  301384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt ...
	I1205 20:20:07.727565  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: {Name:mkcdb6845fb061759df75a93736df390b88fb800 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:07.727745  301384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.key ...
	I1205 20:20:07.727758  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.key: {Name:mk2e69a255c11c2698ff991f2164ea3226b1f8b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:07.727827  301384 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.key.5f22347a
	I1205 20:20:07.727843  301384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.crt.5f22347a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.217]
	I1205 20:20:07.857159  301384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.crt.5f22347a ...
	I1205 20:20:07.857204  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.crt.5f22347a: {Name:mkfba0a63780c9d7cc7f608e78abcaa750f1d22f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:07.857418  301384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.key.5f22347a ...
	I1205 20:20:07.857435  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.key.5f22347a: {Name:mkcbe7ab3675156acb84d5a1d625e8d5861e03bb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:07.857521  301384 certs.go:381] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.crt.5f22347a -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.crt
	I1205 20:20:07.857608  301384 certs.go:385] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.key.5f22347a -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.key
	I1205 20:20:07.857666  301384 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/proxy-client.key
	I1205 20:20:07.857688  301384 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/proxy-client.crt with IP's: []
	I1205 20:20:08.186597  301384 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/proxy-client.crt ...
	I1205 20:20:08.186647  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/proxy-client.crt: {Name:mk528f606870a34a4fe0369fd12aef887f8e944e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:08.186918  301384 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/proxy-client.key ...
	I1205 20:20:08.186945  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/proxy-client.key: {Name:mk8021fb4afe372e79599f2055cd1222512cfb7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
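
The certs.go steps above generate a shared CA (minikubeCA), a proxy-client CA, and per-profile client, apiserver, and aggregator certificates. A compact Go sketch of what generating a self-signed CA amounts to (subject, key size, and lifetime here are illustrative, not minikube's exact parameters):

// Sketch: generate a self-signed CA key/cert pair, roughly what the "generating
// minikubeCA ca cert" step above does (subject and lifetime are illustrative).
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	crt := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	if err := os.WriteFile("ca.crt", crt, 0o644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("ca.key", keyPEM, 0o600); err != nil {
		log.Fatal(err)
	}
}
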
	I1205 20:20:08.187260  301384 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:20:08.187331  301384 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 20:20:08.187382  301384 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:20:08.187429  301384 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 20:20:08.188452  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:20:08.220004  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:20:08.251359  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:20:08.279067  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:20:08.303021  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 20:20:08.331083  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:20:08.355025  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:20:08.379971  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 20:20:08.405445  301384 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:20:08.429467  301384 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:20:08.446144  301384 ssh_runner.go:195] Run: openssl version
	I1205 20:20:08.453301  301384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:20:08.465812  301384 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:20:08.470188  301384 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:20:08.470257  301384 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:20:08.475886  301384 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
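
The commands above install minikubeCA into the guest's trust store: the PEM is linked under /usr/share/ca-certificates and a <subject-hash>.0 symlink is created in /etc/ssl/certs. A Go sketch of that installation (it shells out to openssl for the hash, as the log does):

// Sketch: install a CA into the system trust dir the way the commands above do —
// hash the certificate with openssl and symlink <hash>.0 to it in /etc/ssl/certs.
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const caPEM = "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", caPEM).Output()
	if err != nil {
		log.Fatal(err)
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // replace any stale link
	if err := os.Symlink(caPEM, link); err != nil {
		log.Fatal(err)
	}
}
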
	I1205 20:20:08.487068  301384 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:20:08.491434  301384 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 20:20:08.491495  301384 kubeadm.go:392] StartCluster: {Name:addons-523528 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-523528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:20:08.491585  301384 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:20:08.491667  301384 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:20:08.525643  301384 cri.go:89] found id: ""
	I1205 20:20:08.525750  301384 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:20:08.536676  301384 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:20:08.546447  301384 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:20:08.556866  301384 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:20:08.556892  301384 kubeadm.go:157] found existing configuration files:
	
	I1205 20:20:08.556944  301384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:20:08.566009  301384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:20:08.566082  301384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:20:08.575970  301384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:20:08.585013  301384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:20:08.585103  301384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:20:08.594776  301384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:20:08.604052  301384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:20:08.604134  301384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:20:08.613849  301384 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:20:08.623651  301384 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:20:08.623716  301384 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
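
The config check above removes any kubeconfig under /etc/kubernetes that does not point at https://control-plane.minikube.internal:8443 before kubeadm init runs; here every file is absent, so there is nothing to clean. A Go sketch of that cleanup loop (file list and endpoint taken from the log):

// Sketch of the cleanup loop above: remove any kubeconfig under /etc/kubernetes that
// does not reference the expected control-plane endpoint (file list from the log).
package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				log.Printf("remove %s: %v", f, rmErr)
			}
		}
	}
}
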
	I1205 20:20:08.633714  301384 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:20:08.679513  301384 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 20:20:08.679591  301384 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:20:08.772925  301384 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:20:08.773040  301384 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:20:08.773130  301384 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 20:20:08.783233  301384 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:20:08.870466  301384 out.go:235]   - Generating certificates and keys ...
	I1205 20:20:08.870629  301384 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:20:08.870720  301384 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:20:09.152354  301384 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 20:20:09.358827  301384 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 20:20:09.572333  301384 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 20:20:09.858133  301384 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 20:20:09.972048  301384 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 20:20:09.972188  301384 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-523528 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I1205 20:20:10.150273  301384 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 20:20:10.150475  301384 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-523528 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I1205 20:20:10.297718  301384 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 20:20:10.461418  301384 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 20:20:10.702437  301384 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 20:20:10.702552  301384 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:20:10.935886  301384 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:20:11.026397  301384 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:20:11.105329  301384 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:20:11.199772  301384 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:20:11.470328  301384 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:20:11.470791  301384 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:20:11.473121  301384 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:20:11.476103  301384 out.go:235]   - Booting up control plane ...
	I1205 20:20:11.476237  301384 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:20:11.476331  301384 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:20:11.476420  301384 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:20:11.491125  301384 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:20:11.498885  301384 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:20:11.498941  301384 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:20:11.631538  301384 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 20:20:11.631702  301384 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 20:20:12.133170  301384 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.156057ms
	I1205 20:20:12.133266  301384 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 20:20:17.133570  301384 kubeadm.go:310] [api-check] The API server is healthy after 5.002595818s
	I1205 20:20:17.146482  301384 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:20:17.166198  301384 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:20:17.198097  301384 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:20:17.198388  301384 kubeadm.go:310] [mark-control-plane] Marking the node addons-523528 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:20:17.211782  301384 kubeadm.go:310] [bootstrap-token] Using token: uojl9c.ccr1m56n9aagwo8s
	I1205 20:20:17.213476  301384 out.go:235]   - Configuring RBAC rules ...
	I1205 20:20:17.213649  301384 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:20:17.224888  301384 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:20:17.236418  301384 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:20:17.239866  301384 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:20:17.243989  301384 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:20:17.248377  301384 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:20:17.541238  301384 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:20:17.976752  301384 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 20:20:18.540875  301384 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 20:20:18.540908  301384 kubeadm.go:310] 
	I1205 20:20:18.541001  301384 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 20:20:18.541011  301384 kubeadm.go:310] 
	I1205 20:20:18.541147  301384 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 20:20:18.541161  301384 kubeadm.go:310] 
	I1205 20:20:18.541194  301384 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 20:20:18.541305  301384 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:20:18.541401  301384 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:20:18.541415  301384 kubeadm.go:310] 
	I1205 20:20:18.541487  301384 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 20:20:18.541495  301384 kubeadm.go:310] 
	I1205 20:20:18.541572  301384 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:20:18.541583  301384 kubeadm.go:310] 
	I1205 20:20:18.541666  301384 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 20:20:18.541775  301384 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:20:18.541866  301384 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:20:18.541875  301384 kubeadm.go:310] 
	I1205 20:20:18.542003  301384 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:20:18.542118  301384 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 20:20:18.542136  301384 kubeadm.go:310] 
	I1205 20:20:18.542244  301384 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token uojl9c.ccr1m56n9aagwo8s \
	I1205 20:20:18.542375  301384 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 \
	I1205 20:20:18.542418  301384 kubeadm.go:310] 	--control-plane 
	I1205 20:20:18.542434  301384 kubeadm.go:310] 
	I1205 20:20:18.542539  301384 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:20:18.542552  301384 kubeadm.go:310] 
	I1205 20:20:18.542658  301384 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token uojl9c.ccr1m56n9aagwo8s \
	I1205 20:20:18.542780  301384 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 
	I1205 20:20:18.543064  301384 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:20:18.543110  301384 cni.go:84] Creating CNI manager for ""
	I1205 20:20:18.543123  301384 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:20:18.544734  301384 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 20:20:18.546375  301384 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 20:20:18.558851  301384 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
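
The step above drops a bridge CNI config (1-k8s.conflist, 496 bytes) into /etc/cni/net.d so pods get addresses from the 10.244.0.0/16 pod CIDR chosen earlier. A Go sketch that writes a generic bridge conflist of this kind; the JSON shape below is illustrative, not necessarily minikube's exact file:

// Sketch: write a generic bridge CNI conflist like the one dropped into /etc/cni/net.d
// above. The JSON is an illustrative shape, not necessarily minikube's exact file.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}
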
	I1205 20:20:18.579242  301384 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:20:18.579314  301384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:20:18.579314  301384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-523528 minikube.k8s.io/updated_at=2024_12_05T20_20_18_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=addons-523528 minikube.k8s.io/primary=true
	I1205 20:20:18.611554  301384 ops.go:34] apiserver oom_adj: -16
	I1205 20:20:18.702225  301384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:20:19.202392  301384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:20:19.703359  301384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:20:20.202551  301384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:20:20.703209  301384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:20:21.203095  301384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:20:21.703342  301384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:20:22.202519  301384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:20:22.702733  301384 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:20:22.814614  301384 kubeadm.go:1113] duration metric: took 4.235374312s to wait for elevateKubeSystemPrivileges
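
The repeated "kubectl get sa default" calls above are a wait loop: minikube polls (roughly every 500ms in this log) until the default ServiceAccount exists before continuing with cluster setup. A Go sketch of such a poll (kubectl path and kubeconfig taken from the log; the log runs the command through sudo over SSH):

// Sketch of the wait loop above: poll "kubectl get sa default" until the default
// ServiceAccount exists (binary path and kubeconfig taken from the log).
package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	const kubectl = "/var/lib/minikube/binaries/v1.31.2/kubectl"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			log.Println("default service account is present")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for the default service account")
}
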
	I1205 20:20:22.814672  301384 kubeadm.go:394] duration metric: took 14.323182287s to StartCluster
	I1205 20:20:22.814698  301384 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:22.814852  301384 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:20:22.815376  301384 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:20:22.815614  301384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:20:22.815666  301384 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:20:22.815725  301384 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1205 20:20:22.815870  301384 addons.go:69] Setting yakd=true in profile "addons-523528"
	I1205 20:20:22.815886  301384 addons.go:69] Setting inspektor-gadget=true in profile "addons-523528"
	I1205 20:20:22.815910  301384 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-523528"
	I1205 20:20:22.815912  301384 config.go:182] Loaded profile config "addons-523528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:20:22.815927  301384 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-523528"
	I1205 20:20:22.815943  301384 addons.go:69] Setting metrics-server=true in profile "addons-523528"
	I1205 20:20:22.815955  301384 addons.go:69] Setting cloud-spanner=true in profile "addons-523528"
	I1205 20:20:22.815956  301384 addons.go:69] Setting volumesnapshots=true in profile "addons-523528"
	I1205 20:20:22.815956  301384 addons.go:69] Setting volcano=true in profile "addons-523528"
	I1205 20:20:22.815967  301384 addons.go:234] Setting addon cloud-spanner=true in "addons-523528"
	I1205 20:20:22.815969  301384 addons.go:69] Setting gcp-auth=true in profile "addons-523528"
	I1205 20:20:22.815971  301384 addons.go:234] Setting addon volumesnapshots=true in "addons-523528"
	I1205 20:20:22.815977  301384 addons.go:234] Setting addon volcano=true in "addons-523528"
	I1205 20:20:22.815988  301384 mustload.go:65] Loading cluster: addons-523528
	I1205 20:20:22.815993  301384 addons.go:69] Setting ingress-dns=true in profile "addons-523528"
	I1205 20:20:22.816003  301384 addons.go:234] Setting addon ingress-dns=true in "addons-523528"
	I1205 20:20:22.816010  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.816018  301384 addons.go:69] Setting default-storageclass=true in profile "addons-523528"
	I1205 20:20:22.816038  301384 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-523528"
	I1205 20:20:22.816048  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.815918  301384 addons.go:234] Setting addon inspektor-gadget=true in "addons-523528"
	I1205 20:20:22.816158  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.816188  301384 config.go:182] Loaded profile config "addons-523528": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:20:22.815931  301384 addons.go:69] Setting registry=true in profile "addons-523528"
	I1205 20:20:22.816245  301384 addons.go:234] Setting addon registry=true in "addons-523528"
	I1205 20:20:22.816288  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.816530  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.816552  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.816576  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.816575  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.816578  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.816613  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.816624  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.815981  301384 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-523528"
	I1205 20:20:22.816661  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.816697  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.816707  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.816019  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.816697  301384 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-523528"
	I1205 20:20:22.816933  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.817110  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.817187  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.816010  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.817327  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.817363  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.815946  301384 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-523528"
	I1205 20:20:22.817606  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.817747  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.817782  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.818008  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.818041  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.815898  301384 addons.go:234] Setting addon yakd=true in "addons-523528"
	I1205 20:20:22.818174  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.815914  301384 addons.go:69] Setting storage-provisioner=true in profile "addons-523528"
	I1205 20:20:22.818426  301384 addons.go:234] Setting addon storage-provisioner=true in "addons-523528"
	I1205 20:20:22.818462  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.818521  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.818583  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.815932  301384 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-523528"
	I1205 20:20:22.818815  301384 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-523528"
	I1205 20:20:22.818848  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.818853  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.818886  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.815947  301384 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-523528"
	I1205 20:20:22.815962  301384 addons.go:234] Setting addon metrics-server=true in "addons-523528"
	I1205 20:20:22.819464  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.819520  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.819555  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.819672  301384 out.go:177] * Verifying Kubernetes components...
	I1205 20:20:22.815971  301384 addons.go:69] Setting ingress=true in profile "addons-523528"
	I1205 20:20:22.819861  301384 addons.go:234] Setting addon ingress=true in "addons-523528"
	I1205 20:20:22.819914  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.821406  301384 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:20:22.816653  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.821553  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.839165  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39241
	I1205 20:20:22.839237  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34873
	I1205 20:20:22.839807  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.839895  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.840418  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.840439  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.840523  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.840540  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.840982  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.841631  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.841681  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.845241  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43389
	I1205 20:20:22.845318  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35685
	I1205 20:20:22.845355  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.845406  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36573
	I1205 20:20:22.845620  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.847738  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.852654  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33317
	I1205 20:20:22.854304  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40501
	I1205 20:20:22.854431  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.854484  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.855142  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.855195  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.855257  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.855270  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.855289  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.855311  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.855376  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.855380  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45975
	I1205 20:20:22.855803  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.855841  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.856146  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.856268  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.856357  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.856357  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.856416  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.856431  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.856421  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.856564  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.856584  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.856881  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.856900  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.857036  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.857064  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.857076  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.857109  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.857142  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.857261  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.857576  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.857597  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.857583  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.857663  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.857777  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.857803  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.857973  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.858024  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.858975  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.859013  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.859080  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.880075  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43821
	I1205 20:20:22.880778  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.881522  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.881557  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.882029  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.882640  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.882698  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.893034  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39861
	I1205 20:20:22.893560  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.894207  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.894241  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.894709  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.894896  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.904447  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40583
	I1205 20:20:22.906420  301384 addons.go:234] Setting addon default-storageclass=true in "addons-523528"
	I1205 20:20:22.906479  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.906949  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.906993  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.908485  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38287
	I1205 20:20:22.909441  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.909486  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.910323  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.910373  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.911668  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36137
	I1205 20:20:22.911791  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44585
	I1205 20:20:22.912028  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44197
	I1205 20:20:22.912107  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41857
	I1205 20:20:22.912186  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42413
	I1205 20:20:22.912307  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45779
	I1205 20:20:22.912311  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.912941  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.913078  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.913140  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.913161  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.913161  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.913250  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.913491  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.913510  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.914155  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.914166  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.914181  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.914185  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.914213  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.914326  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.914359  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.914380  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.914332  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.914396  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.914911  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.914929  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.914988  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.915036  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.915088  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.915341  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.915390  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.915724  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41663
	I1205 20:20:22.915861  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.916264  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38735
	I1205 20:20:22.916603  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.916619  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.916660  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.916699  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.916873  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.916912  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.917247  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.917522  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.917612  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.917674  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.917687  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.918788  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.918865  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.918878  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.918921  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.918938  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.919260  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.919399  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.919450  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.919476  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.920224  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.920294  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.920580  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.920616  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.920628  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.920752  301384 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1205 20:20:22.922080  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.922191  301384 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1205 20:20:22.922211  301384 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1205 20:20:22.922246  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:22.922928  301384 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1205 20:20:22.923256  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.923519  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.923813  301384 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-523528"
	I1205 20:20:22.923866  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:22.924319  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.924389  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.924912  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.924982  301384 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 20:20:22.924998  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1205 20:20:22.925042  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:22.926241  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.926311  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.926622  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.926704  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.928290  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:22.928321  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.928422  301384 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1205 20:20:22.928623  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:22.928854  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:22.929055  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:22.929248  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:22.929953  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.930701  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:22.930735  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.930998  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:22.931144  301384 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1205 20:20:22.931241  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:22.931460  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:22.931675  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:22.933852  301384 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1205 20:20:22.935120  301384 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1205 20:20:22.936364  301384 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1205 20:20:22.937746  301384 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1205 20:20:22.938685  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42509
	I1205 20:20:22.939274  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.939908  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.939940  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.940139  301384 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1205 20:20:22.940393  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.940604  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.940690  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I1205 20:20:22.941085  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.942096  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.942125  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.942436  301384 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1205 20:20:22.942634  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.942859  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.943208  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.943447  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:22.943461  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:22.944050  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:22.944063  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:22.944076  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:22.944086  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:22.944096  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:22.944126  301384 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1205 20:20:22.944139  301384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1205 20:20:22.944158  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:22.944332  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:22.944367  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	W1205 20:20:22.944493  301384 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1205 20:20:22.945295  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.946808  301384 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1205 20:20:22.948052  301384 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1205 20:20:22.948072  301384 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1205 20:20:22.948105  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:22.949939  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.950440  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:22.950465  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.950713  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:22.950917  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:22.951107  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:22.951296  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:22.953217  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.953671  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:22.953694  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.954029  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:22.954268  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:22.954469  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:22.954668  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:22.962077  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37371
	I1205 20:20:22.962974  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.963669  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.963692  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.964153  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.964398  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.965563  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46489
	I1205 20:20:22.965820  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35847
	I1205 20:20:22.966471  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.966758  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.966872  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.967791  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.967811  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.968143  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.968669  301384 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1205 20:20:22.968731  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.968754  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.969073  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.969092  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.969172  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40223
	I1205 20:20:22.969499  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.969751  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.969821  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41731
	I1205 20:20:22.969886  301384 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1205 20:20:22.969939  301384 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1205 20:20:22.969951  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.969965  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:22.970338  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.970985  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.971005  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.971093  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39991
	I1205 20:20:22.971169  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.971185  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.971591  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.971667  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.971946  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.972173  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.972191  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.972373  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:22.972424  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:22.972508  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.972580  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.972925  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.973493  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.973651  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.974640  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:22.974664  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.974874  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:22.975254  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:22.975351  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.975402  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.975440  301384 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1205 20:20:22.975592  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:22.975905  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:22.977401  301384 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1205 20:20:22.977458  301384 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1205 20:20:22.977594  301384 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 20:20:22.977609  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1205 20:20:22.977628  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:22.979848  301384 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1205 20:20:22.979866  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1205 20:20:22.979890  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:22.981492  301384 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 20:20:22.982435  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.983007  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:22.983032  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.983254  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:22.983601  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:22.983775  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:22.983975  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:22.984446  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.984471  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:22.984492  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.984751  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:22.984924  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:22.984959  301384 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 20:20:22.985102  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:22.985262  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:22.985572  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40211
	I1205 20:20:22.985942  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.986503  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.986522  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.986913  301384 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 20:20:22.986936  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1205 20:20:22.986953  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:22.987798  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.988017  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.989922  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.991016  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.991503  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:22.991536  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.991702  301384 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1205 20:20:22.991737  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:22.991960  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:22.992134  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:22.992291  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:22.993098  301384 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 20:20:22.993128  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1205 20:20:22.993149  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:22.995857  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39243
	I1205 20:20:22.996413  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:22.996783  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.996931  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:22.996945  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:22.997239  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:22.997259  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:22.997483  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:22.997546  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:22.997706  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:22.997764  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:22.997847  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:22.998002  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:22.999677  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:22.999923  301384 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:20:22.999940  301384 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:20:22.999957  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:23.002963  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:23.003462  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:23.003492  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:23.003712  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:23.003945  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:23.004128  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34251
	I1205 20:20:23.004376  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:23.004589  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:23.004945  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:23.005700  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:23.005721  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:23.006189  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:23.006390  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:23.006794  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36773
	I1205 20:20:23.007213  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:23.007900  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:23.007919  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:23.008252  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:23.008478  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:23.008904  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:23.010508  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:23.010913  301384 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1205 20:20:23.012705  301384 out.go:177]   - Using image docker.io/registry:2.8.3
	I1205 20:20:23.013096  301384 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:20:23.014027  301384 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1205 20:20:23.014055  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1205 20:20:23.014080  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:23.014888  301384 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:20:23.014910  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:20:23.014934  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:23.018430  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:23.018523  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:23.019046  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:23.019063  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:23.019082  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:23.019087  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:23.019236  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:23.019312  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:23.019583  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:23.019589  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:23.019755  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:23.019762  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:23.019901  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:23.019921  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:23.022886  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34803
	I1205 20:20:23.023392  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:23.023931  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:23.023954  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:23.024253  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:23.024531  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:23.026053  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43755
	I1205 20:20:23.026343  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:23.026685  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:23.027175  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:23.027197  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:23.027572  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:23.027895  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:23.028234  301384 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1205 20:20:23.029574  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:23.029621  301384 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 20:20:23.029642  301384 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 20:20:23.029663  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:23.031218  301384 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1205 20:20:23.032470  301384 out.go:177]   - Using image docker.io/busybox:stable
	I1205 20:20:23.032873  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:23.033319  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:23.033353  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:23.033463  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:23.033659  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:23.033780  301384 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 20:20:23.033803  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1205 20:20:23.033816  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:23.033825  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:23.034004  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	W1205 20:20:23.034783  301384 sshutil.go:64] dial failure (will retry): ssh: handshake failed: read tcp 192.168.39.1:48478->192.168.39.217:22: read: connection reset by peer
	I1205 20:20:23.034818  301384 retry.go:31] will retry after 231.002032ms: ssh: handshake failed: read tcp 192.168.39.1:48478->192.168.39.217:22: read: connection reset by peer
	I1205 20:20:23.036751  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:23.037167  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:23.037197  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:23.037364  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:23.037534  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:23.037701  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:23.037837  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:23.265035  301384 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1205 20:20:23.265064  301384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1205 20:20:23.333632  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1205 20:20:23.336562  301384 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1205 20:20:23.336599  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1205 20:20:23.370849  301384 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1205 20:20:23.370883  301384 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1205 20:20:23.377223  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1205 20:20:23.408318  301384 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1205 20:20:23.408369  301384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1205 20:20:23.409630  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1205 20:20:23.414987  301384 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:20:23.415052  301384 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 20:20:23.486741  301384 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1205 20:20:23.486780  301384 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1205 20:20:23.489959  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1205 20:20:23.492162  301384 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1205 20:20:23.492192  301384 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1205 20:20:23.495117  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1205 20:20:23.497936  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1205 20:20:23.534803  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:20:23.550646  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1205 20:20:23.577245  301384 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1205 20:20:23.577287  301384 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1205 20:20:23.587264  301384 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1205 20:20:23.587306  301384 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1205 20:20:23.592626  301384 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1205 20:20:23.592656  301384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1205 20:20:23.609863  301384 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 20:20:23.609894  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1205 20:20:23.612246  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:20:23.679258  301384 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1205 20:20:23.679289  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1205 20:20:23.742520  301384 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1205 20:20:23.742553  301384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1205 20:20:23.752818  301384 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1205 20:20:23.752852  301384 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1205 20:20:23.763910  301384 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 20:20:23.763944  301384 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 20:20:23.801895  301384 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1205 20:20:23.801944  301384 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1205 20:20:23.896594  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1205 20:20:23.925700  301384 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1205 20:20:23.925741  301384 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1205 20:20:23.995700  301384 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1205 20:20:23.995743  301384 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1205 20:20:24.101367  301384 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:20:24.101400  301384 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 20:20:24.164927  301384 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 20:20:24.164967  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1205 20:20:24.173745  301384 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1205 20:20:24.173785  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1205 20:20:24.192180  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 20:20:24.228446  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1205 20:20:24.411356  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 20:20:24.416054  301384 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1205 20:20:24.416087  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1205 20:20:24.853520  301384 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1205 20:20:24.853549  301384 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1205 20:20:25.047231  301384 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1205 20:20:25.047264  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1205 20:20:25.410167  301384 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1205 20:20:25.410199  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1205 20:20:25.653124  301384 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 20:20:25.653168  301384 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1205 20:20:26.040044  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1205 20:20:26.580253  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.2465758s)
	I1205 20:20:26.580268  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (3.203005071s)
	I1205 20:20:26.580318  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:26.580331  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:26.580344  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:26.580361  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:26.580665  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:26.580681  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:26.580785  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:26.580720  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:26.580836  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:26.580847  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:26.580861  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:26.580849  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:26.580903  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:26.580683  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:26.581082  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:26.581093  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:26.582711  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:26.582724  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:26.582745  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:27.658891  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (4.249208421s)
	I1205 20:20:27.658931  301384 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (4.243899975s)
	I1205 20:20:27.658970  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:27.658985  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:27.659011  301384 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (4.243930624s)
	I1205 20:20:27.659044  301384 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1205 20:20:27.659416  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:27.659449  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:27.659458  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:27.659467  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:27.659479  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:27.659799  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:27.659815  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:27.660098  301384 node_ready.go:35] waiting up to 6m0s for node "addons-523528" to be "Ready" ...
	I1205 20:20:27.667124  301384 node_ready.go:49] node "addons-523528" has status "Ready":"True"
	I1205 20:20:27.667154  301384 node_ready.go:38] duration metric: took 7.030782ms for node "addons-523528" to be "Ready" ...
	I1205 20:20:27.667168  301384 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:20:27.747733  301384 pod_ready.go:79] waiting up to 6m0s for pod "amd-gpu-device-plugin-lqd4k" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:28.172957  301384 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-523528" context rescaled to 1 replicas
	I1205 20:20:29.938833  301384 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1205 20:20:29.938880  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:29.942562  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:29.942666  301384 pod_ready.go:103] pod "amd-gpu-device-plugin-lqd4k" in "kube-system" namespace has status "Ready":"False"
	I1205 20:20:29.943037  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:29.943066  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:29.943285  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:29.943502  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:29.943676  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:29.943841  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:30.492289  301384 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1205 20:20:30.588032  301384 addons.go:234] Setting addon gcp-auth=true in "addons-523528"
	I1205 20:20:30.588107  301384 host.go:66] Checking if "addons-523528" exists ...
	I1205 20:20:30.588525  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:30.588563  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:30.606388  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33915
	I1205 20:20:30.606974  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:30.607475  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:30.607506  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:30.607873  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:30.608365  301384 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:20:30.608397  301384 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:20:30.625629  301384 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42801
	I1205 20:20:30.626322  301384 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:20:30.626921  301384 main.go:141] libmachine: Using API Version  1
	I1205 20:20:30.626945  301384 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:20:30.627428  301384 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:20:30.627636  301384 main.go:141] libmachine: (addons-523528) Calling .GetState
	I1205 20:20:30.629463  301384 main.go:141] libmachine: (addons-523528) Calling .DriverName
	I1205 20:20:30.629730  301384 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1205 20:20:30.629756  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHHostname
	I1205 20:20:30.632712  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:30.633148  301384 main.go:141] libmachine: (addons-523528) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:94:d3:2c", ip: ""} in network mk-addons-523528: {Iface:virbr1 ExpiryTime:2024-12-05 21:19:48 +0000 UTC Type:0 Mac:52:54:00:94:d3:2c Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-523528 Clientid:01:52:54:00:94:d3:2c}
	I1205 20:20:30.633185  301384 main.go:141] libmachine: (addons-523528) DBG | domain addons-523528 has defined IP address 192.168.39.217 and MAC address 52:54:00:94:d3:2c in network mk-addons-523528
	I1205 20:20:30.633405  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHPort
	I1205 20:20:30.633586  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHKeyPath
	I1205 20:20:30.633736  301384 main.go:141] libmachine: (addons-523528) Calling .GetSSHUsername
	I1205 20:20:30.633892  301384 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/addons-523528/id_rsa Username:docker}
	I1205 20:20:31.748453  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.258444545s)
	I1205 20:20:31.748488  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (8.253338676s)
	I1205 20:20:31.748510  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.748522  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.748533  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.748547  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.748590  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (8.250619477s)
	I1205 20:20:31.748624  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.213792079s)
	I1205 20:20:31.748632  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.748645  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.748662  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.197989723s)
	I1205 20:20:31.748688  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.748705  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.748650  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.748723  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.748734  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (7.852113907s)
	I1205 20:20:31.748749  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.748704  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.136436798s)
	I1205 20:20:31.748760  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.748773  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.748781  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.748856  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.55663326s)
	W1205 20:20:31.748907  301384 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 20:20:31.748933  301384 retry.go:31] will retry after 133.460987ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1205 20:20:31.749006  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (7.520519539s)
	I1205 20:20:31.749037  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.749049  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.749151  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (7.337757788s)
	I1205 20:20:31.749174  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.749184  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.749345  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.749368  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.749381  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.749389  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.749493  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.749516  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.749522  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.749529  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.749535  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.749618  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.749634  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.749662  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.749667  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.749674  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.749680  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.750001  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.750016  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.750026  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.750034  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.750046  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.750060  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.750080  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.750087  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.750100  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.750106  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.750107  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.750114  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.750117  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.750120  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.750155  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.750170  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.750188  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.750193  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.750200  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.750205  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.750338  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.750366  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.750373  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.750380  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.750386  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.750472  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.750473  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.750484  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.750493  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.750496  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.750499  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.750503  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.750524  301384 addons.go:475] Verifying addon ingress=true in "addons-523528"
	I1205 20:20:31.750014  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.751444  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.751447  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.751466  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.751472  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.751495  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.751502  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.753125  301384 out.go:177] * Verifying ingress addon...
	I1205 20:20:31.753562  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.753596  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.753603  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.753716  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.753727  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.753735  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.753742  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.753837  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.753865  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.753872  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.753925  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.753939  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.754901  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.754935  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.754943  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.754953  301384 addons.go:475] Verifying addon metrics-server=true in "addons-523528"
	I1205 20:20:31.755470  301384 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1205 20:20:31.753952  301384 addons.go:475] Verifying addon registry=true in "addons-523528"
	I1205 20:20:31.756243  301384 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-523528 service yakd-dashboard -n yakd-dashboard
	
	I1205 20:20:31.757231  301384 out.go:177] * Verifying registry addon...
	I1205 20:20:31.759795  301384 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1205 20:20:31.770545  301384 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1205 20:20:31.770572  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:31.783747  301384 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1205 20:20:31.783774  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:31.784823  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.784841  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.785175  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.785194  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	W1205 20:20:31.785295  301384 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1205 20:20:31.788514  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:31.788547  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:31.788844  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:31.788894  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:31.788913  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:31.883226  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1205 20:20:32.268112  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:32.278150  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:32.284379  301384 pod_ready.go:103] pod "amd-gpu-device-plugin-lqd4k" in "kube-system" namespace has status "Ready":"False"
	I1205 20:20:32.861233  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:32.863528  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:33.279441  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:33.279687  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:33.519151  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (7.479017617s)
	I1205 20:20:33.519212  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:33.519230  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:33.519255  301384 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.889494058s)
	I1205 20:20:33.519665  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:33.519690  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:33.519700  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:33.519709  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:33.519711  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:33.519995  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:33.520030  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:33.520041  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:33.520073  301384 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-523528"
	I1205 20:20:33.520905  301384 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1205 20:20:33.521757  301384 out.go:177] * Verifying csi-hostpath-driver addon...
	I1205 20:20:33.523464  301384 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1205 20:20:33.524313  301384 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1205 20:20:33.524768  301384 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1205 20:20:33.524786  301384 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1205 20:20:33.548901  301384 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1205 20:20:33.548937  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:33.616173  301384 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1205 20:20:33.616204  301384 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1205 20:20:33.721687  301384 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 20:20:33.721714  301384 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1205 20:20:33.767230  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:33.769575  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:33.810018  301384 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1205 20:20:34.014481  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.131162493s)
	I1205 20:20:34.014565  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:34.014586  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:34.014928  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:34.014951  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:34.014963  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:34.014972  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:34.014985  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:34.015266  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:34.015296  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:34.030753  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:34.254434  301384 pod_ready.go:93] pod "amd-gpu-device-plugin-lqd4k" in "kube-system" namespace has status "Ready":"True"
	I1205 20:20:34.254463  301384 pod_ready.go:82] duration metric: took 6.50668149s for pod "amd-gpu-device-plugin-lqd4k" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:34.254476  301384 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6zvjr" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:34.262618  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:34.265128  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:34.530430  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:34.781930  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:34.782123  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:34.800916  301384 pod_ready.go:98] pod "coredns-7c65d6cfc9-6zvjr" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-05 20:20:34 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-05 20:20:23 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-05 20:20:23 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-05 20:20:23 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-05 20:20:22 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.217 HostIPs:[{IP:192.168.39.217}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-12-05 20:20:23 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-12-05 20:20:28 +0000 UTC,FinishedAt:2024-12-05 20:20:34 +0000 UTC,ContainerID:cri-o://9d674da8c582831cce5163df7e5c092b415123aafe7c624d3fa3ccec406cc83a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://9d674da8c582831cce5163df7e5c092b415123aafe7c624d3fa3ccec406cc83a Started:0xc002b0e1f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002a56920} {Name:kube-api-access-lhh9d MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002a56930}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1205 20:20:34.800950  301384 pod_ready.go:82] duration metric: took 546.466009ms for pod "coredns-7c65d6cfc9-6zvjr" in "kube-system" namespace to be "Ready" ...
	E1205 20:20:34.800966  301384 pod_ready.go:67] WaitExtra: waitPodCondition: pod "coredns-7c65d6cfc9-6zvjr" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:PodReadyToStartContainers Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-05 20:20:34 +0000 UTC Reason: Message:} {Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-05 20:20:23 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-05 20:20:23 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-05 20:20:23 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-12-05 20:20:22 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.217 HostIPs:[{IP:192.168.39.217}] PodIP:10.244.0.2 PodIPs:[{IP:10.244.0.2}] StartTime:2024-12-05 20:20:23 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-12-05 20:20:28 +0000 UTC,FinishedAt:2024-12-05 20:20:34 +0000 UTC,ContainerID:cri-o://9d674da8c582831cce5163df7e5c092b415123aafe7c624d3fa3ccec406cc83a,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.11.3 ImageID:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6 ContainerID:cri-o://9d674da8c582831cce5163df7e5c092b415123aafe7c624d3fa3ccec406cc83a Started:0xc002b0e1f0 AllocatedResources:map[] Resources:nil VolumeMounts:[{Name:config-volume MountPath:/etc/coredns ReadOnly:true RecursiveReadOnly:0xc002a56920} {Name:kube-api-access-lhh9d MountPath:/var/run/secrets/kubernetes.io/serviceaccount ReadOnly:true RecursiveReadOnly:0xc002a56930}] User:nil AllocatedResourcesStatus:[]}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I1205 20:20:34.800979  301384 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-gdmlk" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:34.828197  301384 pod_ready.go:93] pod "coredns-7c65d6cfc9-gdmlk" in "kube-system" namespace has status "Ready":"True"
	I1205 20:20:34.828239  301384 pod_ready.go:82] duration metric: took 27.249622ms for pod "coredns-7c65d6cfc9-gdmlk" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:34.828258  301384 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-523528" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:34.845233  301384 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.035154079s)
	I1205 20:20:34.845303  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:34.845346  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:34.845706  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:34.845730  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:34.845745  301384 main.go:141] libmachine: Making call to close driver server
	I1205 20:20:34.845754  301384 main.go:141] libmachine: (addons-523528) Calling .Close
	I1205 20:20:34.845754  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:34.846046  301384 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:20:34.846111  301384 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:20:34.846139  301384 main.go:141] libmachine: (addons-523528) DBG | Closing plugin on server side
	I1205 20:20:34.848495  301384 addons.go:475] Verifying addon gcp-auth=true in "addons-523528"
	I1205 20:20:34.850422  301384 out.go:177] * Verifying gcp-auth addon...
	I1205 20:20:34.852543  301384 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1205 20:20:34.855921  301384 pod_ready.go:93] pod "etcd-addons-523528" in "kube-system" namespace has status "Ready":"True"
	I1205 20:20:34.855958  301384 pod_ready.go:82] duration metric: took 27.691377ms for pod "etcd-addons-523528" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:34.855975  301384 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-523528" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:34.870708  301384 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1205 20:20:34.870742  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:34.898924  301384 pod_ready.go:93] pod "kube-apiserver-addons-523528" in "kube-system" namespace has status "Ready":"True"
	I1205 20:20:34.898969  301384 pod_ready.go:82] duration metric: took 42.984063ms for pod "kube-apiserver-addons-523528" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:34.898988  301384 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-523528" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:35.030372  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:35.053182  301384 pod_ready.go:93] pod "kube-controller-manager-addons-523528" in "kube-system" namespace has status "Ready":"True"
	I1205 20:20:35.053207  301384 pod_ready.go:82] duration metric: took 154.209666ms for pod "kube-controller-manager-addons-523528" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:35.053221  301384 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-8xsvp" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:35.261705  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:35.264347  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:35.360120  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:35.451970  301384 pod_ready.go:93] pod "kube-proxy-8xsvp" in "kube-system" namespace has status "Ready":"True"
	I1205 20:20:35.451999  301384 pod_ready.go:82] duration metric: took 398.771201ms for pod "kube-proxy-8xsvp" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:35.452013  301384 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-523528" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:35.529979  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:35.759884  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:35.763522  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:35.852104  301384 pod_ready.go:93] pod "kube-scheduler-addons-523528" in "kube-system" namespace has status "Ready":"True"
	I1205 20:20:35.852138  301384 pod_ready.go:82] duration metric: took 400.115802ms for pod "kube-scheduler-addons-523528" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:35.852152  301384 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-sglbw" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:35.855297  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:36.029561  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:36.260409  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:36.264734  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:36.356965  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:36.528640  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:36.761057  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:36.763513  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:36.857338  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:37.030059  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:37.260256  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:37.263328  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:37.356684  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:37.528487  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:37.760263  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:37.763062  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:37.855818  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:37.858386  301384 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sglbw" in "kube-system" namespace has status "Ready":"False"
	I1205 20:20:38.029924  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:38.260323  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:38.263073  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:38.357689  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:38.529154  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:38.762136  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:38.763959  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:38.856670  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:39.032530  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:39.259806  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:39.263484  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:39.357363  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:39.530057  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:39.760297  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:39.763227  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:39.856740  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:39.859837  301384 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sglbw" in "kube-system" namespace has status "Ready":"False"
	I1205 20:20:40.028786  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:40.259825  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:40.262657  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:40.357076  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:40.529965  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:40.761640  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:40.763310  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:40.866993  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:41.029353  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:41.270034  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:41.270199  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:41.357604  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:41.529306  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:41.760554  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:41.763820  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:41.858383  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:42.028357  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:42.259389  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:42.263572  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:42.361242  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:42.367817  301384 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sglbw" in "kube-system" namespace has status "Ready":"False"
	I1205 20:20:42.528993  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:42.761582  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:42.769290  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:42.856603  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:43.030926  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:43.261784  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:43.264335  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:43.357395  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:43.530218  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:43.761629  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:43.764481  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:43.858158  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:44.029705  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:44.259690  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:44.262996  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:44.356450  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:44.528807  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:44.760384  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:44.763632  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:44.856765  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:44.858659  301384 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sglbw" in "kube-system" namespace has status "Ready":"False"
	I1205 20:20:45.028998  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:45.261107  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:45.263102  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:45.357564  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:45.528997  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:45.765819  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:45.766106  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:45.856267  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:46.030119  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:46.260808  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:46.264086  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:46.356138  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:46.529743  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:46.760677  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:46.763344  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:46.858044  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:46.859794  301384 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sglbw" in "kube-system" namespace has status "Ready":"False"
	I1205 20:20:47.265148  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:47.265235  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:47.270246  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:47.370103  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:47.530949  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:47.764802  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:47.766657  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:47.859913  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:48.029049  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:48.259696  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:48.263791  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:48.356922  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:48.757790  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:48.759868  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:48.763988  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:48.856282  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:49.028784  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:49.260517  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:49.263650  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:49.362308  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:49.364680  301384 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sglbw" in "kube-system" namespace has status "Ready":"False"
	I1205 20:20:49.530645  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:49.760576  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:49.763894  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:49.858976  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:50.030000  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:50.261462  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:50.264696  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:50.356455  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:50.529095  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:50.809464  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:50.810412  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:50.856208  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:51.029176  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:51.259979  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:51.263026  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:51.357017  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:51.529130  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:51.759888  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:51.762886  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:51.856819  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:51.859926  301384 pod_ready.go:103] pod "nvidia-device-plugin-daemonset-sglbw" in "kube-system" namespace has status "Ready":"False"
	I1205 20:20:52.028366  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:52.669185  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:52.669734  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:52.670510  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:52.673426  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:52.760082  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:52.763258  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:52.859929  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:53.031079  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:53.260190  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:53.262892  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:53.359838  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:53.529468  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:53.760831  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:53.763099  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:53.856238  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:53.858867  301384 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-sglbw" in "kube-system" namespace has status "Ready":"True"
	I1205 20:20:53.858894  301384 pod_ready.go:82] duration metric: took 18.006733716s for pod "nvidia-device-plugin-daemonset-sglbw" in "kube-system" namespace to be "Ready" ...
	I1205 20:20:53.858913  301384 pod_ready.go:39] duration metric: took 26.191731772s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:20:53.858934  301384 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:20:53.858996  301384 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:20:53.876428  301384 api_server.go:72] duration metric: took 31.060707865s to wait for apiserver process to appear ...
	I1205 20:20:53.876459  301384 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:20:53.876486  301384 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I1205 20:20:53.882114  301384 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I1205 20:20:53.883268  301384 api_server.go:141] control plane version: v1.31.2
	I1205 20:20:53.883317  301384 api_server.go:131] duration metric: took 6.851237ms to wait for apiserver health ...
	I1205 20:20:53.883327  301384 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:20:53.891447  301384 system_pods.go:59] 18 kube-system pods found
	I1205 20:20:53.891485  301384 system_pods.go:61] "amd-gpu-device-plugin-lqd4k" [f46b3c00-0342-4d3b-9da8-6ee596f1cf6d] Running
	I1205 20:20:53.891490  301384 system_pods.go:61] "coredns-7c65d6cfc9-gdmlk" [35f95488-64f0-48f3-ab99-31fd21a11d75] Running
	I1205 20:20:53.891497  301384 system_pods.go:61] "csi-hostpath-attacher-0" [84675000-bf37-46d9-ab6a-6e1cb4781e25] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 20:20:53.891503  301384 system_pods.go:61] "csi-hostpath-resizer-0" [a18ef2ee-6053-4bee-a9e0-8ed83cc2e964] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1205 20:20:53.891515  301384 system_pods.go:61] "csi-hostpathplugin-nr8m4" [a3e89b1e-6a83-4dd9-a487-29437e9207a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 20:20:53.891521  301384 system_pods.go:61] "etcd-addons-523528" [f39a6fce-120f-4e27-9c83-9449df5e8bb2] Running
	I1205 20:20:53.891525  301384 system_pods.go:61] "kube-apiserver-addons-523528" [fca0c214-9dd9-4258-8c17-24e277f7a7ea] Running
	I1205 20:20:53.891528  301384 system_pods.go:61] "kube-controller-manager-addons-523528" [78d2b799-2b90-4804-a492-db458a02fc3f] Running
	I1205 20:20:53.891532  301384 system_pods.go:61] "kube-ingress-dns-minikube" [bfef9808-b7b4-4319-ad26-b776fb27fc60] Running
	I1205 20:20:53.891536  301384 system_pods.go:61] "kube-proxy-8xsvp" [f3eb3bb7-a01c-4223-8a16-2a0ebe48726e] Running
	I1205 20:20:53.891540  301384 system_pods.go:61] "kube-scheduler-addons-523528" [8ba95eac-83e9-4a8b-bc3a-73ec04e33a78] Running
	I1205 20:20:53.891545  301384 system_pods.go:61] "metrics-server-84c5f94fbc-9sfj2" [4fb71d12-56fb-4616-bee4-29859c9f2a05] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:20:53.891548  301384 system_pods.go:61] "nvidia-device-plugin-daemonset-sglbw" [0360c661-774c-46ac-a3df-fd26eb882587] Running
	I1205 20:20:53.891553  301384 system_pods.go:61] "registry-66c9cd494c-6p9nr" [911c9fc9-5e67-4b4f-846e-2ad1cdc944c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 20:20:53.891563  301384 system_pods.go:61] "registry-proxy-zpfrw" [d071b7d1-01c6-4449-98a5-0e329f71db8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 20:20:53.891569  301384 system_pods.go:61] "snapshot-controller-56fcc65765-6gsm9" [92fa94c9-4e18-4cf8-82d5-9302d0d0ec4d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 20:20:53.891577  301384 system_pods.go:61] "snapshot-controller-56fcc65765-jpbk8" [aa2eda09-0153-4349-8efe-c65537dbe04d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 20:20:53.891581  301384 system_pods.go:61] "storage-provisioner" [6f2b9e6f-6263-4a11-b2bf-725c25ab3f00] Running
	I1205 20:20:53.891590  301384 system_pods.go:74] duration metric: took 8.257905ms to wait for pod list to return data ...
	I1205 20:20:53.891601  301384 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:20:53.894395  301384 default_sa.go:45] found service account: "default"
	I1205 20:20:53.894428  301384 default_sa.go:55] duration metric: took 2.816032ms for default service account to be created ...
	I1205 20:20:53.894441  301384 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:20:53.902455  301384 system_pods.go:86] 18 kube-system pods found
	I1205 20:20:53.902493  301384 system_pods.go:89] "amd-gpu-device-plugin-lqd4k" [f46b3c00-0342-4d3b-9da8-6ee596f1cf6d] Running
	I1205 20:20:53.902502  301384 system_pods.go:89] "coredns-7c65d6cfc9-gdmlk" [35f95488-64f0-48f3-ab99-31fd21a11d75] Running
	I1205 20:20:53.902511  301384 system_pods.go:89] "csi-hostpath-attacher-0" [84675000-bf37-46d9-ab6a-6e1cb4781e25] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1205 20:20:53.902518  301384 system_pods.go:89] "csi-hostpath-resizer-0" [a18ef2ee-6053-4bee-a9e0-8ed83cc2e964] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1205 20:20:53.902525  301384 system_pods.go:89] "csi-hostpathplugin-nr8m4" [a3e89b1e-6a83-4dd9-a487-29437e9207a2] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1205 20:20:53.902530  301384 system_pods.go:89] "etcd-addons-523528" [f39a6fce-120f-4e27-9c83-9449df5e8bb2] Running
	I1205 20:20:53.902535  301384 system_pods.go:89] "kube-apiserver-addons-523528" [fca0c214-9dd9-4258-8c17-24e277f7a7ea] Running
	I1205 20:20:53.902539  301384 system_pods.go:89] "kube-controller-manager-addons-523528" [78d2b799-2b90-4804-a492-db458a02fc3f] Running
	I1205 20:20:53.902549  301384 system_pods.go:89] "kube-ingress-dns-minikube" [bfef9808-b7b4-4319-ad26-b776fb27fc60] Running
	I1205 20:20:53.902553  301384 system_pods.go:89] "kube-proxy-8xsvp" [f3eb3bb7-a01c-4223-8a16-2a0ebe48726e] Running
	I1205 20:20:53.902557  301384 system_pods.go:89] "kube-scheduler-addons-523528" [8ba95eac-83e9-4a8b-bc3a-73ec04e33a78] Running
	I1205 20:20:53.902562  301384 system_pods.go:89] "metrics-server-84c5f94fbc-9sfj2" [4fb71d12-56fb-4616-bee4-29859c9f2a05] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 20:20:53.902567  301384 system_pods.go:89] "nvidia-device-plugin-daemonset-sglbw" [0360c661-774c-46ac-a3df-fd26eb882587] Running
	I1205 20:20:53.902572  301384 system_pods.go:89] "registry-66c9cd494c-6p9nr" [911c9fc9-5e67-4b4f-846e-2ad1cdc944c3] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1205 20:20:53.902577  301384 system_pods.go:89] "registry-proxy-zpfrw" [d071b7d1-01c6-4449-98a5-0e329f71db8e] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1205 20:20:53.902583  301384 system_pods.go:89] "snapshot-controller-56fcc65765-6gsm9" [92fa94c9-4e18-4cf8-82d5-9302d0d0ec4d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 20:20:53.902589  301384 system_pods.go:89] "snapshot-controller-56fcc65765-jpbk8" [aa2eda09-0153-4349-8efe-c65537dbe04d] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1205 20:20:53.902593  301384 system_pods.go:89] "storage-provisioner" [6f2b9e6f-6263-4a11-b2bf-725c25ab3f00] Running
	I1205 20:20:53.902602  301384 system_pods.go:126] duration metric: took 8.154884ms to wait for k8s-apps to be running ...
	I1205 20:20:53.902613  301384 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:20:53.902663  301384 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:20:53.918116  301384 system_svc.go:56] duration metric: took 15.490461ms WaitForService to wait for kubelet
	I1205 20:20:53.918150  301384 kubeadm.go:582] duration metric: took 31.102437332s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:20:53.918172  301384 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:20:53.921623  301384 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:20:53.921681  301384 node_conditions.go:123] node cpu capacity is 2
	I1205 20:20:53.921700  301384 node_conditions.go:105] duration metric: took 3.522361ms to run NodePressure ...
	I1205 20:20:53.921718  301384 start.go:241] waiting for startup goroutines ...
	I1205 20:20:54.029644  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:54.260164  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:54.263061  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:54.355807  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:54.529526  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:54.760060  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:54.763186  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:54.856207  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:55.029331  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:55.259480  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:55.264036  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:55.357437  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:55.529543  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:55.759924  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:55.762657  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:55.856457  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:56.030131  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:56.260146  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:56.263422  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:56.356287  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:56.529047  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:56.759708  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:56.762874  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:56.856782  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:57.029562  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:57.259888  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:57.263835  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:57.358414  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:57.530883  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:57.762236  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:57.764838  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:57.855988  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:58.029412  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:58.260710  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:58.264157  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:58.356624  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:58.529983  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:58.768139  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:58.768313  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:58.856647  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:59.029699  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:59.260905  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:59.263598  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:59.360181  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:20:59.529437  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:20:59.761065  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:20:59.763389  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:20:59.857314  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:00.029770  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:00.260470  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:00.263697  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:00.356605  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:00.529444  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:00.759979  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:00.763229  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:00.856099  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:01.029561  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:01.260400  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:01.263228  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:01.360733  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:01.528709  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:01.761950  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:01.764628  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:01.857430  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:02.030671  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:02.259928  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:02.263273  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:02.356232  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:02.529706  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:02.760962  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:02.764091  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:02.856511  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:03.029702  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:03.260604  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:03.263106  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:03.356876  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:03.529981  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:03.760484  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:03.763922  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:03.857201  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:04.029850  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:04.259913  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:04.263355  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:04.356250  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:04.529700  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:04.761088  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:04.763286  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:04.856727  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:05.028943  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:05.262932  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:05.264793  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:05.356739  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:05.529249  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:05.760609  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:05.764026  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:05.857324  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:06.033205  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:06.259181  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:06.263621  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:06.357191  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:06.529552  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:06.760458  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:06.763571  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:06.856538  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:07.030517  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:07.261446  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:07.263671  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:07.357744  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:07.529562  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:07.760151  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:07.763185  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:07.856561  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:08.029202  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:08.259760  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:08.263166  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:08.356242  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:08.534029  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:08.761228  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:08.763264  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:08.856874  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:09.029133  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:09.261019  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:09.264640  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:09.356411  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:09.530660  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:09.760286  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:09.763275  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:09.856332  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:10.029588  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:10.261501  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:10.264665  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:10.358045  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:10.528893  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:10.761357  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:10.763023  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:10.855875  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:11.030212  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:11.259553  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:11.263097  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:11.356057  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:11.529422  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:11.760425  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:11.763811  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:11.856560  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:12.030437  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:12.260390  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:12.263965  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:12.356707  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:12.529028  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:12.759528  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:12.764143  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:12.856698  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:13.029405  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:13.260505  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:13.263736  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:13.356584  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:13.709602  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:13.761367  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:13.765119  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:13.856941  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:14.029549  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:14.260350  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:14.263692  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:14.356471  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:14.529750  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:14.760298  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:14.763017  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1205 20:21:14.855672  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:15.054838  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:15.261839  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:15.264456  301384 kapi.go:107] duration metric: took 43.504658099s to wait for kubernetes.io/minikube-addons=registry ...
	I1205 20:21:15.356144  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:15.529331  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:15.759913  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:15.860349  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:16.030574  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:16.260938  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:16.357545  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:16.530846  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:16.760652  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:16.856373  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:17.030373  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:17.260361  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:17.355938  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:17.531662  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:17.760985  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:17.856033  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:18.029760  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:18.260331  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:18.355561  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:18.529204  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:18.759738  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:18.855767  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:19.029690  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:19.259345  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:19.356812  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:19.529329  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:20.144827  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:20.145458  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:20.145867  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:20.260407  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:20.356816  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:20.529118  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:20.760628  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:20.864461  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:21.029782  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:21.260445  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:21.363347  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:21.529991  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:21.760086  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:21.856874  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:22.029172  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:22.261687  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:22.358570  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:22.528751  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:22.761826  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:22.857545  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:23.032518  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:23.260623  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:23.359845  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:23.534466  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:23.759411  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:23.855676  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:24.028302  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:24.260934  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:24.356477  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:24.530941  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:24.761399  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:24.856614  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:25.030215  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:25.260231  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:25.356752  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:25.529233  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:25.765006  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:25.857337  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:26.033018  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:26.261210  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:26.356811  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:26.532496  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:26.761894  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:26.856627  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:27.029067  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:27.260555  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:27.357239  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:27.530586  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:27.760564  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:27.856761  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:28.033294  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:28.260468  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:28.361650  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:28.532653  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:28.760231  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:28.857048  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:29.029104  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:29.260114  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:29.357028  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:29.530513  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:29.760841  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:29.856132  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:30.029448  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:30.260004  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:30.355775  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:30.528768  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:30.760219  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:30.857044  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:31.033122  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:31.262953  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:31.356540  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:31.530161  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:31.759933  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:31.856331  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:32.053927  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:32.264574  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:32.363823  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:32.529308  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:32.760430  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:32.855606  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:33.029752  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:33.259470  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:33.356738  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:33.530883  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:33.760996  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:33.857040  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:34.029634  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:34.263030  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:34.357009  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:34.529408  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:34.759559  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:34.855690  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:35.028596  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:35.259944  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:35.356638  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:35.528770  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:35.760288  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:35.857327  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:36.032569  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:36.260070  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:36.356931  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:36.528839  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:36.761051  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:36.856803  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:37.028842  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1205 20:21:37.260565  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:37.356671  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:37.534654  301384 kapi.go:107] duration metric: took 1m4.010332572s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1205 20:21:37.761421  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:37.857166  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:38.262785  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:38.356328  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:38.760651  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:38.857475  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:39.260155  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:39.367484  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:39.908404  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:39.908725  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:40.261219  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:40.360465  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:40.760501  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:40.856149  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:41.260334  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:41.356085  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:41.760983  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:41.857127  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:42.260903  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:42.356967  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:42.761358  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:42.860826  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:43.260694  301384 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1205 20:21:43.356224  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:43.762746  301384 kapi.go:107] duration metric: took 1m12.007274266s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1205 20:21:43.859528  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:44.359035  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:44.856330  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:45.356709  301384 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1205 20:21:45.856711  301384 kapi.go:107] duration metric: took 1m11.004161155s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1205 20:21:45.858506  301384 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-523528 cluster.
	I1205 20:21:45.860004  301384 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1205 20:21:45.861351  301384 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1205 20:21:45.862757  301384 out.go:177] * Enabled addons: ingress-dns, amd-gpu-device-plugin, inspektor-gadget, cloud-spanner, storage-provisioner, nvidia-device-plugin, metrics-server, yakd, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I1205 20:21:45.864013  301384 addons.go:510] duration metric: took 1m23.048296201s for enable addons: enabled=[ingress-dns amd-gpu-device-plugin inspektor-gadget cloud-spanner storage-provisioner nvidia-device-plugin metrics-server yakd default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I1205 20:21:45.864087  301384 start.go:246] waiting for cluster config update ...
	I1205 20:21:45.864113  301384 start.go:255] writing updated cluster config ...
	I1205 20:21:45.864420  301384 ssh_runner.go:195] Run: rm -f paused
	I1205 20:21:45.919061  301384 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:21:45.920777  301384 out.go:177] * Done! kubectl is now configured to use "addons-523528" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.082426987Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=91b5253b-edcf-493c-9e59-e0bc2f11faee name=/runtime.v1.RuntimeService/Version
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.083890689Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=37a813ed-7279-4891-9b74-88e248ab9cb8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.085073290Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430450085047065,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=37a813ed-7279-4891-9b74-88e248ab9cb8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.085935192Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3b4c55c4-abba-4cb6-a3de-047d24f26ae5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.086095537Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3b4c55c4-abba-4cb6-a3de-047d24f26ae5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.086440971Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6127ed45a019cfd3b8b7a20b646ca724a09ec646bf95388f513ae013f8a834ee,PodSandboxId:ec5227248f8fedab31ae3a67c7eeb80de7584caa623a81a5c7bbcbf9dad3aca7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733430321462857149,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-h8h8x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f9eaf75f-7393-4e0d-82ea-086ee7529f08,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:675ee915a4bfc85afc963e8156d0a4d068ff6887960ecb7df9d05b344b10e750,PodSandboxId:b3375052980f12fe1634dd2f0354c08eb05e9cac559d0113d5460f627c470aa5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733430181181630226,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 37992d0a-3d60-4ceb-a462-c92a92f63360,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:567826f740b5ee62d5188f899459d74f97aec3d530b841fdbf3b6f2ccaa6324b,PodSandboxId:59491a34c5cc8d4b925acf4534c7239073f623ef0952d98f92c645c5e9712ee8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733430111877146952,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cfa655ea-794b-4c47-b
060-9aaf959e839a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9ebe589c0b2854fa84fb65906f0deb260a216e5b28a533d8f825f0554830d5,PodSandboxId:7401d4bb9b521622f4455697db4fdfdc24612775773b96777d0b1a1db8398431,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733430070130758768,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-9sfj2,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 4fb71d12-56fb-4616-bee4-29859c9f2a05,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebc8ffb53db3218d4d39183dbd41a16531e8865cd53d828bc55a9a65aa457c2,PodSandboxId:cf2b0a84bd0b70895b5138e9b1cfbd920a1175e663f7639c297168db716f3aab,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733430068410730042,Labels:map[string]strin
g{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-9w5dg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 67bc1df5-2b14-4874-ba56-8cf3a599f3d1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f79e153973b681dc2d9cb74d8bc6cb02cb39acb5004c8eb838a9744dba01edb4,PodSandboxId:743e9f09175b53d11b4319875153a9cdb44331633b30944cfe48078d54a83626,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_R
UNNING,CreatedAt:1733430033026285269,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lqd4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f46b3c00-0342-4d3b-9da8-6ee596f1cf6d,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15fee7cca3939ab2f32dfccfbc824c4223242541c860fcdb515e1397b8f81676,PodSandboxId:a031650e9ba6969e3d5db4c602e060f790342360f03937d14bdff1091abe2cdf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUN
NING,CreatedAt:1733430029525554547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f2b9e6f-6263-4a11-b2bf-725c25ab3f00,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ba7113ce2d756917b20d04479f17e3cf0c2d17dee1df17134e3031aad25734,PodSandboxId:560cea4ce979fa604131d7990ed38119aebd1c4a69f29c3c52ac983c97169724,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:173343002
7481797422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gdmlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f95488-64f0-48f3-ab99-31fd21a11d75,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828f72fb056dc8937d52e12190d1420a8425139744c68cb3abcf59ea569478f1,PodSandboxId:52d1f5e64034bf24e64b84119e0f74d48f2ea4c86f6ea6603d94051f21372eab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47
de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430024810313431,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xsvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3eb3bb7-a01c-4223-8a16-2a0ebe48726e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f971c7ed91d5aae89370cbedd072c2cff4765102eba00408557cb2da44fb8f,PodSandboxId:07109a4c0f1406e58819cf3fc9f22c33929c311caee75abea0b7088996f9b8d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430012858437647,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc604cc9b9baafaf80f6f5ed62cf5e32,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb3908ffdd5169d6aae507f5dc32a282ad251245ec7f6a3d751677c994276a01,PodSandboxId:b50b25de86f7b83c66fcf8d1669361fcdbc4493fc2eb9a979d52c2c7756ece02,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430012876781980,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2944741b4be85bb5a81c2bb9eaf1dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b4009ae66cf12d1b3bfd59c1995d7b0113021ea40e054a5da3dfc44cf2e5e7c,PodSandboxId:db1f5053a207c54d9add1891321a10e25b1be0d369ed6c9e1121327dd78f55bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430012849459914,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7057127c531b22b7ea450e76f9d507df,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a19ace4d51866c451612ddffc3a6b8ebc2545d5f95a99d149c6668e91e81dcc,PodSandboxId:3e634d2057ad72ab00442bb8067deede175795590c61e2e29660a7de660d00a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430012808003204,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdf56ee58a34f9031aece6babca8cf3c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3b4c55c4-abba-4cb6-a3de-047d24f26ae5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.112489154Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=bda37654-e8cd-49c6-a253-70971202490d name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.112813868Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ec5227248f8fedab31ae3a67c7eeb80de7584caa623a81a5c7bbcbf9dad3aca7,Metadata:&PodSandboxMetadata{Name:hello-world-app-55bf9c44b4-h8h8x,Uid:f9eaf75f-7393-4e0d-82ea-086ee7529f08,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430319100474612,Labels:map[string]string{app: hello-world-app,io.kubernetes.container.name: POD,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-h8h8x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f9eaf75f-7393-4e0d-82ea-086ee7529f08,pod-template-hash: 55bf9c44b4,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T20:25:18.789954282Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b3375052980f12fe1634dd2f0354c08eb05e9cac559d0113d5460f627c470aa5,Metadata:&PodSandboxMetadata{Name:nginx,Uid:37992d0a-3d60-4ceb-a462-c92a92f63360,Namespace:default,Attempt:0,}
,State:SANDBOX_READY,CreatedAt:1733430177255826988,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 37992d0a-3d60-4ceb-a462-c92a92f63360,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T20:22:56.945756377Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:59491a34c5cc8d4b925acf4534c7239073f623ef0952d98f92c645c5e9712ee8,Metadata:&PodSandboxMetadata{Name:busybox,Uid:cfa655ea-794b-4c47-b060-9aaf959e839a,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430109320206993,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cfa655ea-794b-4c47-b060-9aaf959e839a,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T20:21:49.006731627Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7401d4bb9b521622f4
455697db4fdfdc24612775773b96777d0b1a1db8398431,Metadata:&PodSandboxMetadata{Name:metrics-server-84c5f94fbc-9sfj2,Uid:4fb71d12-56fb-4616-bee4-29859c9f2a05,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430029301738378,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-84c5f94fbc-9sfj2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4fb71d12-56fb-4616-bee4-29859c9f2a05,k8s-app: metrics-server,pod-template-hash: 84c5f94fbc,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T20:20:28.986693521Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cf2b0a84bd0b70895b5138e9b1cfbd920a1175e663f7639c297168db716f3aab,Metadata:&PodSandboxMetadata{Name:local-path-provisioner-86d989889c-9w5dg,Uid:67bc1df5-2b14-4874-ba56-8cf3a599f3d1,Namespace:local-path-storage,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430028937904470,Labels:map[string]string{app: local-path-provisioner,io.kubernetes.container.name: POD,
io.kubernetes.pod.name: local-path-provisioner-86d989889c-9w5dg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 67bc1df5-2b14-4874-ba56-8cf3a599f3d1,pod-template-hash: 86d989889c,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T20:20:28.602853415Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a031650e9ba6969e3d5db4c602e060f790342360f03937d14bdff1091abe2cdf,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:6f2b9e6f-6263-4a11-b2bf-725c25ab3f00,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430028677957224,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f2b9e6f-6263-4a11-b2bf-725c25ab3f00,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":
{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-05T20:20:28.061320100Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:743e9f09175b53d11b4319875153a9cdb44331633b30944cfe48078d54a83626,Metadata:&PodSandboxMetadata{Name:amd-gpu-device-plugin-lqd4k,Uid:f46b3c00-0342-4d3b-9da8-6ee596f1cf6d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430024398244319,Labels:map[string]string{controller-r
evision-hash: 59cf7d9b45,io.kubernetes.container.name: POD,io.kubernetes.pod.name: amd-gpu-device-plugin-lqd4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f46b3c00-0342-4d3b-9da8-6ee596f1cf6d,k8s-app: amd-gpu-device-plugin,name: amd-gpu-device-plugin,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T20:20:24.090068560Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:52d1f5e64034bf24e64b84119e0f74d48f2ea4c86f6ea6603d94051f21372eab,Metadata:&PodSandboxMetadata{Name:kube-proxy-8xsvp,Uid:f3eb3bb7-a01c-4223-8a16-2a0ebe48726e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430024191948347,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-8xsvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3eb3bb7-a01c-4223-8a16-2a0ebe48726e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes
.io/config.seen: 2024-12-05T20:20:22.386273091Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:560cea4ce979fa604131d7990ed38119aebd1c4a69f29c3c52ac983c97169724,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-gdmlk,Uid:35f95488-64f0-48f3-ab99-31fd21a11d75,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430024000695678,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-gdmlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f95488-64f0-48f3-ab99-31fd21a11d75,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T20:20:23.086845559Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b50b25de86f7b83c66fcf8d1669361fcdbc4493fc2eb9a979d52c2c7756ece02,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-addons-523528,Uid:ac2944741b4be85bb5a81c2bb9eaf1dc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:173343
0012674705510,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2944741b4be85bb5a81c2bb9eaf1dc,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ac2944741b4be85bb5a81c2bb9eaf1dc,kubernetes.io/config.seen: 2024-12-05T20:20:11.998886670Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:07109a4c0f1406e58819cf3fc9f22c33929c311caee75abea0b7088996f9b8d2,Metadata:&PodSandboxMetadata{Name:kube-scheduler-addons-523528,Uid:fc604cc9b9baafaf80f6f5ed62cf5e32,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430012666780099,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc604cc9b9baafaf80f6f5ed62cf5e32,tier: control-plane,},Annotations:
map[string]string{kubernetes.io/config.hash: fc604cc9b9baafaf80f6f5ed62cf5e32,kubernetes.io/config.seen: 2024-12-05T20:20:11.998887838Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:3e634d2057ad72ab00442bb8067deede175795590c61e2e29660a7de660d00a2,Metadata:&PodSandboxMetadata{Name:etcd-addons-523528,Uid:bdf56ee58a34f9031aece6babca8cf3c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430012663079215,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdf56ee58a34f9031aece6babca8cf3c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.217:2379,kubernetes.io/config.hash: bdf56ee58a34f9031aece6babca8cf3c,kubernetes.io/config.seen: 2024-12-05T20:20:11.998881348Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:db1f5053a207c54d9add1891321a10e25b1be0d369ed6c9e
1121327dd78f55bf,Metadata:&PodSandboxMetadata{Name:kube-apiserver-addons-523528,Uid:7057127c531b22b7ea450e76f9d507df,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733430012653848055,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7057127c531b22b7ea450e76f9d507df,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.217:8443,kubernetes.io/config.hash: 7057127c531b22b7ea450e76f9d507df,kubernetes.io/config.seen: 2024-12-05T20:20:11.998885502Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=bda37654-e8cd-49c6-a253-70971202490d name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.114092348Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3e07346a-5763-44b3-9133-5e1b016ba898 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.114184664Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3e07346a-5763-44b3-9133-5e1b016ba898 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.117364539Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6127ed45a019cfd3b8b7a20b646ca724a09ec646bf95388f513ae013f8a834ee,PodSandboxId:ec5227248f8fedab31ae3a67c7eeb80de7584caa623a81a5c7bbcbf9dad3aca7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733430321462857149,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-h8h8x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f9eaf75f-7393-4e0d-82ea-086ee7529f08,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:675ee915a4bfc85afc963e8156d0a4d068ff6887960ecb7df9d05b344b10e750,PodSandboxId:b3375052980f12fe1634dd2f0354c08eb05e9cac559d0113d5460f627c470aa5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733430181181630226,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 37992d0a-3d60-4ceb-a462-c92a92f63360,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:567826f740b5ee62d5188f899459d74f97aec3d530b841fdbf3b6f2ccaa6324b,PodSandboxId:59491a34c5cc8d4b925acf4534c7239073f623ef0952d98f92c645c5e9712ee8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733430111877146952,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cfa655ea-794b-4c47-b
060-9aaf959e839a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9ebe589c0b2854fa84fb65906f0deb260a216e5b28a533d8f825f0554830d5,PodSandboxId:7401d4bb9b521622f4455697db4fdfdc24612775773b96777d0b1a1db8398431,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733430070130758768,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-9sfj2,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 4fb71d12-56fb-4616-bee4-29859c9f2a05,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebc8ffb53db3218d4d39183dbd41a16531e8865cd53d828bc55a9a65aa457c2,PodSandboxId:cf2b0a84bd0b70895b5138e9b1cfbd920a1175e663f7639c297168db716f3aab,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733430068410730042,Labels:map[string]strin
g{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-9w5dg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 67bc1df5-2b14-4874-ba56-8cf3a599f3d1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f79e153973b681dc2d9cb74d8bc6cb02cb39acb5004c8eb838a9744dba01edb4,PodSandboxId:743e9f09175b53d11b4319875153a9cdb44331633b30944cfe48078d54a83626,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_R
UNNING,CreatedAt:1733430033026285269,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lqd4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f46b3c00-0342-4d3b-9da8-6ee596f1cf6d,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15fee7cca3939ab2f32dfccfbc824c4223242541c860fcdb515e1397b8f81676,PodSandboxId:a031650e9ba6969e3d5db4c602e060f790342360f03937d14bdff1091abe2cdf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUN
NING,CreatedAt:1733430029525554547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f2b9e6f-6263-4a11-b2bf-725c25ab3f00,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ba7113ce2d756917b20d04479f17e3cf0c2d17dee1df17134e3031aad25734,PodSandboxId:560cea4ce979fa604131d7990ed38119aebd1c4a69f29c3c52ac983c97169724,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:173343002
7481797422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gdmlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f95488-64f0-48f3-ab99-31fd21a11d75,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828f72fb056dc8937d52e12190d1420a8425139744c68cb3abcf59ea569478f1,PodSandboxId:52d1f5e64034bf24e64b84119e0f74d48f2ea4c86f6ea6603d94051f21372eab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47
de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430024810313431,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xsvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3eb3bb7-a01c-4223-8a16-2a0ebe48726e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f971c7ed91d5aae89370cbedd072c2cff4765102eba00408557cb2da44fb8f,PodSandboxId:07109a4c0f1406e58819cf3fc9f22c33929c311caee75abea0b7088996f9b8d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430012858437647,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc604cc9b9baafaf80f6f5ed62cf5e32,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb3908ffdd5169d6aae507f5dc32a282ad251245ec7f6a3d751677c994276a01,PodSandboxId:b50b25de86f7b83c66fcf8d1669361fcdbc4493fc2eb9a979d52c2c7756ece02,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430012876781980,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2944741b4be85bb5a81c2bb9eaf1dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b4009ae66cf12d1b3bfd59c1995d7b0113021ea40e054a5da3dfc44cf2e5e7c,PodSandboxId:db1f5053a207c54d9add1891321a10e25b1be0d369ed6c9e1121327dd78f55bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430012849459914,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7057127c531b22b7ea450e76f9d507df,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a19ace4d51866c451612ddffc3a6b8ebc2545d5f95a99d149c6668e91e81dcc,PodSandboxId:3e634d2057ad72ab00442bb8067deede175795590c61e2e29660a7de660d00a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430012808003204,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdf56ee58a34f9031aece6babca8cf3c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3e07346a-5763-44b3-9133-5e1b016ba898 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.130212320Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=735439f6-0ead-44b6-bf18-54709f0d8950 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.130305214Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=735439f6-0ead-44b6-bf18-54709f0d8950 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.132621423Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3cada957-f32f-4c2b-a889-382a15852adc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.133813867Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430450133783553,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3cada957-f32f-4c2b-a889-382a15852adc name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.134433258Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9558dd75-324c-46c3-bd94-5c96ee3ab23f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.134488551Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9558dd75-324c-46c3-bd94-5c96ee3ab23f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.134754670Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6127ed45a019cfd3b8b7a20b646ca724a09ec646bf95388f513ae013f8a834ee,PodSandboxId:ec5227248f8fedab31ae3a67c7eeb80de7584caa623a81a5c7bbcbf9dad3aca7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733430321462857149,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-h8h8x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f9eaf75f-7393-4e0d-82ea-086ee7529f08,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:675ee915a4bfc85afc963e8156d0a4d068ff6887960ecb7df9d05b344b10e750,PodSandboxId:b3375052980f12fe1634dd2f0354c08eb05e9cac559d0113d5460f627c470aa5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733430181181630226,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 37992d0a-3d60-4ceb-a462-c92a92f63360,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:567826f740b5ee62d5188f899459d74f97aec3d530b841fdbf3b6f2ccaa6324b,PodSandboxId:59491a34c5cc8d4b925acf4534c7239073f623ef0952d98f92c645c5e9712ee8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733430111877146952,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cfa655ea-794b-4c47-b
060-9aaf959e839a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9ebe589c0b2854fa84fb65906f0deb260a216e5b28a533d8f825f0554830d5,PodSandboxId:7401d4bb9b521622f4455697db4fdfdc24612775773b96777d0b1a1db8398431,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733430070130758768,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-9sfj2,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 4fb71d12-56fb-4616-bee4-29859c9f2a05,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebc8ffb53db3218d4d39183dbd41a16531e8865cd53d828bc55a9a65aa457c2,PodSandboxId:cf2b0a84bd0b70895b5138e9b1cfbd920a1175e663f7639c297168db716f3aab,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733430068410730042,Labels:map[string]strin
g{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-9w5dg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 67bc1df5-2b14-4874-ba56-8cf3a599f3d1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f79e153973b681dc2d9cb74d8bc6cb02cb39acb5004c8eb838a9744dba01edb4,PodSandboxId:743e9f09175b53d11b4319875153a9cdb44331633b30944cfe48078d54a83626,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_R
UNNING,CreatedAt:1733430033026285269,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lqd4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f46b3c00-0342-4d3b-9da8-6ee596f1cf6d,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15fee7cca3939ab2f32dfccfbc824c4223242541c860fcdb515e1397b8f81676,PodSandboxId:a031650e9ba6969e3d5db4c602e060f790342360f03937d14bdff1091abe2cdf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUN
NING,CreatedAt:1733430029525554547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f2b9e6f-6263-4a11-b2bf-725c25ab3f00,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ba7113ce2d756917b20d04479f17e3cf0c2d17dee1df17134e3031aad25734,PodSandboxId:560cea4ce979fa604131d7990ed38119aebd1c4a69f29c3c52ac983c97169724,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:173343002
7481797422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gdmlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f95488-64f0-48f3-ab99-31fd21a11d75,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828f72fb056dc8937d52e12190d1420a8425139744c68cb3abcf59ea569478f1,PodSandboxId:52d1f5e64034bf24e64b84119e0f74d48f2ea4c86f6ea6603d94051f21372eab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47
de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430024810313431,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xsvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3eb3bb7-a01c-4223-8a16-2a0ebe48726e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f971c7ed91d5aae89370cbedd072c2cff4765102eba00408557cb2da44fb8f,PodSandboxId:07109a4c0f1406e58819cf3fc9f22c33929c311caee75abea0b7088996f9b8d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430012858437647,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc604cc9b9baafaf80f6f5ed62cf5e32,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb3908ffdd5169d6aae507f5dc32a282ad251245ec7f6a3d751677c994276a01,PodSandboxId:b50b25de86f7b83c66fcf8d1669361fcdbc4493fc2eb9a979d52c2c7756ece02,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430012876781980,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2944741b4be85bb5a81c2bb9eaf1dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b4009ae66cf12d1b3bfd59c1995d7b0113021ea40e054a5da3dfc44cf2e5e7c,PodSandboxId:db1f5053a207c54d9add1891321a10e25b1be0d369ed6c9e1121327dd78f55bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430012849459914,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7057127c531b22b7ea450e76f9d507df,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a19ace4d51866c451612ddffc3a6b8ebc2545d5f95a99d149c6668e91e81dcc,PodSandboxId:3e634d2057ad72ab00442bb8067deede175795590c61e2e29660a7de660d00a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430012808003204,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdf56ee58a34f9031aece6babca8cf3c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9558dd75-324c-46c3-bd94-5c96ee3ab23f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.168432413Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b9e3b4f8-6fc9-4cb8-b421-f110a108e831 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.168514765Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b9e3b4f8-6fc9-4cb8-b421-f110a108e831 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.169848387Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3385c0d8-89ca-47c5-967a-bdc02514487e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.171104512Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430450171062292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3385c0d8-89ca-47c5-967a-bdc02514487e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.171668789Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e451d04-6569-4fe4-98dd-640529b93be3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.171729554Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e451d04-6569-4fe4-98dd-640529b93be3 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:27:30 addons-523528 crio[666]: time="2024-12-05 20:27:30.171999073Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:6127ed45a019cfd3b8b7a20b646ca724a09ec646bf95388f513ae013f8a834ee,PodSandboxId:ec5227248f8fedab31ae3a67c7eeb80de7584caa623a81a5c7bbcbf9dad3aca7,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,State:CONTAINER_RUNNING,CreatedAt:1733430321462857149,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-55bf9c44b4-h8h8x,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: f9eaf75f-7393-4e0d-82ea-086ee7529f08,},Annotations:map[string]string{io.kubernetes.container.hash: 1220bd81,io.kubernetes.container.
ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:675ee915a4bfc85afc963e8156d0a4d068ff6887960ecb7df9d05b344b10e750,PodSandboxId:b3375052980f12fe1634dd2f0354c08eb05e9cac559d0113d5460f627c470aa5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66,State:CONTAINER_RUNNING,CreatedAt:1733430181181630226,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 37992d0a-3d60-4ceb-a462-c92a92f63360,},Annotations:map[string]string{io.kubernet
es.container.hash: cdfbc70a,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:567826f740b5ee62d5188f899459d74f97aec3d530b841fdbf3b6f2ccaa6324b,PodSandboxId:59491a34c5cc8d4b925acf4534c7239073f623ef0952d98f92c645c5e9712ee8,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733430111877146952,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cfa655ea-794b-4c47-b
060-9aaf959e839a,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9f9ebe589c0b2854fa84fb65906f0deb260a216e5b28a533d8f825f0554830d5,PodSandboxId:7401d4bb9b521622f4455697db4fdfdc24612775773b96777d0b1a1db8398431,Metadata:&ContainerMetadata{Name:metrics-server,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:48d9cfaaf3904a3821b1e71e50d7cbcf52fb19d5286c59e0f86b1389d189b19c,State:CONTAINER_RUNNING,CreatedAt:1733430070130758768,Labels:map[string]string{io.kubernetes.container.name: metrics-server,io.kubernetes.pod.name: metrics-server-84c5f94fbc-9sfj2,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: 4fb71d12-56fb-4616-bee4-29859c9f2a05,},Annotations:map[string]string{io.kubernetes.container.hash: d807d4fe,io.kubernetes.container.ports: [{\"name\":\"https\",\"containerPort\":4443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ebc8ffb53db3218d4d39183dbd41a16531e8865cd53d828bc55a9a65aa457c2,PodSandboxId:cf2b0a84bd0b70895b5138e9b1cfbd920a1175e663f7639c297168db716f3aab,Metadata:&ContainerMetadata{Name:local-path-provisioner,Attempt:0,},Image:&ImageSpec{Image:docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e16d1e3a1066751ebbb1d00bd843b566c69cddc5bf5f6d00edbc3fcf26a4a6bf,State:CONTAINER_RUNNING,CreatedAt:1733430068410730042,Labels:map[string]strin
g{io.kubernetes.container.name: local-path-provisioner,io.kubernetes.pod.name: local-path-provisioner-86d989889c-9w5dg,io.kubernetes.pod.namespace: local-path-storage,io.kubernetes.pod.uid: 67bc1df5-2b14-4874-ba56-8cf3a599f3d1,},Annotations:map[string]string{io.kubernetes.container.hash: d609dd0b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f79e153973b681dc2d9cb74d8bc6cb02cb39acb5004c8eb838a9744dba01edb4,PodSandboxId:743e9f09175b53d11b4319875153a9cdb44331633b30944cfe48078d54a83626,Metadata:&ContainerMetadata{Name:amd-gpu-device-plugin,Attempt:0,},Image:&ImageSpec{Image:docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d5e667c0f2bb6efe709d5abfeb749472af5cb459a5bb05d3ead8d547968c63b8,State:CONTAINER_R
UNNING,CreatedAt:1733430033026285269,Labels:map[string]string{io.kubernetes.container.name: amd-gpu-device-plugin,io.kubernetes.pod.name: amd-gpu-device-plugin-lqd4k,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f46b3c00-0342-4d3b-9da8-6ee596f1cf6d,},Annotations:map[string]string{io.kubernetes.container.hash: 1903e071,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:15fee7cca3939ab2f32dfccfbc824c4223242541c860fcdb515e1397b8f81676,PodSandboxId:a031650e9ba6969e3d5db4c602e060f790342360f03937d14bdff1091abe2cdf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUN
NING,CreatedAt:1733430029525554547,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6f2b9e6f-6263-4a11-b2bf-725c25ab3f00,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50ba7113ce2d756917b20d04479f17e3cf0c2d17dee1df17134e3031aad25734,PodSandboxId:560cea4ce979fa604131d7990ed38119aebd1c4a69f29c3c52ac983c97169724,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:173343002
7481797422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-gdmlk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35f95488-64f0-48f3-ab99-31fd21a11d75,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:828f72fb056dc8937d52e12190d1420a8425139744c68cb3abcf59ea569478f1,PodSandboxId:52d1f5e64034bf24e64b84119e0f74d48f2ea4c86f6ea6603d94051f21372eab,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47
de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430024810313431,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8xsvp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f3eb3bb7-a01c-4223-8a16-2a0ebe48726e,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:11f971c7ed91d5aae89370cbedd072c2cff4765102eba00408557cb2da44fb8f,PodSandboxId:07109a4c0f1406e58819cf3fc9f22c33929c311caee75abea0b7088996f9b8d2,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[st
ring]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430012858437647,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fc604cc9b9baafaf80f6f5ed62cf5e32,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:eb3908ffdd5169d6aae507f5dc32a282ad251245ec7f6a3d751677c994276a01,PodSandboxId:b50b25de86f7b83c66fcf8d1669361fcdbc4493fc2eb9a979d52c2c7756ece02,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]str
ing{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430012876781980,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac2944741b4be85bb5a81c2bb9eaf1dc,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b4009ae66cf12d1b3bfd59c1995d7b0113021ea40e054a5da3dfc44cf2e5e7c,PodSandboxId:db1f5053a207c54d9add1891321a10e25b1be0d369ed6c9e1121327dd78f55bf,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]st
ring{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430012849459914,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7057127c531b22b7ea450e76f9d507df,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5a19ace4d51866c451612ddffc3a6b8ebc2545d5f95a99d149c6668e91e81dcc,PodSandboxId:3e634d2057ad72ab00442bb8067deede175795590c61e2e29660a7de660d00a2,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,
RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430012808003204,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-523528,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdf56ee58a34f9031aece6babca8cf3c,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e451d04-6569-4fe4-98dd-640529b93be3 name=/runtime.v1.RuntimeService/ListContainers
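The CRI-O debug entries above are the runtime's own journal output. A rough way to reproduce a similar capture, assuming CRI-O runs as the crio systemd unit inside the minikube VM (as it does for this profile), is:

    minikube -p addons-523528 ssh -- sudo journalctl -u crio --since '10 minutes ago' --no-pager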
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6127ed45a019c       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   ec5227248f8fe       hello-world-app-55bf9c44b4-h8h8x
	675ee915a4bfc       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                         4 minutes ago       Running             nginx                     0                   b3375052980f1       nginx
	567826f740b5e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago       Running             busybox                   0                   59491a34c5cc8       busybox
	9f9ebe589c0b2       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   6 minutes ago       Running             metrics-server            0                   7401d4bb9b521       metrics-server-84c5f94fbc-9sfj2
	4ebc8ffb53db3       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef        6 minutes ago       Running             local-path-provisioner    0                   cf2b0a84bd0b7       local-path-provisioner-86d989889c-9w5dg
	f79e153973b68       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                6 minutes ago       Running             amd-gpu-device-plugin     0                   743e9f09175b5       amd-gpu-device-plugin-lqd4k
	15fee7cca3939       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        7 minutes ago       Running             storage-provisioner       0                   a031650e9ba69       storage-provisioner
	50ba7113ce2d7       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        7 minutes ago       Running             coredns                   0                   560cea4ce979f       coredns-7c65d6cfc9-gdmlk
	828f72fb056dc       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                                        7 minutes ago       Running             kube-proxy                0                   52d1f5e64034b       kube-proxy-8xsvp
	eb3908ffdd516       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                                        7 minutes ago       Running             kube-controller-manager   0                   b50b25de86f7b       kube-controller-manager-addons-523528
	11f971c7ed91d       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                                        7 minutes ago       Running             kube-scheduler            0                   07109a4c0f140       kube-scheduler-addons-523528
	4b4009ae66cf1       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                                        7 minutes ago       Running             kube-apiserver            0                   db1f5053a207c       kube-apiserver-addons-523528
	5a19ace4d5186       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        7 minutes ago       Running             etcd                      0                   3e634d2057ad7       etcd-addons-523528
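The table above is the node-level container view reported over the CRI. A similar listing can be produced with crictl, which ships in the minikube ISO, for example:

    minikube -p addons-523528 ssh -- sudo crictl ps -a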
	
	
	==> coredns [50ba7113ce2d756917b20d04479f17e3cf0c2d17dee1df17134e3031aad25734] <==
	[INFO] 10.244.0.22:60692 - 61709 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000161081s
	[INFO] 10.244.0.22:39699 - 56077 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.00014389s
	[INFO] 10.244.0.22:39699 - 31355 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00006702s
	[INFO] 10.244.0.22:60692 - 15371 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000166602s
	[INFO] 10.244.0.22:39699 - 57157 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000107828s
	[INFO] 10.244.0.22:60692 - 14007 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000183112s
	[INFO] 10.244.0.22:39699 - 20117 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000086902s
	[INFO] 10.244.0.22:60692 - 43833 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000087434s
	[INFO] 10.244.0.22:60692 - 33332 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000099101s
	[INFO] 10.244.0.22:39699 - 59901 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000118531s
	[INFO] 10.244.0.22:39699 - 63517 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00009692s
	[INFO] 10.244.0.22:53438 - 12499 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000110281s
	[INFO] 10.244.0.22:53438 - 53958 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000049423s
	[INFO] 10.244.0.22:53438 - 29094 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000032202s
	[INFO] 10.244.0.22:53438 - 31962 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000036271s
	[INFO] 10.244.0.22:53438 - 54850 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000028226s
	[INFO] 10.244.0.22:53438 - 10862 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000026916s
	[INFO] 10.244.0.22:53438 - 60547 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.00003461s
	[INFO] 10.244.0.22:38939 - 29372 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000064828s
	[INFO] 10.244.0.22:38939 - 36627 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000066443s
	[INFO] 10.244.0.22:38939 - 55624 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00005859s
	[INFO] 10.244.0.22:38939 - 17017 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000102159s
	[INFO] 10.244.0.22:38939 - 45066 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000037803s
	[INFO] 10.244.0.22:38939 - 32537 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000047071s
	[INFO] 10.244.0.22:38939 - 19727 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000037163s
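The repeated NXDOMAIN answers are the normal ndots search-path expansion of hello-world-app.default.svc.cluster.local; only the final fully-qualified query resolves with NOERROR, so nothing here points at a DNS failure. A sketch of pulling the same stream, assuming the standard k8s-app=kube-dns label on the CoreDNS pods:

    kubectl --context addons-523528 -n kube-system logs -l k8s-app=kube-dns --tail=100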
	
	
	==> describe nodes <==
	Name:               addons-523528
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-523528
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=addons-523528
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T20_20_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-523528
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:20:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-523528
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:27:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:25:54 +0000   Thu, 05 Dec 2024 20:20:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:25:54 +0000   Thu, 05 Dec 2024 20:20:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:25:54 +0000   Thu, 05 Dec 2024 20:20:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:25:54 +0000   Thu, 05 Dec 2024 20:20:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    addons-523528
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912780Ki
	  pods:               110
	System Info:
	  Machine ID:                 0dab8c2ee7ea4d5285a568609f97c654
	  System UUID:                0dab8c2e-e7ea-4d52-85a5-68609f97c654
	  Boot ID:                    149d5c20-f5be-44d7-ae12-e0ccca1b452d
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m42s
	  default                     hello-world-app-55bf9c44b4-h8h8x           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m12s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m34s
	  kube-system                 amd-gpu-device-plugin-lqd4k                0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	  kube-system                 coredns-7c65d6cfc9-gdmlk                   100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     7m8s
	  kube-system                 etcd-addons-523528                         100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         7m13s
	  kube-system                 kube-apiserver-addons-523528               250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 kube-controller-manager-addons-523528      200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 kube-proxy-8xsvp                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	  kube-system                 kube-scheduler-addons-523528               100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m14s
	  kube-system                 metrics-server-84c5f94fbc-9sfj2            100m (5%)     0 (0%)      200Mi (5%)       0 (0%)         7m2s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m3s
	  local-path-storage          local-path-provisioner-86d989889c-9w5dg    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (9%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 7m3s   kube-proxy       
	  Normal  Starting                 7m13s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m13s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m13s  kubelet          Node addons-523528 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m13s  kubelet          Node addons-523528 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m13s  kubelet          Node addons-523528 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m12s  kubelet          Node addons-523528 status is now: NodeReady
	  Normal  RegisteredNode           7m9s   node-controller  Node addons-523528 event: Registered Node addons-523528 in Controller
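This node summary is in kubectl describe format and can be regenerated against the same profile with:

    kubectl --context addons-523528 describe node addons-523528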
	
	
	==> dmesg <==
	[  +0.080552] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.286069] systemd-fstab-generator[1328]: Ignoring "noauto" option for root device
	[  +0.145916] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.003237] kauditd_printk_skb: 114 callbacks suppressed
	[  +5.010823] kauditd_printk_skb: 125 callbacks suppressed
	[  +7.768168] kauditd_printk_skb: 93 callbacks suppressed
	[Dec 5 20:21] kauditd_printk_skb: 4 callbacks suppressed
	[  +5.119294] kauditd_printk_skb: 32 callbacks suppressed
	[  +9.756877] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.340119] kauditd_printk_skb: 37 callbacks suppressed
	[  +5.418979] kauditd_printk_skb: 34 callbacks suppressed
	[  +7.547031] kauditd_printk_skb: 24 callbacks suppressed
	[  +6.193425] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.654015] kauditd_printk_skb: 9 callbacks suppressed
	[Dec 5 20:22] kauditd_printk_skb: 2 callbacks suppressed
	[ +16.666791] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.109732] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.088388] kauditd_printk_skb: 20 callbacks suppressed
	[  +5.290445] kauditd_printk_skb: 42 callbacks suppressed
	[  +5.244076] kauditd_printk_skb: 29 callbacks suppressed
	[Dec 5 20:23] kauditd_printk_skb: 47 callbacks suppressed
	[  +8.736961] kauditd_printk_skb: 6 callbacks suppressed
	[ +19.460946] kauditd_printk_skb: 15 callbacks suppressed
	[Dec 5 20:25] kauditd_printk_skb: 49 callbacks suppressed
	[  +5.184878] kauditd_printk_skb: 19 callbacks suppressed
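The kernel log excerpt is dominated by kauditd throttling notices ("callbacks suppressed"), which are expected noise on this VM rather than a failure signal. It can be re-read directly from the node with:

    minikube -p addons-523528 ssh -- dmesg | tail -n 50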
	
	
	==> etcd [5a19ace4d51866c451612ddffc3a6b8ebc2545d5f95a99d149c6668e91e81dcc] <==
	{"level":"info","ts":"2024-12-05T20:21:39.895706Z","caller":"traceutil/trace.go:171","msg":"trace[393603296] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1086; }","duration":"147.069582ms","start":"2024-12-05T20:21:39.748629Z","end":"2024-12-05T20:21:39.895698Z","steps":["trace[393603296] 'agreement among raft nodes before linearized reading'  (duration: 147.037287ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:22:07.673418Z","caller":"traceutil/trace.go:171","msg":"trace[1542733329] linearizableReadLoop","detail":"{readStateIndex:1283; appliedIndex:1282; }","duration":"241.380976ms","start":"2024-12-05T20:22:07.432024Z","end":"2024-12-05T20:22:07.673405Z","steps":["trace[1542733329] 'read index received'  (duration: 241.255748ms)","trace[1542733329] 'applied index is now lower than readState.Index'  (duration: 124.778µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-05T20:22:07.673688Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"196.071748ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/validatingwebhookconfigurations/\" range_end:\"/registry/validatingwebhookconfigurations0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"warn","ts":"2024-12-05T20:22:07.673728Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"241.709041ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/gcp-auth\" ","response":"range_response_count:1 size:2279"}
	{"level":"info","ts":"2024-12-05T20:22:07.673775Z","caller":"traceutil/trace.go:171","msg":"trace[1624501571] range","detail":"{range_begin:/registry/namespaces/gcp-auth; range_end:; response_count:1; response_revision:1241; }","duration":"241.761469ms","start":"2024-12-05T20:22:07.432005Z","end":"2024-12-05T20:22:07.673767Z","steps":["trace[1624501571] 'agreement among raft nodes before linearized reading'  (duration: 241.654879ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:22:07.673750Z","caller":"traceutil/trace.go:171","msg":"trace[882510960] range","detail":"{range_begin:/registry/validatingwebhookconfigurations/; range_end:/registry/validatingwebhookconfigurations0; response_count:0; response_revision:1241; }","duration":"196.151172ms","start":"2024-12-05T20:22:07.477590Z","end":"2024-12-05T20:22:07.673741Z","steps":["trace[882510960] 'agreement among raft nodes before linearized reading'  (duration: 196.046676ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:22:07.673915Z","caller":"traceutil/trace.go:171","msg":"trace[1487310495] transaction","detail":"{read_only:false; response_revision:1241; number_of_response:1; }","duration":"265.015945ms","start":"2024-12-05T20:22:07.408893Z","end":"2024-12-05T20:22:07.673908Z","steps":["trace[1487310495] 'process raft request'  (duration: 264.428323ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:22:07.674054Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"178.579171ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:22:07.674092Z","caller":"traceutil/trace.go:171","msg":"trace[1748478931] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1241; }","duration":"178.619186ms","start":"2024-12-05T20:22:07.495466Z","end":"2024-12-05T20:22:07.674086Z","steps":["trace[1748478931] 'agreement among raft nodes before linearized reading'  (duration: 178.569019ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:23:06.400849Z","caller":"traceutil/trace.go:171","msg":"trace[110845164] transaction","detail":"{read_only:false; response_revision:1576; number_of_response:1; }","duration":"135.278484ms","start":"2024-12-05T20:23:06.265553Z","end":"2024-12-05T20:23:06.400832Z","steps":["trace[110845164] 'process raft request'  (duration: 135.185293ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-05T20:23:09.190488Z","caller":"traceutil/trace.go:171","msg":"trace[593260288] linearizableReadLoop","detail":"{readStateIndex:1640; appliedIndex:1639; }","duration":"321.809997ms","start":"2024-12-05T20:23:08.868666Z","end":"2024-12-05T20:23:09.190476Z","steps":["trace[593260288] 'read index received'  (duration: 321.68409ms)","trace[593260288] 'applied index is now lower than readState.Index'  (duration: 125.509µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-05T20:23:09.190846Z","caller":"traceutil/trace.go:171","msg":"trace[1514010002] transaction","detail":"{read_only:false; response_revision:1582; number_of_response:1; }","duration":"437.101484ms","start":"2024-12-05T20:23:08.753731Z","end":"2024-12-05T20:23:09.190833Z","steps":["trace[1514010002] 'process raft request'  (duration: 436.659429ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:23:09.190978Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T20:23:08.753715Z","time spent":"437.197697ms","remote":"127.0.0.1:34012","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1577 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-12-05T20:23:09.191139Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"322.468837ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:23:09.191233Z","caller":"traceutil/trace.go:171","msg":"trace[257734438] range","detail":"{range_begin:/registry/podtemplates/; range_end:/registry/podtemplates0; response_count:0; response_revision:1582; }","duration":"322.560998ms","start":"2024-12-05T20:23:08.868661Z","end":"2024-12-05T20:23:09.191222Z","steps":["trace[257734438] 'agreement among raft nodes before linearized reading'  (duration: 322.451886ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:23:09.191298Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T20:23:08.868629Z","time spent":"322.638454ms","remote":"127.0.0.1:33982","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":0,"response size":29,"request content":"key:\"/registry/podtemplates/\" range_end:\"/registry/podtemplates0\" count_only:true "}
	{"level":"warn","ts":"2024-12-05T20:23:09.191532Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"300.045882ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-12-05T20:23:09.191923Z","caller":"traceutil/trace.go:171","msg":"trace[607424640] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1582; }","duration":"300.437316ms","start":"2024-12-05T20:23:08.891478Z","end":"2024-12-05T20:23:09.191916Z","steps":["trace[607424640] 'agreement among raft nodes before linearized reading'  (duration: 299.992802ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:23:09.191975Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T20:23:08.891447Z","time spent":"300.520087ms","remote":"127.0.0.1:34126","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":1,"response size":522,"request content":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" "}
	{"level":"info","ts":"2024-12-05T20:23:35.604283Z","caller":"traceutil/trace.go:171","msg":"trace[1626204079] linearizableReadLoop","detail":"{readStateIndex:1845; appliedIndex:1844; }","duration":"274.451344ms","start":"2024-12-05T20:23:35.329817Z","end":"2024-12-05T20:23:35.604268Z","steps":["trace[1626204079] 'read index received'  (duration: 274.284281ms)","trace[1626204079] 'applied index is now lower than readState.Index'  (duration: 166.598µs)"],"step_count":2}
	{"level":"info","ts":"2024-12-05T20:23:35.604477Z","caller":"traceutil/trace.go:171","msg":"trace[198632349] transaction","detail":"{read_only:false; response_revision:1777; number_of_response:1; }","duration":"287.487357ms","start":"2024-12-05T20:23:35.316981Z","end":"2024-12-05T20:23:35.604469Z","steps":["trace[198632349] 'process raft request'  (duration: 287.134736ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:23:35.604511Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"109.58891ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:23:35.605289Z","caller":"traceutil/trace.go:171","msg":"trace[1270703048] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1777; }","duration":"110.358928ms","start":"2024-12-05T20:23:35.494899Z","end":"2024-12-05T20:23:35.605258Z","steps":["trace[1270703048] 'agreement among raft nodes before linearized reading'  (duration: 109.575966ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T20:23:35.604555Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"274.736987ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/roles/kube-system/external-provisioner-cfg\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T20:23:35.605675Z","caller":"traceutil/trace.go:171","msg":"trace[1953490924] range","detail":"{range_begin:/registry/roles/kube-system/external-provisioner-cfg; range_end:; response_count:0; response_revision:1777; }","duration":"275.8544ms","start":"2024-12-05T20:23:35.329812Z","end":"2024-12-05T20:23:35.605666Z","steps":["trace[1953490924] 'agreement among raft nodes before linearized reading'  (duration: 274.72327ms)"],"step_count":1}
	
	
	==> kernel <==
	 20:27:30 up 7 min,  0 users,  load average: 0.32, 0.69, 0.46
	Linux addons-523528 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [4b4009ae66cf12d1b3bfd59c1995d7b0113021ea40e054a5da3dfc44cf2e5e7c] <==
	 > logger="UnhandledError"
	E1205 20:22:19.345827       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.114.167:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.114.167:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.114.167:443: connect: connection refused" logger="UnhandledError"
	E1205 20:22:19.350957       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.114.167:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.114.167:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.114.167:443: connect: connection refused" logger="UnhandledError"
	I1205 20:22:19.413227       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1205 20:22:30.247025       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.97.110.72"}
	I1205 20:22:56.788276       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1205 20:22:56.984468       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.3.228"}
	I1205 20:23:00.470818       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1205 20:23:01.505369       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1205 20:23:16.495663       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1205 20:23:31.928712       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 20:23:31.928772       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 20:23:31.969887       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 20:23:31.969947       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 20:23:31.984730       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 20:23:31.986954       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1205 20:23:32.025956       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1205 20:23:32.026023       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E1205 20:23:32.839522       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	W1205 20:23:32.969785       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	E1205 20:23:32.983794       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	W1205 20:23:33.027573       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1205 20:23:33.051517       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E1205 20:23:33.058968       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"csi-hostpathplugin-sa\" not found]"
	I1205 20:25:18.988982       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.107.0.75"}
	
	
	==> kube-controller-manager [eb3908ffdd5169d6aae507f5dc32a282ad251245ec7f6a3d751677c994276a01] <==
	E1205 20:25:27.406767       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:25:30.624136       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:25:30.624404       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1205 20:25:33.342127       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W1205 20:25:43.556973       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:25:43.557101       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1205 20:25:54.088574       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-523528"
	W1205 20:25:57.510049       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:25:57.510107       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:26:06.264545       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:26:06.264585       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:26:27.440671       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:26:27.440729       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:26:33.770527       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:26:33.770679       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:26:56.970623       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:26:56.970777       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:27:03.794705       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:27:03.794865       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:27:05.534813       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:27:05.535009       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:27:13.544448       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:27:13.544564       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1205 20:27:29.055361       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1205 20:27:29.055485       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [828f72fb056dc8937d52e12190d1420a8425139744c68cb3abcf59ea569478f1] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 20:20:27.091850       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 20:20:27.236026       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.217"]
	E1205 20:20:27.236136       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 20:20:27.339910       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 20:20:27.339957       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:20:27.339984       1 server_linux.go:169] "Using iptables Proxier"
	I1205 20:20:27.343597       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 20:20:27.343884       1 server.go:483] "Version info" version="v1.31.2"
	I1205 20:20:27.343917       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:20:27.345312       1 config.go:199] "Starting service config controller"
	I1205 20:20:27.345350       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 20:20:27.345473       1 config.go:105] "Starting endpoint slice config controller"
	I1205 20:20:27.345501       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 20:20:27.348927       1 config.go:328] "Starting node config controller"
	I1205 20:20:27.348956       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 20:20:27.453807       1 shared_informer.go:320] Caches are synced for node config
	I1205 20:20:27.453880       1 shared_informer.go:320] Caches are synced for service config
	I1205 20:20:27.453903       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [11f971c7ed91d5aae89370cbedd072c2cff4765102eba00408557cb2da44fb8f] <==
	W1205 20:20:15.073233       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:20:15.073266       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:20:15.920564       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 20:20:15.920654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 20:20:16.011926       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 20:20:16.011957       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:20:16.024618       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 20:20:16.024676       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:20:16.033879       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 20:20:16.033932       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:20:16.098249       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 20:20:16.098373       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:20:16.158231       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 20:20:16.158313       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:20:16.279480       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 20:20:16.279535       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:20:16.333568       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 20:20:16.333630       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:20:16.431626       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 20:20:16.431681       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:20:16.446856       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 20:20:16.447312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 20:20:16.509932       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 20:20:16.510031       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1205 20:20:18.662076       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 20:26:17 addons-523528 kubelet[1204]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 20:26:18 addons-523528 kubelet[1204]: E1205 20:26:18.297952    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430378297388593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:26:18 addons-523528 kubelet[1204]: E1205 20:26:18.298073    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430378297388593,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:26:18 addons-523528 kubelet[1204]: I1205 20:26:18.326645    1204 scope.go:117] "RemoveContainer" containerID="1d0dc2039b30511564c4cf80c93ec9f7daaf9b0b8dc147b966fdbb9e4ca6521c"
	Dec 05 20:26:18 addons-523528 kubelet[1204]: I1205 20:26:18.344915    1204 scope.go:117] "RemoveContainer" containerID="465499bcf5573f091de9a68b2ab1aa1ba122170d9c84f3db29542fdc2f6276f9"
	Dec 05 20:26:28 addons-523528 kubelet[1204]: E1205 20:26:28.301028    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430388300588150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:26:28 addons-523528 kubelet[1204]: E1205 20:26:28.301492    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430388300588150,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:26:38 addons-523528 kubelet[1204]: E1205 20:26:38.304630    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430398304129626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:26:38 addons-523528 kubelet[1204]: E1205 20:26:38.304721    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430398304129626,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:26:48 addons-523528 kubelet[1204]: E1205 20:26:48.307668    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430408307120480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:26:48 addons-523528 kubelet[1204]: E1205 20:26:48.307714    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430408307120480,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:26:58 addons-523528 kubelet[1204]: E1205 20:26:58.310717    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430418310344155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:26:58 addons-523528 kubelet[1204]: E1205 20:26:58.310757    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430418310344155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:27:08 addons-523528 kubelet[1204]: E1205 20:27:08.315295    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430428314443670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:27:08 addons-523528 kubelet[1204]: E1205 20:27:08.315338    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430428314443670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:27:11 addons-523528 kubelet[1204]: I1205 20:27:11.834111    1204 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 05 20:27:17 addons-523528 kubelet[1204]: E1205 20:27:17.853782    1204 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 20:27:17 addons-523528 kubelet[1204]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 20:27:17 addons-523528 kubelet[1204]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 20:27:17 addons-523528 kubelet[1204]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 20:27:17 addons-523528 kubelet[1204]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 20:27:18 addons-523528 kubelet[1204]: E1205 20:27:18.318352    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430438317885996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:27:18 addons-523528 kubelet[1204]: E1205 20:27:18.318469    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430438317885996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:27:28 addons-523528 kubelet[1204]: E1205 20:27:28.321455    1204 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430448320865180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:27:28 addons-523528 kubelet[1204]: E1205 20:27:28.321820    1204 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733430448320865180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:604514,},InodesUsed:&UInt64Value{Value:206,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [15fee7cca3939ab2f32dfccfbc824c4223242541c860fcdb515e1397b8f81676] <==
	I1205 20:20:30.791655       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 20:20:30.810493       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 20:20:30.810550       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 20:20:30.823276       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 20:20:30.823490       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-523528_bddcb9ae-604f-4650-940f-4ccd1fc44160!
	I1205 20:20:30.827816       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d97f055a-5510-42e7-b263-b69c7caf62f3", APIVersion:"v1", ResourceVersion:"646", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-523528_bddcb9ae-604f-4650-940f-4ccd1fc44160 became leader
	I1205 20:20:30.926259       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-523528_bddcb9ae-604f-4650-940f-4ccd1fc44160!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-523528 -n addons-523528
helpers_test.go:261: (dbg) Run:  kubectl --context addons-523528 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-523528 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (302.46s)

                                                
                                    
TestAddons/StoppedEnableDisable (154.43s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-523528
addons_test.go:170: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-523528: exit status 82 (2m0.477372841s)

                                                
                                                
-- stdout --
	* Stopping node "addons-523528"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:172: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-523528" : exit status 82
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-523528
addons_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-523528: exit status 11 (21.661827686s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.217:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:176: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-523528" : exit status 11
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-523528
addons_test.go:178: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-523528: exit status 11 (6.145118095s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.217:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:180: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-523528" : exit status 11
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-523528
addons_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-523528: exit status 11 (6.144241371s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.217:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:185: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-523528" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.43s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (141.54s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 node stop m02 -v=7 --alsologtostderr
E1205 20:38:57.299057  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:39:38.260765  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-689539 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.510284296s)

                                                
                                                
-- stdout --
	* Stopping node "ha-689539-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:38:46.643007  314892 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:38:46.643141  314892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:38:46.643150  314892 out.go:358] Setting ErrFile to fd 2...
	I1205 20:38:46.643155  314892 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:38:46.643347  314892 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 20:38:46.643676  314892 mustload.go:65] Loading cluster: ha-689539
	I1205 20:38:46.644098  314892 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:38:46.644117  314892 stop.go:39] StopHost: ha-689539-m02
	I1205 20:38:46.644477  314892 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:38:46.644539  314892 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:38:46.662045  314892 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43171
	I1205 20:38:46.662664  314892 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:38:46.663295  314892 main.go:141] libmachine: Using API Version  1
	I1205 20:38:46.663318  314892 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:38:46.663763  314892 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:38:46.666163  314892 out.go:177] * Stopping node "ha-689539-m02"  ...
	I1205 20:38:46.667564  314892 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1205 20:38:46.667616  314892 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:38:46.667892  314892 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1205 20:38:46.667931  314892 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:38:46.671321  314892 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:38:46.671814  314892 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:38:46.671854  314892 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:38:46.672063  314892 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:38:46.672291  314892 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:38:46.672623  314892 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:38:46.672795  314892 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa Username:docker}
	I1205 20:38:46.766603  314892 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1205 20:38:46.821551  314892 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1205 20:38:46.877002  314892 main.go:141] libmachine: Stopping "ha-689539-m02"...
	I1205 20:38:46.877060  314892 main.go:141] libmachine: (ha-689539-m02) Calling .GetState
	I1205 20:38:46.878882  314892 main.go:141] libmachine: (ha-689539-m02) Calling .Stop
	I1205 20:38:46.883384  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 0/120
	I1205 20:38:47.886139  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 1/120
	I1205 20:38:48.888566  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 2/120
	I1205 20:38:49.889865  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 3/120
	I1205 20:38:50.891288  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 4/120
	I1205 20:38:51.893600  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 5/120
	I1205 20:38:52.895132  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 6/120
	I1205 20:38:53.896553  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 7/120
	I1205 20:38:54.898280  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 8/120
	I1205 20:38:55.900653  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 9/120
	I1205 20:38:56.902392  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 10/120
	I1205 20:38:57.903767  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 11/120
	I1205 20:38:58.905302  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 12/120
	I1205 20:38:59.906720  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 13/120
	I1205 20:39:00.908511  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 14/120
	I1205 20:39:01.910749  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 15/120
	I1205 20:39:02.912394  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 16/120
	I1205 20:39:03.914213  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 17/120
	I1205 20:39:04.916625  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 18/120
	I1205 20:39:05.918287  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 19/120
	I1205 20:39:06.920321  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 20/120
	I1205 20:39:07.921884  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 21/120
	I1205 20:39:08.923417  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 22/120
	I1205 20:39:09.925209  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 23/120
	I1205 20:39:10.926846  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 24/120
	I1205 20:39:11.929242  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 25/120
	I1205 20:39:12.930734  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 26/120
	I1205 20:39:13.932600  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 27/120
	I1205 20:39:14.934146  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 28/120
	I1205 20:39:15.935619  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 29/120
	I1205 20:39:16.938069  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 30/120
	I1205 20:39:17.939482  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 31/120
	I1205 20:39:18.941926  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 32/120
	I1205 20:39:19.943527  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 33/120
	I1205 20:39:20.945025  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 34/120
	I1205 20:39:21.947226  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 35/120
	I1205 20:39:22.948565  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 36/120
	I1205 20:39:23.950047  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 37/120
	I1205 20:39:24.951644  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 38/120
	I1205 20:39:25.953215  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 39/120
	I1205 20:39:26.954732  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 40/120
	I1205 20:39:27.956689  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 41/120
	I1205 20:39:28.958208  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 42/120
	I1205 20:39:29.960378  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 43/120
	I1205 20:39:30.961955  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 44/120
	I1205 20:39:31.964145  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 45/120
	I1205 20:39:32.965616  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 46/120
	I1205 20:39:33.967468  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 47/120
	I1205 20:39:34.968746  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 48/120
	I1205 20:39:35.970338  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 49/120
	I1205 20:39:36.972440  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 50/120
	I1205 20:39:37.973953  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 51/120
	I1205 20:39:38.975408  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 52/120
	I1205 20:39:39.977086  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 53/120
	I1205 20:39:40.979129  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 54/120
	I1205 20:39:41.981643  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 55/120
	I1205 20:39:42.983852  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 56/120
	I1205 20:39:43.985886  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 57/120
	I1205 20:39:44.988077  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 58/120
	I1205 20:39:45.989958  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 59/120
	I1205 20:39:46.992546  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 60/120
	I1205 20:39:47.994252  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 61/120
	I1205 20:39:48.996687  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 62/120
	I1205 20:39:49.998826  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 63/120
	I1205 20:39:51.000781  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 64/120
	I1205 20:39:52.002971  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 65/120
	I1205 20:39:53.004555  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 66/120
	I1205 20:39:54.006207  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 67/120
	I1205 20:39:55.007715  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 68/120
	I1205 20:39:56.009234  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 69/120
	I1205 20:39:57.011902  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 70/120
	I1205 20:39:58.013643  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 71/120
	I1205 20:39:59.015510  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 72/120
	I1205 20:40:00.016979  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 73/120
	I1205 20:40:01.018463  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 74/120
	I1205 20:40:02.020615  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 75/120
	I1205 20:40:03.022122  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 76/120
	I1205 20:40:04.024507  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 77/120
	I1205 20:40:05.026088  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 78/120
	I1205 20:40:06.028430  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 79/120
	I1205 20:40:07.030156  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 80/120
	I1205 20:40:08.032314  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 81/120
	I1205 20:40:09.033896  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 82/120
	I1205 20:40:10.035583  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 83/120
	I1205 20:40:11.037157  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 84/120
	I1205 20:40:12.039344  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 85/120
	I1205 20:40:13.040718  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 86/120
	I1205 20:40:14.042040  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 87/120
	I1205 20:40:15.043822  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 88/120
	I1205 20:40:16.045054  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 89/120
	I1205 20:40:17.047109  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 90/120
	I1205 20:40:18.049060  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 91/120
	I1205 20:40:19.050694  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 92/120
	I1205 20:40:20.052591  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 93/120
	I1205 20:40:21.054042  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 94/120
	I1205 20:40:22.056139  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 95/120
	I1205 20:40:23.057656  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 96/120
	I1205 20:40:24.059365  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 97/120
	I1205 20:40:25.061104  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 98/120
	I1205 20:40:26.062994  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 99/120
	I1205 20:40:27.065113  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 100/120
	I1205 20:40:28.066531  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 101/120
	I1205 20:40:29.068653  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 102/120
	I1205 20:40:30.069970  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 103/120
	I1205 20:40:31.071543  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 104/120
	I1205 20:40:32.073720  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 105/120
	I1205 20:40:33.075205  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 106/120
	I1205 20:40:34.076718  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 107/120
	I1205 20:40:35.078220  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 108/120
	I1205 20:40:36.079501  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 109/120
	I1205 20:40:37.080729  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 110/120
	I1205 20:40:38.082279  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 111/120
	I1205 20:40:39.084551  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 112/120
	I1205 20:40:40.086138  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 113/120
	I1205 20:40:41.088434  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 114/120
	I1205 20:40:42.090672  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 115/120
	I1205 20:40:43.092518  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 116/120
	I1205 20:40:44.094115  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 117/120
	I1205 20:40:45.096431  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 118/120
	I1205 20:40:46.098445  314892 main.go:141] libmachine: (ha-689539-m02) Waiting for machine to stop 119/120
	I1205 20:40:47.099084  314892 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1205 20:40:47.099257  314892 out.go:270] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:367: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-689539 node stop m02 -v=7 --alsologtostderr": exit status 30
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr
E1205 20:41:00.182363  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:371: (dbg) Done: out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr: (18.671665742s)
ha_test.go:377: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr": 
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr": 
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr": 
ha_test.go:386: status says not two apiservers are running: args "out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-689539 -n ha-689539
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-689539 logs -n 25: (1.405906642s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-689539 cp ha-689539-m03:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1989065978/001/cp-test_ha-689539-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m03:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539:/home/docker/cp-test_ha-689539-m03_ha-689539.txt                       |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539 sudo cat                                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m03_ha-689539.txt                                 |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m03:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m02:/home/docker/cp-test_ha-689539-m03_ha-689539-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m02 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m03_ha-689539-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m03:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04:/home/docker/cp-test_ha-689539-m03_ha-689539-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m04 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m03_ha-689539-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-689539 cp testdata/cp-test.txt                                                | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1989065978/001/cp-test_ha-689539-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539:/home/docker/cp-test_ha-689539-m04_ha-689539.txt                       |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539 sudo cat                                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m04_ha-689539.txt                                 |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m02:/home/docker/cp-test_ha-689539-m04_ha-689539-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m02 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m04_ha-689539-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03:/home/docker/cp-test_ha-689539-m04_ha-689539-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m03 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m04_ha-689539-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-689539 node stop m02 -v=7                                                     | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:34:08
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:34:08.074114  310801 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:34:08.074261  310801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:34:08.074272  310801 out.go:358] Setting ErrFile to fd 2...
	I1205 20:34:08.074277  310801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:34:08.074494  310801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 20:34:08.075118  310801 out.go:352] Setting JSON to false
	I1205 20:34:08.076226  310801 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":11796,"bootTime":1733419052,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:34:08.076305  310801 start.go:139] virtualization: kvm guest
	I1205 20:34:08.078657  310801 out.go:177] * [ha-689539] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:34:08.080623  310801 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 20:34:08.080628  310801 notify.go:220] Checking for updates...
	I1205 20:34:08.083473  310801 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:34:08.084883  310801 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:34:08.086219  310801 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:34:08.087594  310801 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:34:08.088859  310801 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:34:08.090289  310801 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:34:08.128174  310801 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 20:34:08.129457  310801 start.go:297] selected driver: kvm2
	I1205 20:34:08.129474  310801 start.go:901] validating driver "kvm2" against <nil>
	I1205 20:34:08.129492  310801 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:34:08.130313  310801 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:34:08.130391  310801 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:34:08.148061  310801 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:34:08.148119  310801 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 20:34:08.148394  310801 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:34:08.148426  310801 cni.go:84] Creating CNI manager for ""
	I1205 20:34:08.148467  310801 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1205 20:34:08.148479  310801 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 20:34:08.148546  310801 start.go:340] cluster config:
	{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:34:08.148670  310801 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:34:08.150579  310801 out.go:177] * Starting "ha-689539" primary control-plane node in "ha-689539" cluster
	I1205 20:34:08.152101  310801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:34:08.152144  310801 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 20:34:08.152158  310801 cache.go:56] Caching tarball of preloaded images
	I1205 20:34:08.152281  310801 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:34:08.152296  310801 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:34:08.152605  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:34:08.152651  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json: {Name:mk27baab499187c123d1f411d3400f014a73dd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:08.152842  310801 start.go:360] acquireMachinesLock for ha-689539: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:34:08.152881  310801 start.go:364] duration metric: took 21.06µs to acquireMachinesLock for "ha-689539"
	I1205 20:34:08.152908  310801 start.go:93] Provisioning new machine with config: &{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:34:08.152972  310801 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 20:34:08.154751  310801 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 20:34:08.154908  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:08.154972  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:08.170934  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46169
	I1205 20:34:08.171495  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:08.172063  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:08.172087  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:08.172451  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:08.172674  310801 main.go:141] libmachine: (ha-689539) Calling .GetMachineName
	I1205 20:34:08.172837  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:08.172996  310801 start.go:159] libmachine.API.Create for "ha-689539" (driver="kvm2")
	I1205 20:34:08.173045  310801 client.go:168] LocalClient.Create starting
	I1205 20:34:08.173086  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem
	I1205 20:34:08.173121  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:34:08.173139  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:34:08.173198  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem
	I1205 20:34:08.173225  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:34:08.173243  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:34:08.173268  310801 main.go:141] libmachine: Running pre-create checks...
	I1205 20:34:08.173282  310801 main.go:141] libmachine: (ha-689539) Calling .PreCreateCheck
	I1205 20:34:08.173629  310801 main.go:141] libmachine: (ha-689539) Calling .GetConfigRaw
	I1205 20:34:08.174111  310801 main.go:141] libmachine: Creating machine...
	I1205 20:34:08.174129  310801 main.go:141] libmachine: (ha-689539) Calling .Create
	I1205 20:34:08.174265  310801 main.go:141] libmachine: (ha-689539) Creating KVM machine...
	I1205 20:34:08.175744  310801 main.go:141] libmachine: (ha-689539) DBG | found existing default KVM network
	I1205 20:34:08.176445  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:08.176315  310824 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000221330}
	I1205 20:34:08.176491  310801 main.go:141] libmachine: (ha-689539) DBG | created network xml: 
	I1205 20:34:08.176507  310801 main.go:141] libmachine: (ha-689539) DBG | <network>
	I1205 20:34:08.176530  310801 main.go:141] libmachine: (ha-689539) DBG |   <name>mk-ha-689539</name>
	I1205 20:34:08.176545  310801 main.go:141] libmachine: (ha-689539) DBG |   <dns enable='no'/>
	I1205 20:34:08.176564  310801 main.go:141] libmachine: (ha-689539) DBG |   
	I1205 20:34:08.176591  310801 main.go:141] libmachine: (ha-689539) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1205 20:34:08.176606  310801 main.go:141] libmachine: (ha-689539) DBG |     <dhcp>
	I1205 20:34:08.176611  310801 main.go:141] libmachine: (ha-689539) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1205 20:34:08.176616  310801 main.go:141] libmachine: (ha-689539) DBG |     </dhcp>
	I1205 20:34:08.176621  310801 main.go:141] libmachine: (ha-689539) DBG |   </ip>
	I1205 20:34:08.176666  310801 main.go:141] libmachine: (ha-689539) DBG |   
	I1205 20:34:08.176693  310801 main.go:141] libmachine: (ha-689539) DBG | </network>
	I1205 20:34:08.176707  310801 main.go:141] libmachine: (ha-689539) DBG | 
	I1205 20:34:08.181749  310801 main.go:141] libmachine: (ha-689539) DBG | trying to create private KVM network mk-ha-689539 192.168.39.0/24...
	I1205 20:34:08.259729  310801 main.go:141] libmachine: (ha-689539) Setting up store path in /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539 ...
	I1205 20:34:08.259779  310801 main.go:141] libmachine: (ha-689539) DBG | private KVM network mk-ha-689539 192.168.39.0/24 created
	I1205 20:34:08.259792  310801 main.go:141] libmachine: (ha-689539) Building disk image from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 20:34:08.259831  310801 main.go:141] libmachine: (ha-689539) Downloading /home/jenkins/minikube-integration/20053-293485/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 20:34:08.259902  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:08.259565  310824 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:34:08.570701  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:08.570509  310824 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa...
	I1205 20:34:08.656946  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:08.656740  310824 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/ha-689539.rawdisk...
	I1205 20:34:08.656979  310801 main.go:141] libmachine: (ha-689539) DBG | Writing magic tar header
	I1205 20:34:08.656999  310801 main.go:141] libmachine: (ha-689539) DBG | Writing SSH key tar header
	I1205 20:34:08.657012  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:08.656919  310824 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539 ...
	I1205 20:34:08.657032  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539
	I1205 20:34:08.657155  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539 (perms=drwx------)
	I1205 20:34:08.657196  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines
	I1205 20:34:08.657214  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines (perms=drwxr-xr-x)
	I1205 20:34:08.657237  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube (perms=drwxr-xr-x)
	I1205 20:34:08.657251  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485 (perms=drwxrwxr-x)
	I1205 20:34:08.657266  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:34:08.657283  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485
	I1205 20:34:08.657297  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 20:34:08.657313  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins
	I1205 20:34:08.657327  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home
	I1205 20:34:08.657340  310801 main.go:141] libmachine: (ha-689539) DBG | Skipping /home - not owner
	I1205 20:34:08.657354  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 20:34:08.657370  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 20:34:08.657383  310801 main.go:141] libmachine: (ha-689539) Creating domain...
	I1205 20:34:08.658677  310801 main.go:141] libmachine: (ha-689539) define libvirt domain using xml: 
	I1205 20:34:08.658706  310801 main.go:141] libmachine: (ha-689539) <domain type='kvm'>
	I1205 20:34:08.658718  310801 main.go:141] libmachine: (ha-689539)   <name>ha-689539</name>
	I1205 20:34:08.658725  310801 main.go:141] libmachine: (ha-689539)   <memory unit='MiB'>2200</memory>
	I1205 20:34:08.658735  310801 main.go:141] libmachine: (ha-689539)   <vcpu>2</vcpu>
	I1205 20:34:08.658745  310801 main.go:141] libmachine: (ha-689539)   <features>
	I1205 20:34:08.658752  310801 main.go:141] libmachine: (ha-689539)     <acpi/>
	I1205 20:34:08.658759  310801 main.go:141] libmachine: (ha-689539)     <apic/>
	I1205 20:34:08.658767  310801 main.go:141] libmachine: (ha-689539)     <pae/>
	I1205 20:34:08.658787  310801 main.go:141] libmachine: (ha-689539)     
	I1205 20:34:08.658823  310801 main.go:141] libmachine: (ha-689539)   </features>
	I1205 20:34:08.658849  310801 main.go:141] libmachine: (ha-689539)   <cpu mode='host-passthrough'>
	I1205 20:34:08.658858  310801 main.go:141] libmachine: (ha-689539)   
	I1205 20:34:08.658863  310801 main.go:141] libmachine: (ha-689539)   </cpu>
	I1205 20:34:08.658869  310801 main.go:141] libmachine: (ha-689539)   <os>
	I1205 20:34:08.658874  310801 main.go:141] libmachine: (ha-689539)     <type>hvm</type>
	I1205 20:34:08.658880  310801 main.go:141] libmachine: (ha-689539)     <boot dev='cdrom'/>
	I1205 20:34:08.658885  310801 main.go:141] libmachine: (ha-689539)     <boot dev='hd'/>
	I1205 20:34:08.658892  310801 main.go:141] libmachine: (ha-689539)     <bootmenu enable='no'/>
	I1205 20:34:08.658896  310801 main.go:141] libmachine: (ha-689539)   </os>
	I1205 20:34:08.658902  310801 main.go:141] libmachine: (ha-689539)   <devices>
	I1205 20:34:08.658909  310801 main.go:141] libmachine: (ha-689539)     <disk type='file' device='cdrom'>
	I1205 20:34:08.658920  310801 main.go:141] libmachine: (ha-689539)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/boot2docker.iso'/>
	I1205 20:34:08.658932  310801 main.go:141] libmachine: (ha-689539)       <target dev='hdc' bus='scsi'/>
	I1205 20:34:08.658940  310801 main.go:141] libmachine: (ha-689539)       <readonly/>
	I1205 20:34:08.658954  310801 main.go:141] libmachine: (ha-689539)     </disk>
	I1205 20:34:08.658974  310801 main.go:141] libmachine: (ha-689539)     <disk type='file' device='disk'>
	I1205 20:34:08.658987  310801 main.go:141] libmachine: (ha-689539)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 20:34:08.659004  310801 main.go:141] libmachine: (ha-689539)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/ha-689539.rawdisk'/>
	I1205 20:34:08.659016  310801 main.go:141] libmachine: (ha-689539)       <target dev='hda' bus='virtio'/>
	I1205 20:34:08.659054  310801 main.go:141] libmachine: (ha-689539)     </disk>
	I1205 20:34:08.659076  310801 main.go:141] libmachine: (ha-689539)     <interface type='network'>
	I1205 20:34:08.659087  310801 main.go:141] libmachine: (ha-689539)       <source network='mk-ha-689539'/>
	I1205 20:34:08.659094  310801 main.go:141] libmachine: (ha-689539)       <model type='virtio'/>
	I1205 20:34:08.659106  310801 main.go:141] libmachine: (ha-689539)     </interface>
	I1205 20:34:08.659117  310801 main.go:141] libmachine: (ha-689539)     <interface type='network'>
	I1205 20:34:08.659126  310801 main.go:141] libmachine: (ha-689539)       <source network='default'/>
	I1205 20:34:08.659140  310801 main.go:141] libmachine: (ha-689539)       <model type='virtio'/>
	I1205 20:34:08.659151  310801 main.go:141] libmachine: (ha-689539)     </interface>
	I1205 20:34:08.659160  310801 main.go:141] libmachine: (ha-689539)     <serial type='pty'>
	I1205 20:34:08.659167  310801 main.go:141] libmachine: (ha-689539)       <target port='0'/>
	I1205 20:34:08.659176  310801 main.go:141] libmachine: (ha-689539)     </serial>
	I1205 20:34:08.659185  310801 main.go:141] libmachine: (ha-689539)     <console type='pty'>
	I1205 20:34:08.659196  310801 main.go:141] libmachine: (ha-689539)       <target type='serial' port='0'/>
	I1205 20:34:08.659214  310801 main.go:141] libmachine: (ha-689539)     </console>
	I1205 20:34:08.659224  310801 main.go:141] libmachine: (ha-689539)     <rng model='virtio'>
	I1205 20:34:08.659233  310801 main.go:141] libmachine: (ha-689539)       <backend model='random'>/dev/random</backend>
	I1205 20:34:08.659242  310801 main.go:141] libmachine: (ha-689539)     </rng>
	I1205 20:34:08.659248  310801 main.go:141] libmachine: (ha-689539)     
	I1205 20:34:08.659252  310801 main.go:141] libmachine: (ha-689539)     
	I1205 20:34:08.659260  310801 main.go:141] libmachine: (ha-689539)   </devices>
	I1205 20:34:08.659270  310801 main.go:141] libmachine: (ha-689539) </domain>
	I1205 20:34:08.659282  310801 main.go:141] libmachine: (ha-689539) 
	I1205 20:34:08.664073  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:a3:09:de in network default
	I1205 20:34:08.664657  310801 main.go:141] libmachine: (ha-689539) Ensuring networks are active...
	I1205 20:34:08.664680  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:08.665393  310801 main.go:141] libmachine: (ha-689539) Ensuring network default is active
	I1205 20:34:08.665790  310801 main.go:141] libmachine: (ha-689539) Ensuring network mk-ha-689539 is active
	I1205 20:34:08.666343  310801 main.go:141] libmachine: (ha-689539) Getting domain xml...
	I1205 20:34:08.667190  310801 main.go:141] libmachine: (ha-689539) Creating domain...
	I1205 20:34:09.889755  310801 main.go:141] libmachine: (ha-689539) Waiting to get IP...
	I1205 20:34:09.890610  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:09.890981  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:09.891034  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:09.890969  310824 retry.go:31] will retry after 284.885869ms: waiting for machine to come up
	I1205 20:34:10.177621  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:10.178156  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:10.178184  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:10.178109  310824 retry.go:31] will retry after 378.211833ms: waiting for machine to come up
	I1205 20:34:10.557655  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:10.558178  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:10.558212  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:10.558123  310824 retry.go:31] will retry after 473.788163ms: waiting for machine to come up
	I1205 20:34:11.033830  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:11.034246  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:11.034277  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:11.034195  310824 retry.go:31] will retry after 418.138315ms: waiting for machine to come up
	I1205 20:34:11.453849  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:11.454287  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:11.454318  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:11.454229  310824 retry.go:31] will retry after 720.041954ms: waiting for machine to come up
	I1205 20:34:12.176162  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:12.176610  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:12.176635  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:12.176551  310824 retry.go:31] will retry after 769.230458ms: waiting for machine to come up
	I1205 20:34:12.947323  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:12.947645  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:12.947682  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:12.947615  310824 retry.go:31] will retry after 799.111179ms: waiting for machine to come up
	I1205 20:34:13.748171  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:13.748640  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:13.748669  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:13.748592  310824 retry.go:31] will retry after 1.052951937s: waiting for machine to come up
	I1205 20:34:14.802913  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:14.803309  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:14.803340  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:14.803262  310824 retry.go:31] will retry after 1.685899285s: waiting for machine to come up
	I1205 20:34:16.491286  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:16.491828  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:16.491858  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:16.491779  310824 retry.go:31] will retry after 1.722453601s: waiting for machine to come up
	I1205 20:34:18.215846  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:18.216281  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:18.216316  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:18.216229  310824 retry.go:31] will retry after 1.847118783s: waiting for machine to come up
	I1205 20:34:20.066408  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:20.066971  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:20.067002  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:20.066922  310824 retry.go:31] will retry after 2.216585531s: waiting for machine to come up
	I1205 20:34:22.284845  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:22.285380  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:22.285409  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:22.285296  310824 retry.go:31] will retry after 4.35742756s: waiting for machine to come up
	I1205 20:34:26.646498  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:26.646898  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:26.646925  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:26.646863  310824 retry.go:31] will retry after 4.830110521s: waiting for machine to come up
	I1205 20:34:31.481950  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.482551  310801 main.go:141] libmachine: (ha-689539) Found IP for machine: 192.168.39.220
	I1205 20:34:31.482584  310801 main.go:141] libmachine: (ha-689539) Reserving static IP address...
	I1205 20:34:31.482599  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has current primary IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.483029  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find host DHCP lease matching {name: "ha-689539", mac: "52:54:00:92:19:fb", ip: "192.168.39.220"} in network mk-ha-689539
	I1205 20:34:31.565523  310801 main.go:141] libmachine: (ha-689539) Reserved static IP address: 192.168.39.220
	I1205 20:34:31.565552  310801 main.go:141] libmachine: (ha-689539) Waiting for SSH to be available...
	I1205 20:34:31.565561  310801 main.go:141] libmachine: (ha-689539) DBG | Getting to WaitForSSH function...
	I1205 20:34:31.568330  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.568827  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:31.568862  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.568958  310801 main.go:141] libmachine: (ha-689539) DBG | Using SSH client type: external
	I1205 20:34:31.568991  310801 main.go:141] libmachine: (ha-689539) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa (-rw-------)
	I1205 20:34:31.569027  310801 main.go:141] libmachine: (ha-689539) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:34:31.569037  310801 main.go:141] libmachine: (ha-689539) DBG | About to run SSH command:
	I1205 20:34:31.569050  310801 main.go:141] libmachine: (ha-689539) DBG | exit 0
	I1205 20:34:31.694133  310801 main.go:141] libmachine: (ha-689539) DBG | SSH cmd err, output: <nil>: 
	I1205 20:34:31.694455  310801 main.go:141] libmachine: (ha-689539) KVM machine creation complete!
	I1205 20:34:31.694719  310801 main.go:141] libmachine: (ha-689539) Calling .GetConfigRaw
	I1205 20:34:31.695354  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:31.695562  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:31.695749  310801 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 20:34:31.695765  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:34:31.697139  310801 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 20:34:31.697166  310801 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 20:34:31.697171  310801 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 20:34:31.697176  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:31.699900  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.700272  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:31.700328  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.700454  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:31.700642  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.700807  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.700983  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:31.701155  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:31.701416  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:31.701430  310801 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 20:34:31.797327  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:34:31.797354  310801 main.go:141] libmachine: Detecting the provisioner...
	I1205 20:34:31.797363  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:31.800489  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.800822  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:31.800853  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.801025  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:31.801240  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.801464  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.801591  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:31.801777  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:31.801991  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:31.802002  310801 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 20:34:31.902674  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 20:34:31.902768  310801 main.go:141] libmachine: found compatible host: buildroot
	I1205 20:34:31.902779  310801 main.go:141] libmachine: Provisioning with buildroot...
	I1205 20:34:31.902787  310801 main.go:141] libmachine: (ha-689539) Calling .GetMachineName
	I1205 20:34:31.903088  310801 buildroot.go:166] provisioning hostname "ha-689539"
	I1205 20:34:31.903116  310801 main.go:141] libmachine: (ha-689539) Calling .GetMachineName
	I1205 20:34:31.903428  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:31.906237  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.906571  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:31.906599  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.906752  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:31.906940  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.907099  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.907232  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:31.907446  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:31.907634  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:31.907655  310801 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-689539 && echo "ha-689539" | sudo tee /etc/hostname
	I1205 20:34:32.020236  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-689539
	
	I1205 20:34:32.020265  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.023604  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.023912  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.023942  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.024133  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.024345  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.024501  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.024686  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.024863  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:32.025085  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:32.025111  310801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-689539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-689539/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-689539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:34:32.131661  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:34:32.131696  310801 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 20:34:32.131742  310801 buildroot.go:174] setting up certificates
	I1205 20:34:32.131755  310801 provision.go:84] configureAuth start
	I1205 20:34:32.131768  310801 main.go:141] libmachine: (ha-689539) Calling .GetMachineName
	I1205 20:34:32.132088  310801 main.go:141] libmachine: (ha-689539) Calling .GetIP
	I1205 20:34:32.135389  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.135787  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.135825  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.136069  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.138588  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.138916  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.138949  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.139086  310801 provision.go:143] copyHostCerts
	I1205 20:34:32.139123  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:34:32.139178  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 20:34:32.139206  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:34:32.139295  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 20:34:32.139433  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:34:32.139460  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 20:34:32.139468  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:34:32.139515  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 20:34:32.139597  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:34:32.139626  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 20:34:32.139634  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:34:32.139671  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 20:34:32.139758  310801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.ha-689539 san=[127.0.0.1 192.168.39.220 ha-689539 localhost minikube]
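	The server certificate generated here carries the SAN set listed above (127.0.0.1, 192.168.39.220, ha-689539, localhost, minikube). A minimal way to read those SANs back, assuming only the path shown in the log line:
	    openssl x509 -noout -text \
	      -in /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem \
	      | grep -A1 'Subject Alternative Name'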
	I1205 20:34:32.367430  310801 provision.go:177] copyRemoteCerts
	I1205 20:34:32.367531  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:34:32.367565  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.370702  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.371025  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.371063  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.371206  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.371413  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.371586  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.371717  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:32.452327  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 20:34:32.452426  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 20:34:32.476869  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 20:34:32.476958  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1205 20:34:32.501389  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 20:34:32.501501  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:34:32.525226  310801 provision.go:87] duration metric: took 393.452946ms to configureAuth
	I1205 20:34:32.525267  310801 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:34:32.525488  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:34:32.525609  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.528470  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.528833  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.528864  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.529057  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.529285  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.529497  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.529678  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.529839  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:32.530046  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:32.530066  310801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:34:32.733723  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
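	The crio restart at the end of that command picks up /etc/sysconfig/crio.minikube. On the buildroot guest the crio unit is assumed to reference that file as an EnvironmentFile; the log does not show that wiring, so treat the check below as a sketch:
	    cat /etc/sysconfig/crio.minikube
	    systemctl cat crio | grep -i EnvironmentFile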
	
	I1205 20:34:32.733755  310801 main.go:141] libmachine: Checking connection to Docker...
	I1205 20:34:32.733816  310801 main.go:141] libmachine: (ha-689539) Calling .GetURL
	I1205 20:34:32.735231  310801 main.go:141] libmachine: (ha-689539) DBG | Using libvirt version 6000000
	I1205 20:34:32.737329  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.737769  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.737804  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.737993  310801 main.go:141] libmachine: Docker is up and running!
	I1205 20:34:32.738008  310801 main.go:141] libmachine: Reticulating splines...
	I1205 20:34:32.738015  310801 client.go:171] duration metric: took 24.564959064s to LocalClient.Create
	I1205 20:34:32.738046  310801 start.go:167] duration metric: took 24.565052554s to libmachine.API.Create "ha-689539"
	I1205 20:34:32.738061  310801 start.go:293] postStartSetup for "ha-689539" (driver="kvm2")
	I1205 20:34:32.738073  310801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:34:32.738096  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:32.738400  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:34:32.738433  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.740621  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.740891  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.740921  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.741034  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.741256  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.741431  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.741595  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:32.820810  310801 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:34:32.825193  310801 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:34:32.825227  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 20:34:32.825326  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 20:34:32.825428  310801 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 20:34:32.825442  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /etc/ssl/certs/3007652.pem
	I1205 20:34:32.825556  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:34:32.835549  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:34:32.859405  310801 start.go:296] duration metric: took 121.327589ms for postStartSetup
	I1205 20:34:32.859464  310801 main.go:141] libmachine: (ha-689539) Calling .GetConfigRaw
	I1205 20:34:32.860144  310801 main.go:141] libmachine: (ha-689539) Calling .GetIP
	I1205 20:34:32.862916  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.863271  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.863303  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.863582  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:34:32.863831  310801 start.go:128] duration metric: took 24.710845565s to createHost
	I1205 20:34:32.863871  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.866291  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.866627  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.866656  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.866902  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.867141  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.867419  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.867570  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.867744  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:32.867965  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:32.867993  310801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:34:32.966710  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430872.933221119
	
	I1205 20:34:32.966748  310801 fix.go:216] guest clock: 1733430872.933221119
	I1205 20:34:32.966760  310801 fix.go:229] Guest: 2024-12-05 20:34:32.933221119 +0000 UTC Remote: 2024-12-05 20:34:32.863851557 +0000 UTC m=+24.831728555 (delta=69.369562ms)
	I1205 20:34:32.966789  310801 fix.go:200] guest clock delta is within tolerance: 69.369562ms
	I1205 20:34:32.966794  310801 start.go:83] releasing machines lock for "ha-689539", held for 24.813901478s
	I1205 20:34:32.966815  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:32.967103  310801 main.go:141] libmachine: (ha-689539) Calling .GetIP
	I1205 20:34:32.970285  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.970747  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.970797  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.970954  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:32.971526  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:32.971766  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:32.971872  310801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:34:32.971926  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.972023  310801 ssh_runner.go:195] Run: cat /version.json
	I1205 20:34:32.972052  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.975300  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.975606  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.975666  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.975696  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.975901  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.976142  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.976160  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.976211  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.976432  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.976440  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.976647  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:32.976668  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.976855  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.977003  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:33.059386  310801 ssh_runner.go:195] Run: systemctl --version
	I1205 20:34:33.082247  310801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:34:33.243513  310801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:34:33.249633  310801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:34:33.249718  310801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:34:33.266578  310801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:34:33.266607  310801 start.go:495] detecting cgroup driver to use...
	I1205 20:34:33.266691  310801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:34:33.282457  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:34:33.296831  310801 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:34:33.296976  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:34:33.310872  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:34:33.324245  310801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:34:33.436767  310801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:34:33.589248  310801 docker.go:233] disabling docker service ...
	I1205 20:34:33.589369  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:34:33.604397  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:34:33.617678  310801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:34:33.755936  310801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:34:33.876879  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
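	Both docker and cri-docker are stopped, disabled and masked before cri-o takes over as the runtime. A quick confirmation of that end state, using only the unit names from the commands above:
	    systemctl is-enabled docker.service cri-docker.service    # expect masked/disabled
	    systemctl is-active docker.service cri-docker.socket      # expect inactive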
	I1205 20:34:33.890218  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:34:33.907910  310801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:34:33.907992  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.918057  310801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:34:33.918138  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.928622  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.938873  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.949059  310801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:34:33.959639  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.970025  310801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.986937  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
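	The sed edits above leave /etc/crio/crio.conf.d/02-crio.conf carrying the pause image, the cgroupfs cgroup manager, the conmon cgroup and the unprivileged-port sysctl. Spot-checking the result is a single grep:
	    grep -E 'pause_image|cgroup_manager|conmon_cgroup|default_sysctls|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf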
	I1205 20:34:33.997151  310801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:34:34.006323  310801 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:34:34.006391  310801 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:34:34.019434  310801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
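	The failed sysctl above is expected: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once br_netfilter is loaded, which is what the modprobe fixes. Verifying after the fact, assuming shell access to the guest:
	    lsmod | grep br_netfilter
	    sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward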
	I1205 20:34:34.029027  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:34:34.156535  310801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:34:34.246656  310801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:34:34.246735  310801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:34:34.251273  310801 start.go:563] Will wait 60s for crictl version
	I1205 20:34:34.251340  310801 ssh_runner.go:195] Run: which crictl
	I1205 20:34:34.254861  310801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:34:34.290093  310801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:34:34.290181  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:34:34.319140  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:34:34.349724  310801 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:34:34.351134  310801 main.go:141] libmachine: (ha-689539) Calling .GetIP
	I1205 20:34:34.354155  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:34.354477  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:34.354499  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:34.354753  310801 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:34:34.358724  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:34:34.371098  310801 kubeadm.go:883] updating cluster {Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:34:34.371240  310801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:34:34.371296  310801 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:34:34.405312  310801 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:34:34.405419  310801 ssh_runner.go:195] Run: which lz4
	I1205 20:34:34.409438  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1205 20:34:34.409558  310801 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:34:34.413636  310801 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:34:34.413680  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 20:34:35.688964  310801 crio.go:462] duration metric: took 1.279440398s to copy over tarball
	I1205 20:34:35.689045  310801 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:34:37.772729  310801 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.083628711s)
	I1205 20:34:37.772773  310801 crio.go:469] duration metric: took 2.083775707s to extract the tarball
	I1205 20:34:37.772784  310801 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:34:37.810322  310801 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:34:37.853195  310801 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:34:37.853229  310801 cache_images.go:84] Images are preloaded, skipping loading
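	With the preload tarball unpacked into /var, the same crictl query that found nothing at 20:34:34.405312 now reports the full image set, so image loading is skipped. The human-readable form of that query is:
	    sudo crictl images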
	I1205 20:34:37.853239  310801 kubeadm.go:934] updating node { 192.168.39.220 8443 v1.31.2 crio true true} ...
	I1205 20:34:37.853389  310801 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-689539 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
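	The kubelet drop-in rendered above is copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. Once it lands, the effective unit can be reviewed from the guest; a sketch only:
	    systemctl cat kubelet
	    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf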
	I1205 20:34:37.853483  310801 ssh_runner.go:195] Run: crio config
	I1205 20:34:37.904941  310801 cni.go:84] Creating CNI manager for ""
	I1205 20:34:37.904967  310801 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 20:34:37.904981  310801 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:34:37.905015  310801 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-689539 NodeName:ha-689539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:34:37.905154  310801 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-689539"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.220"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
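	This rendered config is written to /var/tmp/minikube/kubeadm.yaml.new below and copied to /var/tmp/minikube/kubeadm.yaml before init runs. Assuming the bundled kubeadm supports the subcommand (it exists in recent releases, though this log never runs it), the file can be sanity-checked in place:
	    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate \
	      --config /var/tmp/minikube/kubeadm.yaml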
	
	I1205 20:34:37.905183  310801 kube-vip.go:115] generating kube-vip config ...
	I1205 20:34:37.905229  310801 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 20:34:37.920877  310801 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 20:34:37.921012  310801 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
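	The manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml below, so kube-vip runs as a static pod and is what puts the HA VIP 192.168.39.254 on eth0. Once the kubelet has started it, both are easy to observe from the node; illustrative commands only:
	    sudo crictl ps --name kube-vip
	    ip addr show eth0 | grep 192.168.39.254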
	I1205 20:34:37.921087  310801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:34:37.930861  310801 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:34:37.930952  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1205 20:34:37.940283  310801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1205 20:34:37.956877  310801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:34:37.973504  310801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1205 20:34:37.990145  310801 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1205 20:34:38.006265  310801 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 20:34:38.010189  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:34:38.022257  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:34:38.140067  310801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:34:38.157890  310801 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539 for IP: 192.168.39.220
	I1205 20:34:38.157932  310801 certs.go:194] generating shared ca certs ...
	I1205 20:34:38.157956  310801 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.158149  310801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 20:34:38.158208  310801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 20:34:38.158222  310801 certs.go:256] generating profile certs ...
	I1205 20:34:38.158295  310801 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key
	I1205 20:34:38.158314  310801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt with IP's: []
	I1205 20:34:38.310974  310801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt ...
	I1205 20:34:38.311018  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt: {Name:mkf3aecb8b9ad227608c6977c2ad30cfc55949b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.311241  310801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key ...
	I1205 20:34:38.311266  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key: {Name:mkfab3a0d79e1baa864757b84edfb7968d976df8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.311382  310801 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.4e36e772
	I1205 20:34:38.311402  310801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.4e36e772 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220 192.168.39.254]
	I1205 20:34:38.414671  310801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.4e36e772 ...
	I1205 20:34:38.414714  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.4e36e772: {Name:mkc29737ec8270e2af482fa3e0afb3df1551e296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.414925  310801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.4e36e772 ...
	I1205 20:34:38.414944  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.4e36e772: {Name:mk5a1762b7078753229c19ae4d408dd983181bad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.415108  310801 certs.go:381] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.4e36e772 -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt
	I1205 20:34:38.415228  310801 certs.go:385] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.4e36e772 -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key
	I1205 20:34:38.415320  310801 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key
	I1205 20:34:38.415337  310801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt with IP's: []
	I1205 20:34:38.595265  310801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt ...
	I1205 20:34:38.595307  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt: {Name:mke4b60d010e9a42985a4147d8ca20fd58cfe926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.595513  310801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key ...
	I1205 20:34:38.595526  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key: {Name:mkc40847c87fbb64accdbdfed18b0a1220dd4fb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.595607  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:34:38.595627  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:34:38.595641  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:34:38.595656  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:34:38.595671  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 20:34:38.595687  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 20:34:38.595702  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 20:34:38.595721  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 20:34:38.595781  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 20:34:38.595820  310801 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 20:34:38.595832  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:34:38.595867  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 20:34:38.595927  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:34:38.595965  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 20:34:38.596013  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:34:38.596047  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:34:38.596065  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem -> /usr/share/ca-certificates/300765.pem
	I1205 20:34:38.596080  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /usr/share/ca-certificates/3007652.pem
	I1205 20:34:38.596679  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:34:38.621836  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:34:38.645971  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:34:38.669572  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:34:38.692394  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 20:34:38.714950  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:34:38.737673  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:34:38.760143  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:34:38.782837  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:34:38.804959  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 20:34:38.827699  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 20:34:38.850292  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:34:38.866443  310801 ssh_runner.go:195] Run: openssl version
	I1205 20:34:38.872267  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:34:38.883530  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:34:38.887895  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:34:38.887977  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:34:38.893617  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:34:38.906999  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 20:34:38.918595  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 20:34:38.924117  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 20:34:38.924185  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 20:34:38.932047  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 20:34:38.945495  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 20:34:38.961962  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 20:34:38.966385  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 20:34:38.966443  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 20:34:38.971854  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
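	Each CA dropped into /usr/share/ca-certificates is linked under /etc/ssl/certs by its subject hash, which is what the openssl x509 -hash calls above compute. The mapping can be re-derived by hand if a trust problem is suspected, for example for minikubeCA:
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	    ls -l /etc/ssl/certs/b5213941.0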
	I1205 20:34:38.983000  310801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:34:38.987127  310801 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 20:34:38.987198  310801 kubeadm.go:392] StartCluster: {Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:34:38.987278  310801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:34:38.987360  310801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:34:39.023266  310801 cri.go:89] found id: ""
	I1205 20:34:39.023363  310801 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:34:39.033877  310801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:34:39.044224  310801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:34:39.054571  310801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:34:39.054597  310801 kubeadm.go:157] found existing configuration files:
	
	I1205 20:34:39.054653  310801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:34:39.064431  310801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:34:39.064513  310801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:34:39.074366  310801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:34:39.083912  310801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:34:39.083984  310801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:34:39.093938  310801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:34:39.103398  310801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:34:39.103465  310801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:34:39.113094  310801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:34:39.122507  310801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:34:39.122597  310801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:34:39.132005  310801 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:34:39.228908  310801 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 20:34:39.229049  310801 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:34:39.329735  310801 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:34:39.329925  310801 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:34:39.330069  310801 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 20:34:39.340103  310801 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:34:39.373910  310801 out.go:235]   - Generating certificates and keys ...
	I1205 20:34:39.374072  310801 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:34:39.374147  310801 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:34:39.462096  310801 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 20:34:39.625431  310801 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 20:34:39.899737  310801 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 20:34:40.026923  310801 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 20:34:40.326605  310801 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 20:34:40.326736  310801 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-689539 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I1205 20:34:40.487273  310801 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 20:34:40.487463  310801 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-689539 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I1205 20:34:41.025029  310801 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 20:34:41.081102  310801 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 20:34:41.372777  310801 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 20:34:41.372851  310801 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:34:41.470469  310801 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:34:41.550016  310801 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:34:41.829563  310801 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:34:41.903888  310801 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:34:42.075688  310801 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:34:42.076191  310801 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:34:42.079642  310801 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:34:42.116791  310801 out.go:235]   - Booting up control plane ...
	I1205 20:34:42.116956  310801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:34:42.117092  310801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:34:42.117208  310801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:34:42.117347  310801 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:34:42.117444  310801 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:34:42.117492  310801 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:34:42.242074  310801 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 20:34:42.242211  310801 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 20:34:42.743099  310801 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.406858ms
	I1205 20:34:42.743201  310801 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 20:34:48.715396  310801 kubeadm.go:310] [api-check] The API server is healthy after 5.976028105s
	I1205 20:34:48.727254  310801 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:34:48.744015  310801 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:34:49.271812  310801 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:34:49.272046  310801 kubeadm.go:310] [mark-control-plane] Marking the node ha-689539 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:34:49.283178  310801 kubeadm.go:310] [bootstrap-token] Using token: ynd0vv.39hctrjjdwln7xrk
	I1205 20:34:49.284635  310801 out.go:235]   - Configuring RBAC rules ...
	I1205 20:34:49.284805  310801 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:34:49.298869  310801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:34:49.307342  310801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:34:49.311034  310801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:34:49.314220  310801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:34:49.318275  310801 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:34:49.336336  310801 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:34:49.603608  310801 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 20:34:50.123229  310801 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 20:34:50.123255  310801 kubeadm.go:310] 
	I1205 20:34:50.123360  310801 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 20:34:50.123388  310801 kubeadm.go:310] 
	I1205 20:34:50.123496  310801 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 20:34:50.123533  310801 kubeadm.go:310] 
	I1205 20:34:50.123584  310801 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 20:34:50.123672  310801 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:34:50.123755  310801 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:34:50.123771  310801 kubeadm.go:310] 
	I1205 20:34:50.123856  310801 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 20:34:50.123868  310801 kubeadm.go:310] 
	I1205 20:34:50.123942  310801 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:34:50.123957  310801 kubeadm.go:310] 
	I1205 20:34:50.124045  310801 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 20:34:50.124156  310801 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:34:50.124256  310801 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:34:50.124269  310801 kubeadm.go:310] 
	I1205 20:34:50.124397  310801 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:34:50.124510  310801 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 20:34:50.124522  310801 kubeadm.go:310] 
	I1205 20:34:50.124645  310801 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ynd0vv.39hctrjjdwln7xrk \
	I1205 20:34:50.124778  310801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 \
	I1205 20:34:50.124879  310801 kubeadm.go:310] 	--control-plane 
	I1205 20:34:50.124896  310801 kubeadm.go:310] 
	I1205 20:34:50.125023  310801 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:34:50.125040  310801 kubeadm.go:310] 
	I1205 20:34:50.125138  310801 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ynd0vv.39hctrjjdwln7xrk \
	I1205 20:34:50.125303  310801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 
	I1205 20:34:50.125442  310801 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
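Note: the two join commands printed by kubeadm above differ only in the --control-plane flag; both rely on the bootstrap token ynd0vv.39hctrjjdwln7xrk, which on a standard kubeadm install expires after 24 hours. On an ordinary cluster a fresh join command can be printed later with stock kubeadm tooling, e.g.:

	kubeadm token create --print-join-command
	# for an additional control-plane node, also pass --control-plane and a certificate key
	# obtained via: kubeadm init phase upload-certs --upload-certs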
	I1205 20:34:50.125462  310801 cni.go:84] Creating CNI manager for ""
	I1205 20:34:50.125470  310801 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 20:34:50.127293  310801 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1205 20:34:50.128597  310801 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 20:34:50.133712  310801 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1205 20:34:50.133735  310801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1205 20:34:50.151910  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 20:34:50.498891  310801 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:34:50.498983  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:50.498995  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-689539 minikube.k8s.io/updated_at=2024_12_05T20_34_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=ha-689539 minikube.k8s.io/primary=true
	I1205 20:34:50.513638  310801 ops.go:34] apiserver oom_adj: -16
	I1205 20:34:50.590747  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:51.091486  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:51.591491  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:52.091553  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:52.591289  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:53.091686  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:53.194917  310801 kubeadm.go:1113] duration metric: took 2.696013148s to wait for elevateKubeSystemPrivileges
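Note: the burst of "kubectl get sa default" calls above (one roughly every 500ms) is minikube waiting for the default ServiceAccount to exist before treating the minikube-rbac cluster-admin binding for kube-system:default (applied at 20:34:50.498995) as effective; the elevateKubeSystemPrivileges metric records that wait. Roughly equivalent to the following sketch (not the real Go loop):

	until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done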
	I1205 20:34:53.194977  310801 kubeadm.go:394] duration metric: took 14.207781964s to StartCluster
	I1205 20:34:53.195006  310801 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:53.195117  310801 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:34:53.198426  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:53.198793  310801 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:34:53.198831  310801 start.go:241] waiting for startup goroutines ...
	I1205 20:34:53.198863  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:34:53.198850  310801 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:34:53.198946  310801 addons.go:69] Setting storage-provisioner=true in profile "ha-689539"
	I1205 20:34:53.198964  310801 addons.go:69] Setting default-storageclass=true in profile "ha-689539"
	I1205 20:34:53.198979  310801 addons.go:234] Setting addon storage-provisioner=true in "ha-689539"
	I1205 20:34:53.198988  310801 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-689539"
	I1205 20:34:53.199021  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:34:53.199090  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:34:53.199551  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.199570  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.199599  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:53.199609  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:53.215764  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42471
	I1205 20:34:53.216062  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38369
	I1205 20:34:53.216436  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:53.216527  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:53.217017  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:53.217050  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:53.217168  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:53.217198  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:53.217403  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:53.217563  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:53.217568  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:34:53.218173  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.218228  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:53.219954  310801 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:34:53.220226  310801 kapi.go:59] client config for ha-689539: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt", KeyFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key", CAFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:34:53.220737  310801 cert_rotation.go:140] Starting client certificate rotation controller
	I1205 20:34:53.220963  310801 addons.go:234] Setting addon default-storageclass=true in "ha-689539"
	I1205 20:34:53.221000  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:34:53.221268  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.221303  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:53.235358  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I1205 20:34:53.235938  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:53.236563  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:53.236595  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:53.236975  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:53.237206  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:34:53.237645  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41495
	I1205 20:34:53.238195  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:53.238727  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:53.238753  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:53.239124  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:53.239183  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:53.239643  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.239697  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:53.241617  310801 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:34:53.243036  310801 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:34:53.243058  310801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:34:53.243080  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:53.247044  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:53.247514  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:53.247542  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:53.247718  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:53.248011  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:53.248218  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:53.248413  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:53.257997  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I1205 20:34:53.258521  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:53.259183  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:53.259218  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:53.259691  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:53.259961  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:34:53.262068  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:53.262345  310801 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:34:53.262363  310801 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:34:53.262386  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:53.265363  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:53.265818  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:53.265848  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:53.266018  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:53.266213  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:53.266327  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:53.266435  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:53.311906  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 20:34:53.428778  310801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:34:53.457287  310801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:34:53.655441  310801 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
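Note: the long sed pipeline at 20:34:53.311906 rewrites the coredns ConfigMap so that pods can resolve host.minikube.internal to the host-side bridge address. Judging from the sed expressions, the Corefile gains a block roughly like the following (plus a "log" directive ahead of "errors"):

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}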
	I1205 20:34:53.958432  310801 main.go:141] libmachine: Making call to close driver server
	I1205 20:34:53.958460  310801 main.go:141] libmachine: (ha-689539) Calling .Close
	I1205 20:34:53.958502  310801 main.go:141] libmachine: Making call to close driver server
	I1205 20:34:53.958541  310801 main.go:141] libmachine: (ha-689539) Calling .Close
	I1205 20:34:53.958824  310801 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:34:53.958842  310801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:34:53.958852  310801 main.go:141] libmachine: Making call to close driver server
	I1205 20:34:53.958860  310801 main.go:141] libmachine: (ha-689539) Calling .Close
	I1205 20:34:53.958920  310801 main.go:141] libmachine: (ha-689539) DBG | Closing plugin on server side
	I1205 20:34:53.958929  310801 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:34:53.958944  310801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:34:53.958951  310801 main.go:141] libmachine: Making call to close driver server
	I1205 20:34:53.958957  310801 main.go:141] libmachine: (ha-689539) Calling .Close
	I1205 20:34:53.959133  310801 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:34:53.959149  310801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:34:53.959214  310801 main.go:141] libmachine: (ha-689539) DBG | Closing plugin on server side
	I1205 20:34:53.959271  310801 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:34:53.959300  310801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:34:53.959388  310801 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 20:34:53.959421  310801 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 20:34:53.959540  310801 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1205 20:34:53.959549  310801 round_trippers.go:469] Request Headers:
	I1205 20:34:53.959559  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:34:53.959569  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:34:53.981877  310801 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I1205 20:34:53.982523  310801 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1205 20:34:53.982543  310801 round_trippers.go:469] Request Headers:
	I1205 20:34:53.982553  310801 round_trippers.go:473]     Content-Type: application/json
	I1205 20:34:53.982558  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:34:53.982562  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:34:53.985387  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:34:53.985542  310801 main.go:141] libmachine: Making call to close driver server
	I1205 20:34:53.985554  310801 main.go:141] libmachine: (ha-689539) Calling .Close
	I1205 20:34:53.985883  310801 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:34:53.985918  310801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:34:53.985939  310801 main.go:141] libmachine: (ha-689539) DBG | Closing plugin on server side
	I1205 20:34:53.987986  310801 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1205 20:34:53.989183  310801 addons.go:510] duration metric: took 790.33722ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1205 20:34:53.989228  310801 start.go:246] waiting for cluster config update ...
	I1205 20:34:53.989258  310801 start.go:255] writing updated cluster config ...
	I1205 20:34:53.991007  310801 out.go:201] 
	I1205 20:34:53.992546  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:34:53.992653  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:34:53.994377  310801 out.go:177] * Starting "ha-689539-m02" control-plane node in "ha-689539" cluster
	I1205 20:34:53.995700  310801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:34:53.995727  310801 cache.go:56] Caching tarball of preloaded images
	I1205 20:34:53.995849  310801 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:34:53.995862  310801 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:34:53.995934  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:34:53.996107  310801 start.go:360] acquireMachinesLock for ha-689539-m02: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:34:53.996153  310801 start.go:364] duration metric: took 23.521µs to acquireMachinesLock for "ha-689539-m02"
	I1205 20:34:53.996172  310801 start.go:93] Provisioning new machine with config: &{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:34:53.996237  310801 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1205 20:34:53.998557  310801 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 20:34:53.998670  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.998722  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:54.015008  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43505
	I1205 20:34:54.015521  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:54.016066  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:54.016091  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:54.016507  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:54.016709  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetMachineName
	I1205 20:34:54.016933  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:34:54.017199  310801 start.go:159] libmachine.API.Create for "ha-689539" (driver="kvm2")
	I1205 20:34:54.017236  310801 client.go:168] LocalClient.Create starting
	I1205 20:34:54.017303  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem
	I1205 20:34:54.017352  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:34:54.017375  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:34:54.017449  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem
	I1205 20:34:54.017479  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:34:54.017495  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:34:54.017521  310801 main.go:141] libmachine: Running pre-create checks...
	I1205 20:34:54.017533  310801 main.go:141] libmachine: (ha-689539-m02) Calling .PreCreateCheck
	I1205 20:34:54.017789  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetConfigRaw
	I1205 20:34:54.018296  310801 main.go:141] libmachine: Creating machine...
	I1205 20:34:54.018313  310801 main.go:141] libmachine: (ha-689539-m02) Calling .Create
	I1205 20:34:54.018519  310801 main.go:141] libmachine: (ha-689539-m02) Creating KVM machine...
	I1205 20:34:54.019903  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found existing default KVM network
	I1205 20:34:54.020058  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found existing private KVM network mk-ha-689539
	I1205 20:34:54.020167  310801 main.go:141] libmachine: (ha-689539-m02) Setting up store path in /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02 ...
	I1205 20:34:54.020190  310801 main.go:141] libmachine: (ha-689539-m02) Building disk image from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 20:34:54.020273  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:54.020159  311180 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:34:54.020403  310801 main.go:141] libmachine: (ha-689539-m02) Downloading /home/jenkins/minikube-integration/20053-293485/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 20:34:54.317847  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:54.317662  311180 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa...
	I1205 20:34:54.529086  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:54.528946  311180 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/ha-689539-m02.rawdisk...
	I1205 20:34:54.529124  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Writing magic tar header
	I1205 20:34:54.529140  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Writing SSH key tar header
	I1205 20:34:54.529158  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:54.529070  311180 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02 ...
	I1205 20:34:54.529265  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02
	I1205 20:34:54.529295  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02 (perms=drwx------)
	I1205 20:34:54.529308  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines
	I1205 20:34:54.529327  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:34:54.529337  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485
	I1205 20:34:54.529349  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 20:34:54.529360  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins
	I1205 20:34:54.529372  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines (perms=drwxr-xr-x)
	I1205 20:34:54.529383  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home
	I1205 20:34:54.529398  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube (perms=drwxr-xr-x)
	I1205 20:34:54.529416  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485 (perms=drwxrwxr-x)
	I1205 20:34:54.529429  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 20:34:54.529443  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 20:34:54.529454  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Skipping /home - not owner
	I1205 20:34:54.529461  310801 main.go:141] libmachine: (ha-689539-m02) Creating domain...
	I1205 20:34:54.530562  310801 main.go:141] libmachine: (ha-689539-m02) define libvirt domain using xml: 
	I1205 20:34:54.530603  310801 main.go:141] libmachine: (ha-689539-m02) <domain type='kvm'>
	I1205 20:34:54.530622  310801 main.go:141] libmachine: (ha-689539-m02)   <name>ha-689539-m02</name>
	I1205 20:34:54.530636  310801 main.go:141] libmachine: (ha-689539-m02)   <memory unit='MiB'>2200</memory>
	I1205 20:34:54.530645  310801 main.go:141] libmachine: (ha-689539-m02)   <vcpu>2</vcpu>
	I1205 20:34:54.530652  310801 main.go:141] libmachine: (ha-689539-m02)   <features>
	I1205 20:34:54.530662  310801 main.go:141] libmachine: (ha-689539-m02)     <acpi/>
	I1205 20:34:54.530667  310801 main.go:141] libmachine: (ha-689539-m02)     <apic/>
	I1205 20:34:54.530672  310801 main.go:141] libmachine: (ha-689539-m02)     <pae/>
	I1205 20:34:54.530676  310801 main.go:141] libmachine: (ha-689539-m02)     
	I1205 20:34:54.530682  310801 main.go:141] libmachine: (ha-689539-m02)   </features>
	I1205 20:34:54.530687  310801 main.go:141] libmachine: (ha-689539-m02)   <cpu mode='host-passthrough'>
	I1205 20:34:54.530691  310801 main.go:141] libmachine: (ha-689539-m02)   
	I1205 20:34:54.530700  310801 main.go:141] libmachine: (ha-689539-m02)   </cpu>
	I1205 20:34:54.530705  310801 main.go:141] libmachine: (ha-689539-m02)   <os>
	I1205 20:34:54.530714  310801 main.go:141] libmachine: (ha-689539-m02)     <type>hvm</type>
	I1205 20:34:54.530720  310801 main.go:141] libmachine: (ha-689539-m02)     <boot dev='cdrom'/>
	I1205 20:34:54.530727  310801 main.go:141] libmachine: (ha-689539-m02)     <boot dev='hd'/>
	I1205 20:34:54.530733  310801 main.go:141] libmachine: (ha-689539-m02)     <bootmenu enable='no'/>
	I1205 20:34:54.530737  310801 main.go:141] libmachine: (ha-689539-m02)   </os>
	I1205 20:34:54.530742  310801 main.go:141] libmachine: (ha-689539-m02)   <devices>
	I1205 20:34:54.530747  310801 main.go:141] libmachine: (ha-689539-m02)     <disk type='file' device='cdrom'>
	I1205 20:34:54.530762  310801 main.go:141] libmachine: (ha-689539-m02)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/boot2docker.iso'/>
	I1205 20:34:54.530777  310801 main.go:141] libmachine: (ha-689539-m02)       <target dev='hdc' bus='scsi'/>
	I1205 20:34:54.530792  310801 main.go:141] libmachine: (ha-689539-m02)       <readonly/>
	I1205 20:34:54.530801  310801 main.go:141] libmachine: (ha-689539-m02)     </disk>
	I1205 20:34:54.530835  310801 main.go:141] libmachine: (ha-689539-m02)     <disk type='file' device='disk'>
	I1205 20:34:54.530866  310801 main.go:141] libmachine: (ha-689539-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 20:34:54.530888  310801 main.go:141] libmachine: (ha-689539-m02)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/ha-689539-m02.rawdisk'/>
	I1205 20:34:54.530900  310801 main.go:141] libmachine: (ha-689539-m02)       <target dev='hda' bus='virtio'/>
	I1205 20:34:54.530910  310801 main.go:141] libmachine: (ha-689539-m02)     </disk>
	I1205 20:34:54.530920  310801 main.go:141] libmachine: (ha-689539-m02)     <interface type='network'>
	I1205 20:34:54.530930  310801 main.go:141] libmachine: (ha-689539-m02)       <source network='mk-ha-689539'/>
	I1205 20:34:54.530940  310801 main.go:141] libmachine: (ha-689539-m02)       <model type='virtio'/>
	I1205 20:34:54.530948  310801 main.go:141] libmachine: (ha-689539-m02)     </interface>
	I1205 20:34:54.530963  310801 main.go:141] libmachine: (ha-689539-m02)     <interface type='network'>
	I1205 20:34:54.531000  310801 main.go:141] libmachine: (ha-689539-m02)       <source network='default'/>
	I1205 20:34:54.531021  310801 main.go:141] libmachine: (ha-689539-m02)       <model type='virtio'/>
	I1205 20:34:54.531046  310801 main.go:141] libmachine: (ha-689539-m02)     </interface>
	I1205 20:34:54.531060  310801 main.go:141] libmachine: (ha-689539-m02)     <serial type='pty'>
	I1205 20:34:54.531070  310801 main.go:141] libmachine: (ha-689539-m02)       <target port='0'/>
	I1205 20:34:54.531080  310801 main.go:141] libmachine: (ha-689539-m02)     </serial>
	I1205 20:34:54.531092  310801 main.go:141] libmachine: (ha-689539-m02)     <console type='pty'>
	I1205 20:34:54.531101  310801 main.go:141] libmachine: (ha-689539-m02)       <target type='serial' port='0'/>
	I1205 20:34:54.531113  310801 main.go:141] libmachine: (ha-689539-m02)     </console>
	I1205 20:34:54.531124  310801 main.go:141] libmachine: (ha-689539-m02)     <rng model='virtio'>
	I1205 20:34:54.531149  310801 main.go:141] libmachine: (ha-689539-m02)       <backend model='random'>/dev/random</backend>
	I1205 20:34:54.531171  310801 main.go:141] libmachine: (ha-689539-m02)     </rng>
	I1205 20:34:54.531193  310801 main.go:141] libmachine: (ha-689539-m02)     
	I1205 20:34:54.531210  310801 main.go:141] libmachine: (ha-689539-m02)     
	I1205 20:34:54.531219  310801 main.go:141] libmachine: (ha-689539-m02)   </devices>
	I1205 20:34:54.531228  310801 main.go:141] libmachine: (ha-689539-m02) </domain>
	I1205 20:34:54.531253  310801 main.go:141] libmachine: (ha-689539-m02) 
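Note: the domain XML above gives ha-689539-m02 2 vCPUs and 2200 MiB of RAM, boots the boot2docker ISO from a SCSI CD-ROM, attaches the raw disk over virtio, and wires two virtio NICs: one on the private mk-ha-689539 network (the cluster network, where 192.168.39.224 is later leased) and one on libvirt's default NAT network. Once defined, the domain and its lease can be inspected with standard libvirt tooling, e.g.:

	virsh dumpxml ha-689539-m02
	virsh net-dhcp-leases mk-ha-689539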
	I1205 20:34:54.538318  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:db:6c:41 in network default
	I1205 20:34:54.538874  310801 main.go:141] libmachine: (ha-689539-m02) Ensuring networks are active...
	I1205 20:34:54.538905  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:54.539900  310801 main.go:141] libmachine: (ha-689539-m02) Ensuring network default is active
	I1205 20:34:54.540256  310801 main.go:141] libmachine: (ha-689539-m02) Ensuring network mk-ha-689539 is active
	I1205 20:34:54.540685  310801 main.go:141] libmachine: (ha-689539-m02) Getting domain xml...
	I1205 20:34:54.541702  310801 main.go:141] libmachine: (ha-689539-m02) Creating domain...
	I1205 20:34:55.795769  310801 main.go:141] libmachine: (ha-689539-m02) Waiting to get IP...
	I1205 20:34:55.796704  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:55.797107  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:55.797137  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:55.797080  311180 retry.go:31] will retry after 248.666925ms: waiting for machine to come up
	I1205 20:34:56.047775  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:56.048308  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:56.048345  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:56.048228  311180 retry.go:31] will retry after 275.164049ms: waiting for machine to come up
	I1205 20:34:56.324858  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:56.325265  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:56.325293  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:56.325230  311180 retry.go:31] will retry after 471.642082ms: waiting for machine to come up
	I1205 20:34:56.798901  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:56.799411  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:56.799445  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:56.799337  311180 retry.go:31] will retry after 372.986986ms: waiting for machine to come up
	I1205 20:34:57.173842  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:57.174284  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:57.174315  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:57.174243  311180 retry.go:31] will retry after 491.328215ms: waiting for machine to come up
	I1205 20:34:57.666917  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:57.667363  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:57.667388  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:57.667340  311180 retry.go:31] will retry after 701.698041ms: waiting for machine to come up
	I1205 20:34:58.370293  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:58.370782  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:58.370813  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:58.370725  311180 retry.go:31] will retry after 750.048133ms: waiting for machine to come up
	I1205 20:34:59.121998  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:59.122452  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:59.122482  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:59.122416  311180 retry.go:31] will retry after 1.373917427s: waiting for machine to come up
	I1205 20:35:00.498001  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:00.498527  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:00.498564  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:00.498461  311180 retry.go:31] will retry after 1.273603268s: waiting for machine to come up
	I1205 20:35:01.773536  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:01.774024  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:01.774055  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:01.773976  311180 retry.go:31] will retry after 1.863052543s: waiting for machine to come up
	I1205 20:35:03.640228  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:03.640744  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:03.640780  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:03.640681  311180 retry.go:31] will retry after 2.126872214s: waiting for machine to come up
	I1205 20:35:05.768939  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:05.769465  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:05.769495  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:05.769419  311180 retry.go:31] will retry after 2.492593838s: waiting for machine to come up
	I1205 20:35:08.265013  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:08.265518  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:08.265557  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:08.265445  311180 retry.go:31] will retry after 4.136586499s: waiting for machine to come up
	I1205 20:35:12.405674  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:12.406165  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:12.406195  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:12.406099  311180 retry.go:31] will retry after 4.175170751s: waiting for machine to come up
	I1205 20:35:16.583008  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:16.583448  310801 main.go:141] libmachine: (ha-689539-m02) Found IP for machine: 192.168.39.224
	I1205 20:35:16.583483  310801 main.go:141] libmachine: (ha-689539-m02) Reserving static IP address...
	I1205 20:35:16.583508  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has current primary IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:16.583773  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find host DHCP lease matching {name: "ha-689539-m02", mac: "52:54:00:01:ca:45", ip: "192.168.39.224"} in network mk-ha-689539
	I1205 20:35:16.666774  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Getting to WaitForSSH function...
	I1205 20:35:16.666819  310801 main.go:141] libmachine: (ha-689539-m02) Reserved static IP address: 192.168.39.224
	I1205 20:35:16.666833  310801 main.go:141] libmachine: (ha-689539-m02) Waiting for SSH to be available...
	I1205 20:35:16.669680  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:16.670217  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539
	I1205 20:35:16.670248  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find defined IP address of network mk-ha-689539 interface with MAC address 52:54:00:01:ca:45
	I1205 20:35:16.670412  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Using SSH client type: external
	I1205 20:35:16.670440  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa (-rw-------)
	I1205 20:35:16.670473  310801 main.go:141] libmachine: (ha-689539-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:35:16.670490  310801 main.go:141] libmachine: (ha-689539-m02) DBG | About to run SSH command:
	I1205 20:35:16.670506  310801 main.go:141] libmachine: (ha-689539-m02) DBG | exit 0
	I1205 20:35:16.675197  310801 main.go:141] libmachine: (ha-689539-m02) DBG | SSH cmd err, output: exit status 255: 
	I1205 20:35:16.675236  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1205 20:35:16.675246  310801 main.go:141] libmachine: (ha-689539-m02) DBG | command : exit 0
	I1205 20:35:16.675253  310801 main.go:141] libmachine: (ha-689539-m02) DBG | err     : exit status 255
	I1205 20:35:16.675269  310801 main.go:141] libmachine: (ha-689539-m02) DBG | output  : 
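Note: this first WaitForSSH probe runs the external ssh with an empty host ("docker@" with no IP) because no DHCP lease for 52:54:00:01:ca:45 was visible yet, so it fails with exit status 255; the retry three seconds later (20:35:19) sees the lease for 192.168.39.224 and the same "exit 0" probe succeeds. Reconstructed from the DBG lines, the successful probe is effectively:

	ssh -F /dev/null -o ConnectTimeout=10 -o StrictHostKeyChecking=no \
	    -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
	    -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa \
	    -p 22 docker@192.168.39.224 "exit 0"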
	I1205 20:35:19.675465  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Getting to WaitForSSH function...
	I1205 20:35:19.678124  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.678615  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:19.678646  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.678752  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Using SSH client type: external
	I1205 20:35:19.678781  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa (-rw-------)
	I1205 20:35:19.678817  310801 main.go:141] libmachine: (ha-689539-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.224 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:35:19.678840  310801 main.go:141] libmachine: (ha-689539-m02) DBG | About to run SSH command:
	I1205 20:35:19.678857  310801 main.go:141] libmachine: (ha-689539-m02) DBG | exit 0
	I1205 20:35:19.805836  310801 main.go:141] libmachine: (ha-689539-m02) DBG | SSH cmd err, output: <nil>: 
	I1205 20:35:19.806152  310801 main.go:141] libmachine: (ha-689539-m02) KVM machine creation complete!
	I1205 20:35:19.806464  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetConfigRaw
	I1205 20:35:19.807084  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:19.807313  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:19.807474  310801 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 20:35:19.807492  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetState
	I1205 20:35:19.808787  310801 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 20:35:19.808804  310801 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 20:35:19.808811  310801 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 20:35:19.808818  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:19.811344  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.811714  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:19.811743  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.811928  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:19.812132  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:19.812273  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:19.812422  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:19.812622  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:19.812860  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:19.812871  310801 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 20:35:19.921262  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:35:19.921299  310801 main.go:141] libmachine: Detecting the provisioner...
	I1205 20:35:19.921312  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:19.924600  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.925051  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:19.925075  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.925275  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:19.925497  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:19.925651  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:19.925794  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:19.925996  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:19.926221  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:19.926235  310801 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 20:35:20.039067  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 20:35:20.039180  310801 main.go:141] libmachine: found compatible host: buildroot
	I1205 20:35:20.039192  310801 main.go:141] libmachine: Provisioning with buildroot...
	I1205 20:35:20.039205  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetMachineName
	I1205 20:35:20.039552  310801 buildroot.go:166] provisioning hostname "ha-689539-m02"
	I1205 20:35:20.039589  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetMachineName
	I1205 20:35:20.039855  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.043233  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.043789  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.043820  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.044027  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:20.044236  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.044433  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.044659  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:20.044843  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:20.045030  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:20.045042  310801 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-689539-m02 && echo "ha-689539-m02" | sudo tee /etc/hostname
	I1205 20:35:20.173519  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-689539-m02
	
	I1205 20:35:20.173562  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.176643  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.176967  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.176994  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.177264  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:20.177464  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.177721  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.177868  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:20.178085  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:20.178312  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:20.178329  310801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-689539-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-689539-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-689539-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:35:20.299145  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:35:20.299194  310801 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 20:35:20.299221  310801 buildroot.go:174] setting up certificates
	I1205 20:35:20.299251  310801 provision.go:84] configureAuth start
	I1205 20:35:20.299278  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetMachineName
	I1205 20:35:20.299618  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetIP
	I1205 20:35:20.302873  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.303197  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.303234  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.303352  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.305836  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.306274  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.306298  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.306450  310801 provision.go:143] copyHostCerts
	I1205 20:35:20.306489  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:35:20.306536  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 20:35:20.306547  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:35:20.306613  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 20:35:20.306694  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:35:20.306712  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 20:35:20.306719  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:35:20.306743  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 20:35:20.306790  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:35:20.306807  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 20:35:20.306813  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:35:20.306832  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 20:35:20.306880  310801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.ha-689539-m02 san=[127.0.0.1 192.168.39.224 ha-689539-m02 localhost minikube]
	I1205 20:35:20.462180  310801 provision.go:177] copyRemoteCerts
	I1205 20:35:20.462244  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:35:20.462273  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.465164  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.465498  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.465526  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.465765  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:20.465979  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.466125  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:20.466256  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa Username:docker}
	I1205 20:35:20.552142  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 20:35:20.552248  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:35:20.577611  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 20:35:20.577693  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 20:35:20.602829  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 20:35:20.602927  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 20:35:20.629296  310801 provision.go:87] duration metric: took 330.013316ms to configureAuth
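For reference, the server certificate produced during configureAuth above carries the SAN set logged earlier (127.0.0.1, the node IP 192.168.39.224, the hostname, localhost, minikube) and is signed by the shared minikube CA. A self-contained sketch of issuing such a certificate with Go's standard library — the in-memory CA and the SAN values are placeholders for illustration, not minikube's actual code path:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Placeholder CA: in practice this is loaded from ca.pem / ca-key.pem.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the IP and DNS SANs shown in the log.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "ha-689539-m02"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.224")},
		DNSNames:     []string{"ha-689539-m02", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}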
	I1205 20:35:20.629334  310801 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:35:20.629554  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:35:20.629672  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.632608  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.633010  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.633046  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.633219  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:20.633418  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.633617  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.633785  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:20.634021  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:20.634203  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:20.634221  310801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:35:20.861660  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:35:20.861695  310801 main.go:141] libmachine: Checking connection to Docker...
	I1205 20:35:20.861706  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetURL
	I1205 20:35:20.863182  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Using libvirt version 6000000
	I1205 20:35:20.865580  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.866002  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.866022  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.866305  310801 main.go:141] libmachine: Docker is up and running!
	I1205 20:35:20.866329  310801 main.go:141] libmachine: Reticulating splines...
	I1205 20:35:20.866337  310801 client.go:171] duration metric: took 26.849092016s to LocalClient.Create
	I1205 20:35:20.866366  310801 start.go:167] duration metric: took 26.849169654s to libmachine.API.Create "ha-689539"
	I1205 20:35:20.866385  310801 start.go:293] postStartSetup for "ha-689539-m02" (driver="kvm2")
	I1205 20:35:20.866396  310801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:35:20.866415  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:20.866737  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:35:20.866782  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.869117  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.869511  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.869539  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.869712  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:20.869922  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.870094  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:20.870213  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa Username:docker}
	I1205 20:35:20.956165  310801 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:35:20.960554  310801 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:35:20.960593  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 20:35:20.960663  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 20:35:20.960745  310801 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 20:35:20.960756  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /etc/ssl/certs/3007652.pem
	I1205 20:35:20.960845  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:35:20.970171  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:35:20.993469  310801 start.go:296] duration metric: took 127.065366ms for postStartSetup
	I1205 20:35:20.993548  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetConfigRaw
	I1205 20:35:20.994261  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetIP
	I1205 20:35:20.996956  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.997403  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.997431  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.997694  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:35:20.997894  310801 start.go:128] duration metric: took 27.001645944s to createHost
	I1205 20:35:20.997947  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:21.000356  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.000768  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:21.000793  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.000932  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:21.001164  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:21.001372  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:21.001567  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:21.001800  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:21.002023  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:21.002035  310801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:35:21.114783  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430921.091468988
	
	I1205 20:35:21.114813  310801 fix.go:216] guest clock: 1733430921.091468988
	I1205 20:35:21.114823  310801 fix.go:229] Guest: 2024-12-05 20:35:21.091468988 +0000 UTC Remote: 2024-12-05 20:35:20.997930274 +0000 UTC m=+72.965807310 (delta=93.538714ms)
	I1205 20:35:21.114853  310801 fix.go:200] guest clock delta is within tolerance: 93.538714ms
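The fix.go lines above read the guest's clock via "date +%s.%N" and compare it with the host clock; the machine is accepted because the ~93ms skew is small. A tiny sketch of that delta check using the exact values from the log — the one-second tolerance is an assumption for illustration only:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the log above: guest epoch seconds/nanoseconds vs. host time.
	guest := time.Unix(1733430921, 91468988) // 2024-12-05 20:35:21.091468988 UTC
	host := time.Date(2024, 12, 5, 20, 35, 20, 997930274, time.UTC)

	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 1 * time.Second // assumed threshold, for illustration
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}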
	I1205 20:35:21.114861  310801 start.go:83] releasing machines lock for "ha-689539-m02", held for 27.118697006s
	I1205 20:35:21.114886  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:21.115206  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetIP
	I1205 20:35:21.118066  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.118466  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:21.118504  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.121045  310801 out.go:177] * Found network options:
	I1205 20:35:21.122608  310801 out.go:177]   - NO_PROXY=192.168.39.220
	W1205 20:35:21.124023  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:35:21.124097  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:21.124832  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:21.125105  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:21.125251  310801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:35:21.125326  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	W1205 20:35:21.125332  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:35:21.125435  310801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:35:21.125468  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:21.128474  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.128563  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.128871  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:21.128901  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.129000  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:21.129022  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:21.129073  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.129233  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:21.129232  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:21.129435  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:21.129437  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:21.129634  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:21.129634  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa Username:docker}
	I1205 20:35:21.129803  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa Username:docker}
	I1205 20:35:21.365680  310801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:35:21.371668  310801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:35:21.371782  310801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:35:21.388230  310801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:35:21.388261  310801 start.go:495] detecting cgroup driver to use...
	I1205 20:35:21.388348  310801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:35:21.404768  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:35:21.419149  310801 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:35:21.419231  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:35:21.433110  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:35:21.447375  310801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:35:21.563926  310801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:35:21.729278  310801 docker.go:233] disabling docker service ...
	I1205 20:35:21.729378  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:35:21.744065  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:35:21.757106  310801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:35:21.878877  310801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:35:21.983688  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:35:21.997947  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:35:22.016485  310801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:35:22.016555  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.027185  310801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:35:22.027270  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.037892  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.048316  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.059131  310801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:35:22.075255  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.086233  310801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.103682  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.114441  310801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:35:22.124360  310801 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:35:22.124442  310801 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:35:22.138043  310801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
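The three commands above are the usual bridge-netfilter prerequisites for a CNI-backed runtime: probe the sysctl, load br_netfilter when the probe fails (as it does here), and enable IPv4 forwarding. A small sketch of the same sequence against the standard procfs paths — a generic illustration requiring root, not minikube's actual implementation:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// If the bridge-nf sysctl file is absent, the br_netfilter module is not loaded yet.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		// Load br_netfilter so bridged traffic becomes visible to iptables.
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe failed: %v: %s\n", err, out)
			return
		}
	}
	// Enable IPv4 forwarding, equivalent to: echo 1 > /proc/sys/net/ipv4/ip_forward
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0644); err != nil {
		fmt.Println(err)
	}
}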
	I1205 20:35:22.147996  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:35:22.253398  310801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:35:22.348717  310801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:35:22.348790  310801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:35:22.353405  310801 start.go:563] Will wait 60s for crictl version
	I1205 20:35:22.353468  310801 ssh_runner.go:195] Run: which crictl
	I1205 20:35:22.357215  310801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:35:22.393844  310801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:35:22.393959  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:35:22.422018  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:35:22.452780  310801 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:35:22.454193  310801 out.go:177]   - env NO_PROXY=192.168.39.220
	I1205 20:35:22.455398  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetIP
	I1205 20:35:22.458243  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:22.458611  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:22.458649  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:22.458851  310801 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:35:22.463124  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:35:22.475841  310801 mustload.go:65] Loading cluster: ha-689539
	I1205 20:35:22.476087  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:35:22.476420  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:22.476470  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:22.492198  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I1205 20:35:22.492793  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:22.493388  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:35:22.493418  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:22.493835  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:22.494104  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:35:22.495827  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:35:22.496123  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:22.496160  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:22.512684  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35311
	I1205 20:35:22.513289  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:22.513852  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:35:22.513877  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:22.514257  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:22.514474  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:35:22.514658  310801 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539 for IP: 192.168.39.224
	I1205 20:35:22.514672  310801 certs.go:194] generating shared ca certs ...
	I1205 20:35:22.514692  310801 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:22.514826  310801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 20:35:22.514868  310801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 20:35:22.514875  310801 certs.go:256] generating profile certs ...
	I1205 20:35:22.514942  310801 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key
	I1205 20:35:22.514966  310801 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.0bcaa736
	I1205 20:35:22.514982  310801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.0bcaa736 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220 192.168.39.224 192.168.39.254]
	I1205 20:35:22.799808  310801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.0bcaa736 ...
	I1205 20:35:22.799844  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.0bcaa736: {Name:mk805c9f0c218cfc1a14cc95ce5560d63a919c31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:22.800063  310801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.0bcaa736 ...
	I1205 20:35:22.800084  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.0bcaa736: {Name:mk878dc23fa761ab4aecc158abe1405fbc550219 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:22.800189  310801 certs.go:381] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.0bcaa736 -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt
	I1205 20:35:22.800337  310801 certs.go:385] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.0bcaa736 -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key
	I1205 20:35:22.800471  310801 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key
	I1205 20:35:22.800490  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:35:22.800508  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:35:22.800524  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:35:22.800539  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:35:22.800554  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 20:35:22.800569  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 20:35:22.800578  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 20:35:22.800588  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 20:35:22.800649  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 20:35:22.800680  310801 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 20:35:22.800690  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:35:22.800714  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 20:35:22.800740  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:35:22.800782  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 20:35:22.800829  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:35:22.800856  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:35:22.800870  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem -> /usr/share/ca-certificates/300765.pem
	I1205 20:35:22.800883  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /usr/share/ca-certificates/3007652.pem
	I1205 20:35:22.800924  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:35:22.803915  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:35:22.804323  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:35:22.804357  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:35:22.804510  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:35:22.804779  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:35:22.804968  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:35:22.805127  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:35:22.874336  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1205 20:35:22.878799  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1205 20:35:22.889481  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1205 20:35:22.893603  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1205 20:35:22.907201  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1205 20:35:22.911129  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1205 20:35:22.921562  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1205 20:35:22.925468  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1205 20:35:22.935462  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1205 20:35:22.939312  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1205 20:35:22.949250  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1205 20:35:22.953120  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1205 20:35:22.964047  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:35:22.988860  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:35:23.013850  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:35:23.037874  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:35:23.062975  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1205 20:35:23.087802  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:35:23.112226  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:35:23.139642  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:35:23.168141  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:35:23.193470  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 20:35:23.218935  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 20:35:23.243452  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1205 20:35:23.261775  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1205 20:35:23.279011  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1205 20:35:23.296521  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1205 20:35:23.313399  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1205 20:35:23.330608  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1205 20:35:23.349181  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1205 20:35:23.366287  310801 ssh_runner.go:195] Run: openssl version
	I1205 20:35:23.372023  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:35:23.383498  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:35:23.387933  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:35:23.388026  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:35:23.393863  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:35:23.405145  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 20:35:23.416665  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 20:35:23.421806  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 20:35:23.421882  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 20:35:23.427892  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 20:35:23.439291  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 20:35:23.450645  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 20:35:23.455301  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 20:35:23.455397  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 20:35:23.461088  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
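Each CA above is installed twice: the PEM itself under /usr/share/ca-certificates and a <subject-hash>.0 symlink under /etc/ssl/certs, which is how OpenSSL-based verifiers locate trust anchors. A sketch of that step, shelling out to openssl for the hash just as the log does — the paths and helper name are illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert computes the OpenSSL subject hash of certPath and creates the
// /etc/ssl/certs/<hash>.0 symlink that certificate verification looks up.
func linkCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any existing link, like ln -fs
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}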
	I1205 20:35:23.473062  310801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:35:23.477238  310801 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 20:35:23.477315  310801 kubeadm.go:934] updating node {m02 192.168.39.224 8443 v1.31.2 crio true true} ...
	I1205 20:35:23.477412  310801 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-689539-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:35:23.477446  310801 kube-vip.go:115] generating kube-vip config ...
	I1205 20:35:23.477488  310801 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 20:35:23.494130  310801 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 20:35:23.494206  310801 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
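The kube-vip config rendered above is a static pod manifest: later in this run it is copied to /etc/kubernetes/manifests/kube-vip.yaml, where the kubelet watches for manifests and runs them without the API server. A generic sketch of dropping such a manifest into the static-pod directory (an illustration of the mechanism, not minikube's code; the helper name is invented):

package main

import (
	"os"
	"path/filepath"
)

// writeStaticPod places a manifest in the kubelet's static-pod directory; the
// kubelet watches this path and (re)creates the pod whenever the file changes.
func writeStaticPod(name string, manifest []byte) error {
	dir := "/etc/kubernetes/manifests"
	tmp := filepath.Join(dir, "."+name+".tmp") // kubelet ignores dotfiles
	if err := os.WriteFile(tmp, manifest, 0644); err != nil {
		return err
	}
	// Rename is atomic on the same filesystem, so the kubelet never sees a partial file.
	return os.Rename(tmp, filepath.Join(dir, name))
}

func main() {
	manifest := []byte("apiVersion: v1\nkind: Pod\n# ... kube-vip spec as rendered above ...\n")
	_ = writeStaticPod("kube-vip.yaml", manifest)
}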
	I1205 20:35:23.494265  310801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:35:23.504559  310801 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1205 20:35:23.504639  310801 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1205 20:35:23.515268  310801 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1205 20:35:23.515267  310801 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1205 20:35:23.515267  310801 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1205 20:35:23.515420  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 20:35:23.515485  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 20:35:23.520360  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1205 20:35:23.520397  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1205 20:35:24.329721  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 20:35:24.329837  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 20:35:24.335194  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1205 20:35:24.335241  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1205 20:35:24.693728  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:24.707996  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 20:35:24.708127  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 20:35:24.712643  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1205 20:35:24.712685  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
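	The three downloads above use dl.k8s.io URLs with a companion .sha256 file ("?checksum=file:..."), so each cached binary is verified before being copied into /var/lib/minikube/binaries. A minimal sketch of that verify step for one binary (local file name is illustrative, not the path the test uses):

	// Sketch only: download a release binary and compare it against its published SHA-256.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	func fetch(url, dst string) error {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		f, err := os.Create(dst)
		if err != nil {
			return err
		}
		defer f.Close()
		_, err = io.Copy(f, resp.Body)
		return err
	}

	func main() {
		base := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl"
		if err := fetch(base, "kubectl"); err != nil {
			panic(err)
		}
		resp, err := http.Get(base + ".sha256")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		want, _ := io.ReadAll(resp.Body)

		data, err := os.ReadFile("kubectl")
		if err != nil {
			panic(err)
		}
		sum := sha256.Sum256(data)
		got := hex.EncodeToString(sum[:])
		if got != strings.TrimSpace(string(want)) {
			panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
		}
		fmt.Println("kubectl checksum OK")
	}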
	I1205 20:35:25.030158  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1205 20:35:25.039864  310801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1205 20:35:25.056953  310801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:35:25.074038  310801 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 20:35:25.090341  310801 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 20:35:25.094291  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:35:25.106549  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:35:25.251421  310801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:35:25.281544  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:35:25.281958  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:25.282025  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:25.298815  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43001
	I1205 20:35:25.299446  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:25.299916  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:35:25.299940  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:25.300264  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:25.300471  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:35:25.300647  310801 start.go:317] joinCluster: &{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:35:25.300755  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1205 20:35:25.300777  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:35:25.303962  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:35:25.304378  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:35:25.304416  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:35:25.304612  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:35:25.304845  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:35:25.305034  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:35:25.305189  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:35:25.467206  310801 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:35:25.467286  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u7curd.swqoqc05eru6gfpp --discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-689539-m02 --control-plane --apiserver-advertise-address=192.168.39.224 --apiserver-bind-port=8443"
	I1205 20:35:47.115820  310801 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u7curd.swqoqc05eru6gfpp --discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-689539-m02 --control-plane --apiserver-advertise-address=192.168.39.224 --apiserver-bind-port=8443": (21.648499033s)
	I1205 20:35:47.115867  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1205 20:35:47.674102  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-689539-m02 minikube.k8s.io/updated_at=2024_12_05T20_35_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=ha-689539 minikube.k8s.io/primary=false
	I1205 20:35:47.783659  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-689539-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1205 20:35:47.899441  310801 start.go:319] duration metric: took 22.598789448s to joinCluster
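	The kubeadm join command above carries --discovery-token-ca-cert-hash sha256:866fe1...; kubeadm derives that value from the SHA-256 of the cluster CA certificate's Subject Public Key Info, so the joining node can pin the CA it discovers through the bootstrap token. A minimal sketch of recomputing it, assuming the default kubeadm CA path:

	// Sketch only: reproduce the value used for --discovery-token-ca-cert-hash.
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/hex"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // default kubeadm CA location
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded Subject Public Key Info of the CA certificate.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
	}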
	I1205 20:35:47.899529  310801 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:35:47.899871  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:35:47.901544  310801 out.go:177] * Verifying Kubernetes components...
	I1205 20:35:47.903164  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:35:48.171147  310801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:35:48.196654  310801 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:35:48.197028  310801 kapi.go:59] client config for ha-689539: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt", KeyFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key", CAFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1205 20:35:48.197120  310801 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.220:8443
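	The rest.Config dumped above has QPS:0 and Burst:0, so client-go falls back to its built-in defaults (5 requests/s, burst 10); that is what produces the "Waited ... due to client-side throttling" lines further down. A minimal sketch of building a client with higher limits (kubeconfig path is illustrative):

	// Sketch only: raise client-go's client-side rate limits on a rest.Config.
	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50    // default is 5 when left at 0
		cfg.Burst = 100 // default is 10 when left at 0
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println("client ready:", clientset != nil)
	}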
	I1205 20:35:48.197520  310801 node_ready.go:35] waiting up to 6m0s for node "ha-689539-m02" to be "Ready" ...
	I1205 20:35:48.197656  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:48.197669  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:48.197681  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:48.197693  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:48.214799  310801 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1205 20:35:48.697777  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:48.697812  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:48.697824  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:48.697833  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:48.703691  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:35:49.198191  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:49.198217  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:49.198225  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:49.198229  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:49.204218  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:35:49.698048  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:49.698079  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:49.698090  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:49.698096  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:49.705663  310801 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1205 20:35:50.198629  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:50.198656  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:50.198669  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:50.198675  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:50.202111  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:50.202581  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:35:50.698434  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:50.698457  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:50.698465  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:50.698469  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:50.702335  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:51.197943  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:51.197971  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:51.197981  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:51.197985  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:51.201567  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:51.698634  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:51.698668  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:51.698680  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:51.698687  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:51.702470  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:52.198285  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:52.198318  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:52.198331  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:52.198338  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:52.202116  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:52.202820  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:35:52.697909  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:52.697940  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:52.697953  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:52.697959  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:52.700998  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:53.198023  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:53.198047  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:53.198056  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:53.198059  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:53.201259  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:53.698438  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:53.698462  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:53.698478  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:53.698482  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:53.701883  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:54.198346  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:54.198373  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:54.198381  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:54.198386  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:54.202207  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:54.203013  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:35:54.698384  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:54.698407  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:54.698415  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:54.698422  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:54.703135  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:35:55.198075  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:55.198102  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:55.198111  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:55.198116  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:55.275835  310801 round_trippers.go:574] Response Status: 200 OK in 77 milliseconds
	I1205 20:35:55.698292  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:55.698327  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:55.698343  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:55.698347  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:55.701831  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:56.197819  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:56.197847  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:56.197856  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:56.197861  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:56.201202  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:56.698240  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:56.698288  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:56.698299  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:56.698304  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:56.701586  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:56.702160  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:35:57.198590  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:57.198622  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:57.198633  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:57.198638  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:57.201959  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:57.698128  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:57.698159  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:57.698170  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:57.698175  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:57.703388  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:35:58.198316  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:58.198343  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:58.198352  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:58.198357  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:58.201617  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:58.698669  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:58.698694  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:58.698706  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:58.698710  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:58.702292  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:58.702971  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:35:59.198697  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:59.198726  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:59.198739  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:59.198747  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:59.205545  310801 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 20:35:59.698504  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:59.698536  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:59.698553  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:59.698560  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:59.702266  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:00.198245  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:00.198270  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:00.198279  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:00.198283  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:00.201787  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:00.698510  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:00.698544  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:00.698553  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:00.698563  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:00.701802  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:01.197953  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:01.197983  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:01.197994  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:01.197999  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:01.201035  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:01.201711  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:36:01.698167  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:01.698198  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:01.698210  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:01.698215  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:01.701264  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:02.198110  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:02.198141  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:02.198152  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:02.198157  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:02.201468  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:02.698626  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:02.698659  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:02.698669  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:02.698675  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:02.701881  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:03.198737  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:03.198763  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:03.198774  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:03.198779  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:03.202428  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:03.202953  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:36:03.698736  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:03.698768  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:03.698780  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:03.698788  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:03.702162  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:04.197743  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:04.197773  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:04.197784  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:04.197791  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:04.201284  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:04.698126  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:04.698155  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:04.698164  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:04.698168  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:04.701888  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:05.198088  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:05.198121  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:05.198131  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:05.198138  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:05.201797  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:05.698476  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:05.698506  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:05.698515  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:05.698520  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:05.701875  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:05.702580  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:36:06.198021  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:06.198061  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.198069  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.198074  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.201540  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:06.202101  310801 node_ready.go:49] node "ha-689539-m02" has status "Ready":"True"
	I1205 20:36:06.202126  310801 node_ready.go:38] duration metric: took 18.004581739s for node "ha-689539-m02" to be "Ready" ...
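	The ~18s loop above is repeated GETs of the node object until its Ready condition turns True. An equivalent minimal sketch with client-go (node name taken from the log, kubeconfig path illustrative):

	// Sketch only: poll a node until its Ready condition reports True.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, "ha-689539-m02", metav1.GetOptions{})
				if err != nil {
					return false, nil // keep retrying on transient errors
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
		if err != nil {
			panic(err)
		}
		fmt.Println("node ha-689539-m02 is Ready")
	}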
	I1205 20:36:06.202140  310801 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:36:06.202253  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:36:06.202268  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.202278  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.202285  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.206754  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:06.212677  310801 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.212799  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4ln9l
	I1205 20:36:06.212813  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.212822  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.212827  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.215643  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.216276  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:06.216293  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.216301  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.216304  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.218813  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.219400  310801 pod_ready.go:93] pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:06.219422  310801 pod_ready.go:82] duration metric: took 6.710961ms for pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.219433  310801 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.219519  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6qhhf
	I1205 20:36:06.219530  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.219537  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.219544  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.221986  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.222730  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:06.222744  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.222752  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.222757  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.225041  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.225536  310801 pod_ready.go:93] pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:06.225559  310801 pod_ready.go:82] duration metric: took 6.118464ms for pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.225582  310801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.225656  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539
	I1205 20:36:06.225668  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.225684  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.225696  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.228280  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.228948  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:06.228962  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.228970  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.228974  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.231708  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.232206  310801 pod_ready.go:93] pod "etcd-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:06.232225  310801 pod_ready.go:82] duration metric: took 6.631337ms for pod "etcd-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.232234  310801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.232328  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539-m02
	I1205 20:36:06.232338  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.232347  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.232357  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.234717  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.235313  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:06.235328  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.235336  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.235340  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.237446  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.237958  310801 pod_ready.go:93] pod "etcd-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:06.237979  310801 pod_ready.go:82] duration metric: took 5.738833ms for pod "etcd-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.237997  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.398468  310801 request.go:632] Waited for 160.38501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539
	I1205 20:36:06.398582  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539
	I1205 20:36:06.398592  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.398601  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.398605  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.402334  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:06.598805  310801 request.go:632] Waited for 195.477134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:06.598897  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:06.598903  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.598911  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.598914  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.602945  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:06.603481  310801 pod_ready.go:93] pod "kube-apiserver-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:06.603505  310801 pod_ready.go:82] duration metric: took 365.497043ms for pod "kube-apiserver-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.603516  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.798685  310801 request.go:632] Waited for 195.084248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m02
	I1205 20:36:06.798771  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m02
	I1205 20:36:06.798776  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.798786  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.798792  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.802375  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:06.998825  310801 request.go:632] Waited for 195.407022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:06.998895  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:06.998900  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.998908  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.998913  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:07.003073  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:07.003620  310801 pod_ready.go:93] pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:07.003641  310801 pod_ready.go:82] duration metric: took 400.118288ms for pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.003652  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.198723  310801 request.go:632] Waited for 194.973944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539
	I1205 20:36:07.198815  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539
	I1205 20:36:07.198822  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:07.198834  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:07.198844  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:07.202792  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:07.398908  310801 request.go:632] Waited for 195.413458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:07.398993  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:07.399006  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:07.399019  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:07.399029  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:07.403088  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:07.403800  310801 pod_ready.go:93] pod "kube-controller-manager-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:07.403838  310801 pod_ready.go:82] duration metric: took 400.178189ms for pod "kube-controller-manager-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.403856  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.598771  310801 request.go:632] Waited for 194.816012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m02
	I1205 20:36:07.598840  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m02
	I1205 20:36:07.598845  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:07.598862  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:07.598869  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:07.602566  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:07.798831  310801 request.go:632] Waited for 195.438007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:07.798985  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:07.798998  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:07.799015  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:07.799023  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:07.803171  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:07.803823  310801 pod_ready.go:93] pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:07.803849  310801 pod_ready.go:82] duration metric: took 399.978899ms for pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.803864  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9tslx" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.998893  310801 request.go:632] Waited for 194.90975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tslx
	I1205 20:36:07.998995  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tslx
	I1205 20:36:07.999006  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:07.999033  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:07.999050  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:08.003019  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:08.198483  310801 request.go:632] Waited for 194.725493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:08.198570  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:08.198580  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:08.198588  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:08.198592  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:08.202279  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:08.202805  310801 pod_ready.go:93] pod "kube-proxy-9tslx" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:08.202824  310801 pod_ready.go:82] duration metric: took 398.949898ms for pod "kube-proxy-9tslx" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:08.202837  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x2grl" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:08.399003  310801 request.go:632] Waited for 196.061371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2grl
	I1205 20:36:08.399102  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2grl
	I1205 20:36:08.399110  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:08.399126  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:08.399137  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:08.404511  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:36:08.598657  310801 request.go:632] Waited for 193.397123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:08.598817  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:08.598829  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:08.598837  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:08.598850  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:08.602654  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:08.603461  310801 pod_ready.go:93] pod "kube-proxy-x2grl" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:08.603483  310801 pod_ready.go:82] duration metric: took 400.640164ms for pod "kube-proxy-x2grl" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:08.603494  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:08.798579  310801 request.go:632] Waited for 194.963606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539
	I1205 20:36:08.798669  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539
	I1205 20:36:08.798680  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:08.798692  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:08.798704  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:08.802678  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:08.998854  310801 request.go:632] Waited for 195.447294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:08.998947  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:08.998954  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:08.998964  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:08.998970  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.003138  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:09.003792  310801 pod_ready.go:93] pod "kube-scheduler-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:09.003821  310801 pod_ready.go:82] duration metric: took 400.319353ms for pod "kube-scheduler-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:09.003837  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:09.198016  310801 request.go:632] Waited for 194.088845ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m02
	I1205 20:36:09.198132  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m02
	I1205 20:36:09.198145  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.198158  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.198165  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.201958  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:09.398942  310801 request.go:632] Waited for 196.371567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:09.399024  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:09.399033  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.399044  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.399050  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.402750  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:09.403404  310801 pod_ready.go:93] pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:09.403436  310801 pod_ready.go:82] duration metric: took 399.590034ms for pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:09.403451  310801 pod_ready.go:39] duration metric: took 3.201294497s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:36:09.403471  310801 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:36:09.403551  310801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:36:09.418357  310801 api_server.go:72] duration metric: took 21.51878718s to wait for apiserver process to appear ...
	I1205 20:36:09.418390  310801 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:36:09.418420  310801 api_server.go:253] Checking apiserver healthz at https://192.168.39.220:8443/healthz ...
	I1205 20:36:09.425381  310801 api_server.go:279] https://192.168.39.220:8443/healthz returned 200:
	ok
	I1205 20:36:09.425471  310801 round_trippers.go:463] GET https://192.168.39.220:8443/version
	I1205 20:36:09.425479  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.425488  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.425494  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.426343  310801 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1205 20:36:09.426447  310801 api_server.go:141] control plane version: v1.31.2
	I1205 20:36:09.426464  310801 api_server.go:131] duration metric: took 8.067774ms to wait for apiserver health ...
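	The health check above is an HTTPS GET of /healthz on the leader API server, which answers 200 with the body "ok". A minimal sketch of the same probe, trusting the cluster CA from the profile directory (paths copied from the log purely for illustration, and assuming /healthz is reachable without client credentials, as the default RBAC rules allow):

	// Sketch only: probe the apiserver /healthz endpoint with the cluster CA.
	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func main() {
		caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt")
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		if !pool.AppendCertsFromPEM(caPEM) {
			panic("could not parse CA certificate")
		}
		client := &http.Client{
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{RootCAs: pool},
			},
		}
		resp, err := client.Get("https://192.168.39.220:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
	}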
	I1205 20:36:09.426481  310801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:36:09.598951  310801 request.go:632] Waited for 172.364571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:36:09.599024  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:36:09.599030  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.599038  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.599042  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.603442  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:09.609057  310801 system_pods.go:59] 17 kube-system pods found
	I1205 20:36:09.609099  310801 system_pods.go:61] "coredns-7c65d6cfc9-4ln9l" [f86a233b-c3f8-416b-ac76-f18dac2a1a2c] Running
	I1205 20:36:09.609107  310801 system_pods.go:61] "coredns-7c65d6cfc9-6qhhf" [4ffff988-65eb-4585-8ce4-de4df28c6b82] Running
	I1205 20:36:09.609113  310801 system_pods.go:61] "etcd-ha-689539" [f8de63bf-a7cf-431d-bd57-ec91b43c6ce3] Running
	I1205 20:36:09.609121  310801 system_pods.go:61] "etcd-ha-689539-m02" [a0336d41-b57f-414b-aa98-2540bdde7ca0] Running
	I1205 20:36:09.609126  310801 system_pods.go:61] "kindnet-62qw6" [9f0039aa-d5e2-49b9-adb4-ad93c96d22f0] Running
	I1205 20:36:09.609130  310801 system_pods.go:61] "kindnet-b7bf2" [ea96240c-48bf-4f92-b12c-f8e623a59784] Running
	I1205 20:36:09.609136  310801 system_pods.go:61] "kube-apiserver-ha-689539" [ecbcba0b-10ce-4bd6-84f6-8b46c3d99ad6] Running
	I1205 20:36:09.609142  310801 system_pods.go:61] "kube-apiserver-ha-689539-m02" [0c0d9613-c605-4e61-b778-c5aefa5919e9] Running
	I1205 20:36:09.609149  310801 system_pods.go:61] "kube-controller-manager-ha-689539" [859c6551-f504-4093-a730-2ba8f127e3e7] Running
	I1205 20:36:09.609159  310801 system_pods.go:61] "kube-controller-manager-ha-689539-m02" [0b119866-007c-4c4e-abfa-a38405b85cc9] Running
	I1205 20:36:09.609165  310801 system_pods.go:61] "kube-proxy-9tslx" [3d107dc4-2d8c-4e0d-aafc-5229161537df] Running
	I1205 20:36:09.609174  310801 system_pods.go:61] "kube-proxy-x2grl" [20dd0c16-858c-4d07-8305-ffedb52a4ee1] Running
	I1205 20:36:09.609180  310801 system_pods.go:61] "kube-scheduler-ha-689539" [2ba99954-c00c-4fa6-af5d-6d4725fa051a] Running
	I1205 20:36:09.609186  310801 system_pods.go:61] "kube-scheduler-ha-689539-m02" [d1ad2b21-b52c-47dd-ab09-2368ffeb3c7e] Running
	I1205 20:36:09.609192  310801 system_pods.go:61] "kube-vip-ha-689539" [345f79e6-90ea-47f8-9e7f-c461a1143ba0] Running
	I1205 20:36:09.609200  310801 system_pods.go:61] "kube-vip-ha-689539-m02" [265c4a3f-0e44-43fd-bcee-35513e8e2525] Running
	I1205 20:36:09.609207  310801 system_pods.go:61] "storage-provisioner" [e2a03e66-0718-48a3-9658-f70118ce6cae] Running
	I1205 20:36:09.609218  310801 system_pods.go:74] duration metric: took 182.726007ms to wait for pod list to return data ...
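	[editor note] The "Waited for ... due to client-side throttling, not priority and fairness" messages above come from client-go's default client-side rate limiter (QPS 5, burst 10), not from the server. A minimal sketch of how a client can raise those limits is below, assuming a standard kubeconfig; the QPS/Burst values are illustrative, not what minikube configures.

```go
// Illustrative sketch: raise client-go's request rate limits so short bursts
// of GETs (like the pod/serviceaccount/node listings above) are not delayed.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load ~/.kube/config; RecommendedHomeFile is client-go's default path.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // default is 5 requests/second
	cfg.Burst = 100 // default burst is 10
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", clientset)
}
```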
	I1205 20:36:09.609232  310801 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:36:09.798716  310801 request.go:632] Waited for 189.385773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:36:09.798784  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:36:09.798789  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.798797  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.798800  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.803434  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:09.803720  310801 default_sa.go:45] found service account: "default"
	I1205 20:36:09.803742  310801 default_sa.go:55] duration metric: took 194.50158ms for default service account to be created ...
	I1205 20:36:09.803755  310801 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:36:09.998902  310801 request.go:632] Waited for 195.036574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:36:09.998984  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:36:09.998992  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.999004  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.999012  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:10.005341  310801 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 20:36:10.009685  310801 system_pods.go:86] 17 kube-system pods found
	I1205 20:36:10.009721  310801 system_pods.go:89] "coredns-7c65d6cfc9-4ln9l" [f86a233b-c3f8-416b-ac76-f18dac2a1a2c] Running
	I1205 20:36:10.009733  310801 system_pods.go:89] "coredns-7c65d6cfc9-6qhhf" [4ffff988-65eb-4585-8ce4-de4df28c6b82] Running
	I1205 20:36:10.009739  310801 system_pods.go:89] "etcd-ha-689539" [f8de63bf-a7cf-431d-bd57-ec91b43c6ce3] Running
	I1205 20:36:10.009745  310801 system_pods.go:89] "etcd-ha-689539-m02" [a0336d41-b57f-414b-aa98-2540bdde7ca0] Running
	I1205 20:36:10.009751  310801 system_pods.go:89] "kindnet-62qw6" [9f0039aa-d5e2-49b9-adb4-ad93c96d22f0] Running
	I1205 20:36:10.009756  310801 system_pods.go:89] "kindnet-b7bf2" [ea96240c-48bf-4f92-b12c-f8e623a59784] Running
	I1205 20:36:10.009760  310801 system_pods.go:89] "kube-apiserver-ha-689539" [ecbcba0b-10ce-4bd6-84f6-8b46c3d99ad6] Running
	I1205 20:36:10.009770  310801 system_pods.go:89] "kube-apiserver-ha-689539-m02" [0c0d9613-c605-4e61-b778-c5aefa5919e9] Running
	I1205 20:36:10.009774  310801 system_pods.go:89] "kube-controller-manager-ha-689539" [859c6551-f504-4093-a730-2ba8f127e3e7] Running
	I1205 20:36:10.009778  310801 system_pods.go:89] "kube-controller-manager-ha-689539-m02" [0b119866-007c-4c4e-abfa-a38405b85cc9] Running
	I1205 20:36:10.009782  310801 system_pods.go:89] "kube-proxy-9tslx" [3d107dc4-2d8c-4e0d-aafc-5229161537df] Running
	I1205 20:36:10.009786  310801 system_pods.go:89] "kube-proxy-x2grl" [20dd0c16-858c-4d07-8305-ffedb52a4ee1] Running
	I1205 20:36:10.009789  310801 system_pods.go:89] "kube-scheduler-ha-689539" [2ba99954-c00c-4fa6-af5d-6d4725fa051a] Running
	I1205 20:36:10.009794  310801 system_pods.go:89] "kube-scheduler-ha-689539-m02" [d1ad2b21-b52c-47dd-ab09-2368ffeb3c7e] Running
	I1205 20:36:10.009797  310801 system_pods.go:89] "kube-vip-ha-689539" [345f79e6-90ea-47f8-9e7f-c461a1143ba0] Running
	I1205 20:36:10.009802  310801 system_pods.go:89] "kube-vip-ha-689539-m02" [265c4a3f-0e44-43fd-bcee-35513e8e2525] Running
	I1205 20:36:10.009805  310801 system_pods.go:89] "storage-provisioner" [e2a03e66-0718-48a3-9658-f70118ce6cae] Running
	I1205 20:36:10.009814  310801 system_pods.go:126] duration metric: took 206.05156ms to wait for k8s-apps to be running ...
	I1205 20:36:10.009825  310801 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:36:10.009874  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:36:10.025329  310801 system_svc.go:56] duration metric: took 15.491147ms WaitForService to wait for kubelet
	I1205 20:36:10.025382  310801 kubeadm.go:582] duration metric: took 22.125819174s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:36:10.025410  310801 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:36:10.199031  310801 request.go:632] Waited for 173.477614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes
	I1205 20:36:10.199134  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes
	I1205 20:36:10.199143  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:10.199154  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:10.199159  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:10.202963  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:10.203807  310801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:36:10.203836  310801 node_conditions.go:123] node cpu capacity is 2
	I1205 20:36:10.203848  310801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:36:10.203851  310801 node_conditions.go:123] node cpu capacity is 2
	I1205 20:36:10.203855  310801 node_conditions.go:105] duration metric: took 178.44033ms to run NodePressure ...
	I1205 20:36:10.203870  310801 start.go:241] waiting for startup goroutines ...
	I1205 20:36:10.203895  310801 start.go:255] writing updated cluster config ...
	I1205 20:36:10.205987  310801 out.go:201] 
	I1205 20:36:10.207492  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:36:10.207614  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:36:10.209270  310801 out.go:177] * Starting "ha-689539-m03" control-plane node in "ha-689539" cluster
	I1205 20:36:10.210621  310801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:36:10.210654  310801 cache.go:56] Caching tarball of preloaded images
	I1205 20:36:10.210766  310801 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:36:10.210778  310801 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:36:10.210880  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:36:10.211060  310801 start.go:360] acquireMachinesLock for ha-689539-m03: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:36:10.211107  310801 start.go:364] duration metric: took 26.599µs to acquireMachinesLock for "ha-689539-m03"
	I1205 20:36:10.211127  310801 start.go:93] Provisioning new machine with config: &{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:36:10.211224  310801 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1205 20:36:10.213644  310801 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 20:36:10.213846  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:36:10.213895  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:36:10.230607  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41335
	I1205 20:36:10.231136  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:36:10.231708  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:36:10.231730  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:36:10.232163  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:36:10.232486  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetMachineName
	I1205 20:36:10.232681  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:10.232898  310801 start.go:159] libmachine.API.Create for "ha-689539" (driver="kvm2")
	I1205 20:36:10.232939  310801 client.go:168] LocalClient.Create starting
	I1205 20:36:10.232979  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem
	I1205 20:36:10.233029  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:36:10.233052  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:36:10.233142  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem
	I1205 20:36:10.233176  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:36:10.233191  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:36:10.233315  310801 main.go:141] libmachine: Running pre-create checks...
	I1205 20:36:10.233332  310801 main.go:141] libmachine: (ha-689539-m03) Calling .PreCreateCheck
	I1205 20:36:10.233549  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetConfigRaw
	I1205 20:36:10.234493  310801 main.go:141] libmachine: Creating machine...
	I1205 20:36:10.234513  310801 main.go:141] libmachine: (ha-689539-m03) Calling .Create
	I1205 20:36:10.234674  310801 main.go:141] libmachine: (ha-689539-m03) Creating KVM machine...
	I1205 20:36:10.236332  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found existing default KVM network
	I1205 20:36:10.236451  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found existing private KVM network mk-ha-689539
	I1205 20:36:10.236656  310801 main.go:141] libmachine: (ha-689539-m03) Setting up store path in /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03 ...
	I1205 20:36:10.236685  310801 main.go:141] libmachine: (ha-689539-m03) Building disk image from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 20:36:10.236729  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:10.236616  311584 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:36:10.236870  310801 main.go:141] libmachine: (ha-689539-m03) Downloading /home/jenkins/minikube-integration/20053-293485/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 20:36:10.551771  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:10.551634  311584 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa...
	I1205 20:36:10.671521  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:10.671352  311584 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/ha-689539-m03.rawdisk...
	I1205 20:36:10.671562  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Writing magic tar header
	I1205 20:36:10.671575  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Writing SSH key tar header
	I1205 20:36:10.671584  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:10.671500  311584 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03 ...
	I1205 20:36:10.671596  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03
	I1205 20:36:10.671680  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03 (perms=drwx------)
	I1205 20:36:10.671707  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines (perms=drwxr-xr-x)
	I1205 20:36:10.671718  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines
	I1205 20:36:10.671731  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:36:10.671740  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485
	I1205 20:36:10.671749  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 20:36:10.671759  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins
	I1205 20:36:10.671770  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home
	I1205 20:36:10.671781  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Skipping /home - not owner
	I1205 20:36:10.671795  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube (perms=drwxr-xr-x)
	I1205 20:36:10.671811  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485 (perms=drwxrwxr-x)
	I1205 20:36:10.671827  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 20:36:10.671837  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 20:36:10.671843  310801 main.go:141] libmachine: (ha-689539-m03) Creating domain...
	I1205 20:36:10.672929  310801 main.go:141] libmachine: (ha-689539-m03) define libvirt domain using xml: 
	I1205 20:36:10.672953  310801 main.go:141] libmachine: (ha-689539-m03) <domain type='kvm'>
	I1205 20:36:10.672970  310801 main.go:141] libmachine: (ha-689539-m03)   <name>ha-689539-m03</name>
	I1205 20:36:10.673070  310801 main.go:141] libmachine: (ha-689539-m03)   <memory unit='MiB'>2200</memory>
	I1205 20:36:10.673100  310801 main.go:141] libmachine: (ha-689539-m03)   <vcpu>2</vcpu>
	I1205 20:36:10.673109  310801 main.go:141] libmachine: (ha-689539-m03)   <features>
	I1205 20:36:10.673135  310801 main.go:141] libmachine: (ha-689539-m03)     <acpi/>
	I1205 20:36:10.673151  310801 main.go:141] libmachine: (ha-689539-m03)     <apic/>
	I1205 20:36:10.673157  310801 main.go:141] libmachine: (ha-689539-m03)     <pae/>
	I1205 20:36:10.673164  310801 main.go:141] libmachine: (ha-689539-m03)     
	I1205 20:36:10.673174  310801 main.go:141] libmachine: (ha-689539-m03)   </features>
	I1205 20:36:10.673181  310801 main.go:141] libmachine: (ha-689539-m03)   <cpu mode='host-passthrough'>
	I1205 20:36:10.673187  310801 main.go:141] libmachine: (ha-689539-m03)   
	I1205 20:36:10.673192  310801 main.go:141] libmachine: (ha-689539-m03)   </cpu>
	I1205 20:36:10.673197  310801 main.go:141] libmachine: (ha-689539-m03)   <os>
	I1205 20:36:10.673201  310801 main.go:141] libmachine: (ha-689539-m03)     <type>hvm</type>
	I1205 20:36:10.673243  310801 main.go:141] libmachine: (ha-689539-m03)     <boot dev='cdrom'/>
	I1205 20:36:10.673298  310801 main.go:141] libmachine: (ha-689539-m03)     <boot dev='hd'/>
	I1205 20:36:10.673335  310801 main.go:141] libmachine: (ha-689539-m03)     <bootmenu enable='no'/>
	I1205 20:36:10.673362  310801 main.go:141] libmachine: (ha-689539-m03)   </os>
	I1205 20:36:10.673384  310801 main.go:141] libmachine: (ha-689539-m03)   <devices>
	I1205 20:36:10.673401  310801 main.go:141] libmachine: (ha-689539-m03)     <disk type='file' device='cdrom'>
	I1205 20:36:10.673424  310801 main.go:141] libmachine: (ha-689539-m03)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/boot2docker.iso'/>
	I1205 20:36:10.673445  310801 main.go:141] libmachine: (ha-689539-m03)       <target dev='hdc' bus='scsi'/>
	I1205 20:36:10.673458  310801 main.go:141] libmachine: (ha-689539-m03)       <readonly/>
	I1205 20:36:10.673469  310801 main.go:141] libmachine: (ha-689539-m03)     </disk>
	I1205 20:36:10.673485  310801 main.go:141] libmachine: (ha-689539-m03)     <disk type='file' device='disk'>
	I1205 20:36:10.673499  310801 main.go:141] libmachine: (ha-689539-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 20:36:10.673516  310801 main.go:141] libmachine: (ha-689539-m03)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/ha-689539-m03.rawdisk'/>
	I1205 20:36:10.673532  310801 main.go:141] libmachine: (ha-689539-m03)       <target dev='hda' bus='virtio'/>
	I1205 20:36:10.673544  310801 main.go:141] libmachine: (ha-689539-m03)     </disk>
	I1205 20:36:10.673556  310801 main.go:141] libmachine: (ha-689539-m03)     <interface type='network'>
	I1205 20:36:10.673569  310801 main.go:141] libmachine: (ha-689539-m03)       <source network='mk-ha-689539'/>
	I1205 20:36:10.673579  310801 main.go:141] libmachine: (ha-689539-m03)       <model type='virtio'/>
	I1205 20:36:10.673592  310801 main.go:141] libmachine: (ha-689539-m03)     </interface>
	I1205 20:36:10.673600  310801 main.go:141] libmachine: (ha-689539-m03)     <interface type='network'>
	I1205 20:36:10.673612  310801 main.go:141] libmachine: (ha-689539-m03)       <source network='default'/>
	I1205 20:36:10.673625  310801 main.go:141] libmachine: (ha-689539-m03)       <model type='virtio'/>
	I1205 20:36:10.673635  310801 main.go:141] libmachine: (ha-689539-m03)     </interface>
	I1205 20:36:10.673648  310801 main.go:141] libmachine: (ha-689539-m03)     <serial type='pty'>
	I1205 20:36:10.673660  310801 main.go:141] libmachine: (ha-689539-m03)       <target port='0'/>
	I1205 20:36:10.673672  310801 main.go:141] libmachine: (ha-689539-m03)     </serial>
	I1205 20:36:10.673682  310801 main.go:141] libmachine: (ha-689539-m03)     <console type='pty'>
	I1205 20:36:10.673695  310801 main.go:141] libmachine: (ha-689539-m03)       <target type='serial' port='0'/>
	I1205 20:36:10.673711  310801 main.go:141] libmachine: (ha-689539-m03)     </console>
	I1205 20:36:10.673724  310801 main.go:141] libmachine: (ha-689539-m03)     <rng model='virtio'>
	I1205 20:36:10.673736  310801 main.go:141] libmachine: (ha-689539-m03)       <backend model='random'>/dev/random</backend>
	I1205 20:36:10.673747  310801 main.go:141] libmachine: (ha-689539-m03)     </rng>
	I1205 20:36:10.673756  310801 main.go:141] libmachine: (ha-689539-m03)     
	I1205 20:36:10.673766  310801 main.go:141] libmachine: (ha-689539-m03)     
	I1205 20:36:10.673776  310801 main.go:141] libmachine: (ha-689539-m03)   </devices>
	I1205 20:36:10.673790  310801 main.go:141] libmachine: (ha-689539-m03) </domain>
	I1205 20:36:10.673800  310801 main.go:141] libmachine: (ha-689539-m03) 
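	[editor note] The XML logged above is the complete libvirt domain definition for the new node. As a rough illustration of the same steps, the sketch below defines and starts a domain from such an XML file using the libvirt.org/go/libvirt bindings (cgo-based); it is not the kvm2 driver's actual code, and the XML file name is hypothetical.

```go
// Illustrative sketch, assuming the libvirt.org/go/libvirt bindings:
// define a domain from an XML description and boot it, roughly the
// "define libvirt domain using xml" / "Creating domain..." steps above.
package main

import (
	"fmt"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	xml, err := os.ReadFile("ha-689539-m03.xml") // domain XML like the one in the log
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI above
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // persistently define the domain
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boot the defined domain
		panic(err)
	}
	name, _ := dom.GetName()
	fmt.Println("started domain:", name)
}
```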
	I1205 20:36:10.681042  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:ee:34:51 in network default
	I1205 20:36:10.681639  310801 main.go:141] libmachine: (ha-689539-m03) Ensuring networks are active...
	I1205 20:36:10.681669  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:10.682561  310801 main.go:141] libmachine: (ha-689539-m03) Ensuring network default is active
	I1205 20:36:10.682898  310801 main.go:141] libmachine: (ha-689539-m03) Ensuring network mk-ha-689539 is active
	I1205 20:36:10.683183  310801 main.go:141] libmachine: (ha-689539-m03) Getting domain xml...
	I1205 20:36:10.684006  310801 main.go:141] libmachine: (ha-689539-m03) Creating domain...
	I1205 20:36:11.968725  310801 main.go:141] libmachine: (ha-689539-m03) Waiting to get IP...
	I1205 20:36:11.969610  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:11.970152  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:11.970185  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:11.970125  311584 retry.go:31] will retry after 234.218675ms: waiting for machine to come up
	I1205 20:36:12.205669  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:12.206261  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:12.206294  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:12.206205  311584 retry.go:31] will retry after 248.695417ms: waiting for machine to come up
	I1205 20:36:12.456801  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:12.457402  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:12.457438  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:12.457352  311584 retry.go:31] will retry after 446.513744ms: waiting for machine to come up
	I1205 20:36:12.906122  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:12.906634  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:12.906661  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:12.906574  311584 retry.go:31] will retry after 535.02916ms: waiting for machine to come up
	I1205 20:36:13.443469  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:13.443918  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:13.443943  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:13.443872  311584 retry.go:31] will retry after 557.418366ms: waiting for machine to come up
	I1205 20:36:14.002733  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:14.003294  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:14.003322  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:14.003249  311584 retry.go:31] will retry after 653.304587ms: waiting for machine to come up
	I1205 20:36:14.658664  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:14.659072  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:14.659104  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:14.659017  311584 retry.go:31] will retry after 755.842871ms: waiting for machine to come up
	I1205 20:36:15.416424  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:15.416833  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:15.416859  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:15.416766  311584 retry.go:31] will retry after 1.249096202s: waiting for machine to come up
	I1205 20:36:16.666996  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:16.667456  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:16.667487  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:16.667406  311584 retry.go:31] will retry after 1.829752255s: waiting for machine to come up
	I1205 20:36:18.499154  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:18.499722  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:18.499754  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:18.499656  311584 retry.go:31] will retry after 2.088301292s: waiting for machine to come up
	I1205 20:36:20.590033  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:20.590599  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:20.590952  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:20.590835  311584 retry.go:31] will retry after 2.856395806s: waiting for machine to come up
	I1205 20:36:23.448567  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:23.449151  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:23.449196  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:23.449071  311584 retry.go:31] will retry after 2.566118647s: waiting for machine to come up
	I1205 20:36:26.016596  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:26.017066  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:26.017103  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:26.017002  311584 retry.go:31] will retry after 3.311993098s: waiting for machine to come up
	I1205 20:36:29.332519  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:29.333028  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:29.333062  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:29.332969  311584 retry.go:31] will retry after 5.069674559s: waiting for machine to come up
	I1205 20:36:34.404036  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.404592  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has current primary IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.404615  310801 main.go:141] libmachine: (ha-689539-m03) Found IP for machine: 192.168.39.133
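	[editor note] The repeated "will retry after ..." lines above come from a generic retry helper that polls with growing, jittered delays until the DHCP lease appears. A simplified, self-contained sketch of that pattern follows; it is not the actual retry.go implementation, and the delays are illustrative.

```go
// Simplified sketch of the retry-with-growing-delay pattern behind the
// "will retry after ..." lines above (not minikube's actual retry.go).
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil keeps calling fn with an increasing, jittered delay until it
// succeeds or the overall timeout elapses.
func retryUntil(fn func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 1; ; attempt++ {
		if err := fn(); err == nil {
			return nil
		} else if time.Now().After(deadline) {
			return fmt.Errorf("gave up after %d attempts: %w", attempt, err)
		} else {
			wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
			fmt.Printf("will retry after %s: %v\n", wait, err)
			time.Sleep(wait)
			delay *= 2 // back off
		}
	}
}

func main() {
	start := time.Now()
	err := retryUntil(func() error {
		// Stand-in for "unable to find current IP address of domain ...".
		if time.Since(start) < 3*time.Second {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, time.Minute)
	fmt.Println("result:", err)
}
```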
	I1205 20:36:34.404628  310801 main.go:141] libmachine: (ha-689539-m03) Reserving static IP address...
	I1205 20:36:34.405246  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find host DHCP lease matching {name: "ha-689539-m03", mac: "52:54:00:39:1e:d2", ip: "192.168.39.133"} in network mk-ha-689539
	I1205 20:36:34.488202  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Getting to WaitForSSH function...
	I1205 20:36:34.488243  310801 main.go:141] libmachine: (ha-689539-m03) Reserved static IP address: 192.168.39.133
	I1205 20:36:34.488263  310801 main.go:141] libmachine: (ha-689539-m03) Waiting for SSH to be available...
	I1205 20:36:34.491165  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.491686  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:minikube Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:34.491716  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.491906  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Using SSH client type: external
	I1205 20:36:34.491935  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa (-rw-------)
	I1205 20:36:34.491973  310801 main.go:141] libmachine: (ha-689539-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.133 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:36:34.491988  310801 main.go:141] libmachine: (ha-689539-m03) DBG | About to run SSH command:
	I1205 20:36:34.492018  310801 main.go:141] libmachine: (ha-689539-m03) DBG | exit 0
	I1205 20:36:34.613832  310801 main.go:141] libmachine: (ha-689539-m03) DBG | SSH cmd err, output: <nil>: 
	I1205 20:36:34.614085  310801 main.go:141] libmachine: (ha-689539-m03) KVM machine creation complete!
	I1205 20:36:34.614391  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetConfigRaw
	I1205 20:36:34.614932  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:34.615098  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:34.615251  310801 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 20:36:34.615261  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetState
	I1205 20:36:34.616613  310801 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 20:36:34.616630  310801 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 20:36:34.616635  310801 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 20:36:34.616641  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:34.618898  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.619343  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:34.619376  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.619553  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:34.619760  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.619916  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.620049  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:34.620212  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:34.620459  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:34.620479  310801 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 20:36:34.717073  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:36:34.717099  310801 main.go:141] libmachine: Detecting the provisioner...
	I1205 20:36:34.717108  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:34.720008  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.720375  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:34.720408  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.720627  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:34.720862  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.721027  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.721142  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:34.721315  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:34.721505  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:34.721517  310801 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 20:36:34.822906  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 20:36:34.822984  310801 main.go:141] libmachine: found compatible host: buildroot
	I1205 20:36:34.822991  310801 main.go:141] libmachine: Provisioning with buildroot...
	I1205 20:36:34.823000  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetMachineName
	I1205 20:36:34.823269  310801 buildroot.go:166] provisioning hostname "ha-689539-m03"
	I1205 20:36:34.823307  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetMachineName
	I1205 20:36:34.823547  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:34.826120  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.826479  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:34.826516  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.826688  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:34.826881  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.827029  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.827117  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:34.827324  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:34.827499  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:34.827512  310801 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-689539-m03 && echo "ha-689539-m03" | sudo tee /etc/hostname
	I1205 20:36:34.941581  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-689539-m03
	
	I1205 20:36:34.941620  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:34.944840  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.945236  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:34.945268  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.945576  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:34.945808  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.946090  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.946279  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:34.946488  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:34.946701  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:34.946720  310801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-689539-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-689539-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-689539-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:36:35.058548  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:36:35.058600  310801 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 20:36:35.058628  310801 buildroot.go:174] setting up certificates
	I1205 20:36:35.058647  310801 provision.go:84] configureAuth start
	I1205 20:36:35.058666  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetMachineName
	I1205 20:36:35.059012  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetIP
	I1205 20:36:35.062020  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.062410  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.062436  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.062601  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.064649  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.065013  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.065056  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.065157  310801 provision.go:143] copyHostCerts
	I1205 20:36:35.065216  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:36:35.065250  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 20:36:35.065260  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:36:35.065330  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 20:36:35.065453  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:36:35.065483  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 20:36:35.065487  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:36:35.065514  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 20:36:35.065573  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:36:35.065599  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 20:36:35.065606  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:36:35.065628  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 20:36:35.065689  310801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.ha-689539-m03 san=[127.0.0.1 192.168.39.133 ha-689539-m03 localhost minikube]
	I1205 20:36:35.249027  310801 provision.go:177] copyRemoteCerts
	I1205 20:36:35.249088  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:36:35.249117  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.252102  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.252464  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.252504  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.252651  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.252859  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.253052  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.253206  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa Username:docker}
	I1205 20:36:35.336527  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 20:36:35.336648  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 20:36:35.364926  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 20:36:35.365010  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 20:36:35.389088  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 20:36:35.389182  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:36:35.413330  310801 provision.go:87] duration metric: took 354.660436ms to configureAuth
	I1205 20:36:35.413369  310801 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:36:35.413628  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:36:35.413732  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.416617  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.417048  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.417083  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.417297  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.417511  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.417670  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.417805  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.417979  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:35.418155  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:35.418171  310801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:36:35.630886  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:36:35.630926  310801 main.go:141] libmachine: Checking connection to Docker...
	I1205 20:36:35.630937  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetURL
	I1205 20:36:35.632212  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Using libvirt version 6000000
	I1205 20:36:35.634750  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.635203  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.635240  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.635427  310801 main.go:141] libmachine: Docker is up and running!
	I1205 20:36:35.635448  310801 main.go:141] libmachine: Reticulating splines...
	I1205 20:36:35.635459  310801 client.go:171] duration metric: took 25.402508958s to LocalClient.Create
	I1205 20:36:35.635491  310801 start.go:167] duration metric: took 25.402598488s to libmachine.API.Create "ha-689539"
	I1205 20:36:35.635506  310801 start.go:293] postStartSetup for "ha-689539-m03" (driver="kvm2")
	I1205 20:36:35.635522  310801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:36:35.635550  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:35.635824  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:36:35.635854  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.638327  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.638682  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.638711  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.638841  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.639048  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.639222  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.639398  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa Username:docker}
	I1205 20:36:35.716587  310801 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:36:35.720718  310801 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:36:35.720755  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 20:36:35.720843  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 20:36:35.720950  310801 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 20:36:35.720963  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /etc/ssl/certs/3007652.pem
	I1205 20:36:35.721055  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:36:35.730580  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:36:35.754106  310801 start.go:296] duration metric: took 118.58052ms for postStartSetup
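
The filesync scan above is what turns anything under .minikube/files into the matching path inside the guest: files/etc/ssl/certs/3007652.pem becomes /etc/ssl/certs/3007652.pem before being scp'd over. A minimal, local-only Go sketch of that mapping, assuming only the root path taken from this log (the actual copy over SSH is elided):

    package main

    import (
    	"fmt"
    	"io/fs"
    	"path/filepath"
    )

    func main() {
    	// root comes from the log; everything below it maps to "/" in the guest.
    	root := "/home/jenkins/minikube-integration/20053-293485/.minikube/files"
    	_ = filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
    		if err != nil || d.IsDir() {
    			return err
    		}
    		rel, _ := filepath.Rel(root, path)
    		fmt.Printf("%s -> /%s\n", path, rel) // e.g. .../files/etc/ssl/certs/3007652.pem -> /etc/ssl/certs/3007652.pem
    		return nil
    	})
    }
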
	I1205 20:36:35.754171  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetConfigRaw
	I1205 20:36:35.754838  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetIP
	I1205 20:36:35.757466  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.757836  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.757867  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.758185  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:36:35.758409  310801 start.go:128] duration metric: took 25.547174356s to createHost
	I1205 20:36:35.758437  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.760535  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.760919  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.760950  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.761090  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.761312  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.761499  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.761662  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.761847  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:35.762082  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:35.762095  310801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:36:35.859212  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430995.835523026
	
	I1205 20:36:35.859238  310801 fix.go:216] guest clock: 1733430995.835523026
	I1205 20:36:35.859249  310801 fix.go:229] Guest: 2024-12-05 20:36:35.835523026 +0000 UTC Remote: 2024-12-05 20:36:35.758424054 +0000 UTC m=+147.726301003 (delta=77.098972ms)
	I1205 20:36:35.859274  310801 fix.go:200] guest clock delta is within tolerance: 77.098972ms
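
The fix.go lines compare the guest's `date +%s.%N` output against the host-side timestamp recorded just before, and only a skew beyond a tolerance would trigger a clock resync. A small sketch of that comparison using the two timestamps from this log; the 2s tolerance here is an assumed value for illustration, not minikube's actual constant:

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // parseEpoch turns `date +%s.%N` style output into a time.Time.
    func parseEpoch(s string) (time.Time, error) {
    	secs, err := strconv.ParseFloat(s, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	return time.Unix(0, int64(secs*float64(time.Second))), nil
    }

    func main() {
    	const tolerance = 2 * time.Second // assumed threshold, illustration only

    	guest, _ := parseEpoch("1733430995.835523026")  // guest clock from the log
    	remote, _ := parseEpoch("1733430995.758424054") // host-side timestamp from the log
    	delta := guest.Sub(remote)
    	if delta < -tolerance || delta > tolerance {
    		fmt.Printf("delta %v outside tolerance, clock would be resynced\n", delta)
    	} else {
    		fmt.Printf("delta %v within tolerance\n", delta)
    	}
    }
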
	I1205 20:36:35.859282  310801 start.go:83] releasing machines lock for "ha-689539-m03", held for 25.648163663s
	I1205 20:36:35.859307  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:35.859602  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetIP
	I1205 20:36:35.862387  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.862741  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.862765  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.864694  310801 out.go:177] * Found network options:
	I1205 20:36:35.865935  310801 out.go:177]   - NO_PROXY=192.168.39.220,192.168.39.224
	W1205 20:36:35.866955  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 20:36:35.866981  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:36:35.867029  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:35.867701  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:35.867901  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:35.868027  310801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:36:35.868079  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	W1205 20:36:35.868103  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 20:36:35.868132  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:36:35.868211  310801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:36:35.868237  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.870846  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.870889  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.871236  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.871267  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.871290  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.871306  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.871412  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.871420  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.871631  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.871634  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.871849  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.871887  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.872025  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa Username:docker}
	I1205 20:36:35.872048  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa Username:docker}
	I1205 20:36:36.107172  310801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:36:36.113768  310801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:36:36.113852  310801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:36:36.130072  310801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:36:36.130105  310801 start.go:495] detecting cgroup driver to use...
	I1205 20:36:36.130199  310801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:36:36.146210  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:36:36.161285  310801 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:36:36.161367  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:36:36.177064  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:36:36.191545  310801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:36:36.311400  310801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:36:36.466588  310801 docker.go:233] disabling docker service ...
	I1205 20:36:36.466685  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:36:36.482756  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:36:36.496706  310801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:36:36.652172  310801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:36:36.763760  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:36:36.778126  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:36:36.798464  310801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:36:36.798550  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.809701  310801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:36:36.809789  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.821480  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.833057  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.844011  310801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:36:36.855643  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.866916  310801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.884661  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
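
Taken together, the sed edits above amount to a small CRI-O drop-in: the pause image and cgroup manager are pinned, conmon is moved into the pod cgroup, and unprivileged low ports are allowed via default_sysctls; the /etc/crictl.yaml written just before simply points crictl at the same CRI-O socket. After these commands, /etc/crio/crio.conf.d/02-crio.conf should hold values roughly like the fragment below (section headers are shown only for orientation and are an assumption; the rest of the file is whatever the base image ships):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
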
	I1205 20:36:36.895900  310801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:36:36.907780  310801 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:36:36.907872  310801 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:36:36.923847  310801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
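
The sequence above is a probe-then-fallback: net.bridge.bridge-nf-call-iptables only exists once br_netfilter is loaded, so the failed sysctl triggers a modprobe, after which IPv4 forwarding is switched on. A hedged Go sketch of the same steps (the helper name is illustrative; minikube runs these over SSH rather than locally, and root is required):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func enableBridgeNetfilter() error {
    	if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		// /proc/sys/net/bridge/* is absent until the module is loaded.
    		if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
    			return fmt.Errorf("loading br_netfilter: %w", err)
    		}
    	}
    	// equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
    	return os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644)
    }

    func main() {
    	if err := enableBridgeNetfilter(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
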
	I1205 20:36:36.935618  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:36:37.050068  310801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:36:37.145134  310801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:36:37.145210  310801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:36:37.149942  310801 start.go:563] Will wait 60s for crictl version
	I1205 20:36:37.150018  310801 ssh_runner.go:195] Run: which crictl
	I1205 20:36:37.153774  310801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:36:37.191365  310801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:36:37.191476  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:36:37.218944  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:36:37.247248  310801 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:36:37.248847  310801 out.go:177]   - env NO_PROXY=192.168.39.220
	I1205 20:36:37.250408  310801 out.go:177]   - env NO_PROXY=192.168.39.220,192.168.39.224
	I1205 20:36:37.251670  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetIP
	I1205 20:36:37.254710  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:37.255219  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:37.255255  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:37.255473  310801 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:36:37.259811  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
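
These two commands implement an idempotent /etc/hosts update: check for the host.minikube.internal entry, and if it is missing, rewrite the file with any stale entry filtered out and the current mapping appended. A Go sketch of the same pattern (the helper name is illustrative; minikube performs this through the shell pipeline shown above):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // pinHostsEntry drops any existing line ending in "\t<name>" and appends
    // "<ip>\t<name>", mirroring the grep -v / echo pipeline in the log.
    func pinHostsEntry(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
    	if err := pinHostsEntry("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
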
	I1205 20:36:37.272313  310801 mustload.go:65] Loading cluster: ha-689539
	I1205 20:36:37.272621  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:36:37.272965  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:36:37.273029  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:36:37.288738  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35053
	I1205 20:36:37.289258  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:36:37.289795  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:36:37.289824  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:36:37.290243  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:36:37.290461  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:36:37.292309  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:36:37.292619  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:36:37.292658  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:36:37.308415  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34715
	I1205 20:36:37.308950  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:36:37.309550  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:36:37.309579  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:36:37.309955  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:36:37.310189  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:36:37.310389  310801 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539 for IP: 192.168.39.133
	I1205 20:36:37.310408  310801 certs.go:194] generating shared ca certs ...
	I1205 20:36:37.310434  310801 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:36:37.310698  310801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 20:36:37.310756  310801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 20:36:37.310770  310801 certs.go:256] generating profile certs ...
	I1205 20:36:37.310865  310801 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key
	I1205 20:36:37.310896  310801 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.5ed8c3bf
	I1205 20:36:37.310913  310801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.5ed8c3bf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220 192.168.39.224 192.168.39.133 192.168.39.254]
	I1205 20:36:37.437144  310801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.5ed8c3bf ...
	I1205 20:36:37.437188  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.5ed8c3bf: {Name:mk0c5897cd83a4093b7a3399e7e587e00b7a5bae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:36:37.437391  310801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.5ed8c3bf ...
	I1205 20:36:37.437408  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.5ed8c3bf: {Name:mk1d8d484e615bf29a9b64d40295dea265ce443e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:36:37.437485  310801 certs.go:381] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.5ed8c3bf -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt
	I1205 20:36:37.437626  310801 certs.go:385] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.5ed8c3bf -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key
	I1205 20:36:37.437756  310801 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key
	I1205 20:36:37.437772  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:36:37.437788  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:36:37.437801  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:36:37.437813  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:36:37.437826  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 20:36:37.437841  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 20:36:37.437853  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 20:36:37.437864  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 20:36:37.437944  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 20:36:37.437979  310801 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 20:36:37.437990  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:36:37.438014  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 20:36:37.438035  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:36:37.438056  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 20:36:37.438094  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:36:37.438120  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /usr/share/ca-certificates/3007652.pem
	I1205 20:36:37.438137  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:36:37.438154  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem -> /usr/share/ca-certificates/300765.pem
	I1205 20:36:37.438200  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:36:37.441695  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:36:37.442183  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:36:37.442215  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:36:37.442405  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:36:37.442622  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:36:37.442798  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:36:37.443004  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:36:37.518292  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1205 20:36:37.523367  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1205 20:36:37.534644  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1205 20:36:37.538903  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1205 20:36:37.550288  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1205 20:36:37.554639  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1205 20:36:37.564857  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1205 20:36:37.569390  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1205 20:36:37.579805  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1205 20:36:37.583826  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1205 20:36:37.594623  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1205 20:36:37.598518  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1205 20:36:37.609622  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:36:37.635232  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:36:37.659198  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:36:37.684613  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:36:37.709156  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1205 20:36:37.734432  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:36:37.759134  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:36:37.782683  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:36:37.806069  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 20:36:37.829365  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:36:37.854671  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 20:36:37.877683  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1205 20:36:37.895648  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1205 20:36:37.911843  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1205 20:36:37.928819  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1205 20:36:37.945608  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1205 20:36:37.961295  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1205 20:36:37.977148  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1205 20:36:37.993888  310801 ssh_runner.go:195] Run: openssl version
	I1205 20:36:37.999493  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 20:36:38.010566  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 20:36:38.014911  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 20:36:38.014995  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 20:36:38.021306  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:36:38.033265  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:36:38.045021  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:36:38.049577  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:36:38.049655  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:36:38.055689  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:36:38.066840  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 20:36:38.077747  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 20:36:38.082720  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 20:36:38.082788  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 20:36:38.088581  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
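
The test -L / ln -fs commands above install each CA in the form OpenSSL's hash-based lookup expects: the link name (b5213941.0, 3ec20f2e.0, 51391683.0) is the certificate's subject hash plus a .0 suffix. A sketch of that step, shelling out to openssl exactly as the log does (paths are illustrative and root is required):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func installCA(certPath string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		return fmt.Errorf("hashing %s: %w", certPath, err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // mirror `ln -fs`: replace an existing link
    	return os.Symlink(certPath, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    }
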
	I1205 20:36:38.099228  310801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:36:38.103604  310801 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 20:36:38.103672  310801 kubeadm.go:934] updating node {m03 192.168.39.133 8443 v1.31.2 crio true true} ...
	I1205 20:36:38.103798  310801 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-689539-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.133
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:36:38.103838  310801 kube-vip.go:115] generating kube-vip config ...
	I1205 20:36:38.103889  310801 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 20:36:38.119642  310801 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 20:36:38.119740  310801 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
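
The modprobe of the ip_vs modules just before this manifest is what the "auto-enabling control-plane load-balancing" line refers to: when IPVS is available, the generated kube-vip static pod gets lb_enable/lb_port set in addition to the VIP address. A small sketch of that decision (the helper name, and the map standing in for the env list, are illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ipvsAvailable mirrors the probe in the log: if the IPVS modules load,
    // control-plane load-balancing can be enabled in the kube-vip config.
    func ipvsAvailable() bool {
    	err := exec.Command("modprobe", "--all",
    		"ip_vs", "ip_vs_rr", "ip_vs_wrr", "ip_vs_sh", "nf_conntrack").Run()
    	return err == nil
    }

    func main() {
    	env := map[string]string{"address": "192.168.39.254", "port": "8443"}
    	if ipvsAvailable() {
    		env["lb_enable"] = "true"
    		env["lb_port"] = "8443"
    	}
    	fmt.Println(env)
    }
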
	I1205 20:36:38.119812  310801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:36:38.130177  310801 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1205 20:36:38.130245  310801 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1205 20:36:38.140746  310801 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1205 20:36:38.140746  310801 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1205 20:36:38.140783  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 20:36:38.140794  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 20:36:38.140777  310801 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1205 20:36:38.140857  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 20:36:38.140859  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 20:36:38.140888  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:36:38.158074  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 20:36:38.158135  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1205 20:36:38.158086  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1205 20:36:38.158177  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1205 20:36:38.158206  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1205 20:36:38.158247  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 20:36:38.186188  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1205 20:36:38.186252  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
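
The kubeadm/kubectl/kubelet URLs above carry a checksum=file: suffix, meaning each binary is verified against the published .sha256 file sitting next to it on dl.k8s.io. A stdlib-only sketch of that verification idea; this illustrates the check, it is not minikube's actual download code:

    package main

    import (
    	"crypto/sha256"
    	"encoding/hex"
    	"fmt"
    	"io"
    	"net/http"
    	"strings"
    )

    // sha256OfURL streams the response body through SHA-256.
    func sha256OfURL(url string) (string, error) {
    	resp, err := http.Get(url)
    	if err != nil {
    		return "", err
    	}
    	defer resp.Body.Close()
    	h := sha256.New()
    	if _, err := io.Copy(h, resp.Body); err != nil {
    		return "", err
    	}
    	return hex.EncodeToString(h.Sum(nil)), nil
    }

    func main() {
    	base := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet"
    	got, err := sha256OfURL(base)
    	if err != nil {
    		panic(err)
    	}
    	resp, err := http.Get(base + ".sha256") // published checksum file
    	if err != nil {
    		panic(err)
    	}
    	defer resp.Body.Close()
    	want, _ := io.ReadAll(resp.Body)
    	if got != strings.TrimSpace(string(want)) {
    		panic("checksum mismatch")
    	}
    	fmt.Println("kubelet checksum OK:", got)
    }
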
	I1205 20:36:39.060124  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1205 20:36:39.071107  310801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1205 20:36:39.088307  310801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:36:39.105414  310801 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 20:36:39.123515  310801 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 20:36:39.128382  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:36:39.141817  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:36:39.272056  310801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:36:39.288864  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:36:39.289220  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:36:39.289280  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:36:39.306323  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I1205 20:36:39.306810  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:36:39.307385  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:36:39.307405  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:36:39.307730  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:36:39.308000  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:36:39.308176  310801 start.go:317] joinCluster: &{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:36:39.308320  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1205 20:36:39.308347  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:36:39.311767  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:36:39.312246  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:36:39.312274  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:36:39.312449  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:36:39.312636  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:36:39.312767  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:36:39.312941  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:36:39.465515  310801 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:36:39.465587  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1ecy7b.k9yq24j2shqxopt1 --discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-689539-m03 --control-plane --apiserver-advertise-address=192.168.39.133 --apiserver-bind-port=8443"
	I1205 20:37:01.441014  310801 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1ecy7b.k9yq24j2shqxopt1 --discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-689539-m03 --control-plane --apiserver-advertise-address=192.168.39.133 --apiserver-bind-port=8443": (21.975379722s)
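
The join invocation above is the output of `kubeadm token create --print-join-command` on the primary, extended with the node-specific flags that make this a control-plane join: the CRI socket, the node name, --control-plane, and this machine's advertise address and port. A trivial sketch of that assembly, with the token and CA hash shown as placeholders rather than the real values:

    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	// What --print-join-command returns (placeholders instead of real secrets).
    	printed := "kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"
    	// Flags appended for this particular control-plane node.
    	extra := []string{
    		"--ignore-preflight-errors=all",
    		"--cri-socket unix:///var/run/crio/crio.sock",
    		"--node-name=ha-689539-m03",
    		"--control-plane",
    		"--apiserver-advertise-address=192.168.39.133",
    		"--apiserver-bind-port=8443",
    	}
    	fmt.Println(printed + " " + strings.Join(extra, " "))
    }
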
	I1205 20:37:01.441134  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1205 20:37:02.017063  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-689539-m03 minikube.k8s.io/updated_at=2024_12_05T20_37_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=ha-689539 minikube.k8s.io/primary=false
	I1205 20:37:02.122818  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-689539-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1205 20:37:02.233408  310801 start.go:319] duration metric: took 22.92521337s to joinCluster
	I1205 20:37:02.233514  310801 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:37:02.233929  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:37:02.235271  310801 out.go:177] * Verifying Kubernetes components...
	I1205 20:37:02.236630  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:37:02.508423  310801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:37:02.527064  310801 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:37:02.527473  310801 kapi.go:59] client config for ha-689539: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt", KeyFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key", CAFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1205 20:37:02.527594  310801 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.220:8443
	I1205 20:37:02.527913  310801 node_ready.go:35] waiting up to 6m0s for node "ha-689539-m03" to be "Ready" ...
	I1205 20:37:02.528026  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:02.528040  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:02.528051  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:02.528056  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:02.557537  310801 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I1205 20:37:03.028186  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:03.028214  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:03.028223  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:03.028228  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:03.031783  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:03.528844  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:03.528876  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:03.528889  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:03.528897  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:03.532449  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:04.028344  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:04.028374  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:04.028385  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:04.028391  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:04.031602  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:04.528319  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:04.528352  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:04.528375  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:04.528382  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:04.532891  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:04.534060  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
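
The round_trippers lines are a simple readiness poll: GET the node object every half second and wait (up to 6m here) for its Ready condition to turn True. A client-go sketch of the same loop, assuming an illustrative kubeconfig path rather than the one minikube loads:

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady reports whether the node's Ready condition is True.
    func nodeReady(n *corev1.Node) bool {
    	for _, c := range n.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute)
    	for time.Now().Before(deadline) {
    		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-689539-m03", metav1.GetOptions{})
    		if err == nil && nodeReady(n) {
    			fmt.Println("node is Ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for node to become Ready")
    }
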
	I1205 20:37:05.028293  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:05.028328  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:05.028339  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:05.028344  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:05.032338  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:05.529271  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:05.529311  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:05.529323  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:05.529330  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:05.533411  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:06.028510  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:06.028536  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:06.028545  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:06.028550  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:06.032362  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:06.529188  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:06.529215  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:06.529224  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:06.529229  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:06.533150  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:07.029082  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:07.029108  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:07.029117  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:07.029120  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:07.033089  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:07.033768  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:07.528440  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:07.528471  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:07.528481  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:07.528485  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:07.531953  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:08.028337  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:08.028382  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:08.028395  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:08.028399  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:08.031906  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:08.528836  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:08.528864  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:08.528876  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:08.528881  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:08.532443  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:09.028243  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:09.028270  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:09.028278  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:09.028286  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:09.031717  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:09.528911  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:09.528939  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:09.528948  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:09.528953  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:09.532309  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:09.532990  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:10.028349  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:10.028377  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:10.028386  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:10.028390  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:10.031930  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:10.528611  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:10.528635  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:10.528645  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:10.528650  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:10.532023  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:11.028888  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:11.028914  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:11.028923  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:11.028928  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:11.032482  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:11.528496  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:11.528521  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:11.528530  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:11.528534  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:11.532719  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:11.533217  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:12.028518  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:12.028550  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:12.028559  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:12.028562  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:12.031616  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:12.528837  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:12.528864  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:12.528873  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:12.528876  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:12.532925  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:13.028348  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:13.028374  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:13.028382  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:13.028385  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:13.031413  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:13.528247  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:13.528272  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:13.528282  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:13.528289  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:13.531837  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:14.028958  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:14.028983  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:14.028991  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:14.028994  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:14.032387  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:14.032980  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:14.528243  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:14.528268  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:14.528276  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:14.528281  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:14.533135  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:15.029156  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:15.029181  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:15.029190  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:15.029194  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:15.032772  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:15.528703  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:15.528727  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:15.528736  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:15.528740  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:15.532084  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:16.029136  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:16.029163  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:16.029172  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:16.029177  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:16.032419  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:16.033160  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:16.528509  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:16.528535  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:16.528546  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:16.528553  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:16.532163  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:17.028228  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:17.028256  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:17.028265  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:17.028270  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:17.031611  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:17.528262  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:17.528285  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:17.528294  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:17.528298  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:17.532186  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:18.028484  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:18.028590  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:18.028610  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:18.028619  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:18.032661  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:18.033298  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:18.528576  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:18.528603  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:18.528612  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:18.528622  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:18.531605  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.028544  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:19.028570  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.028579  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.028583  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.031945  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:19.528716  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:19.528741  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.528752  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.528758  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.532114  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:19.532722  310801 node_ready.go:49] node "ha-689539-m03" has status "Ready":"True"
	I1205 20:37:19.532746  310801 node_ready.go:38] duration metric: took 17.004806597s for node "ha-689539-m03" to be "Ready" ...
	I1205 20:37:19.532759  310801 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:37:19.532848  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:37:19.532862  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.532873  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.532877  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.538433  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:37:19.545193  310801 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.545310  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4ln9l
	I1205 20:37:19.545322  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.545335  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.545343  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.548548  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:19.549181  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:19.549197  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.549208  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.549214  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.551745  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.552315  310801 pod_ready.go:93] pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:19.552336  310801 pod_ready.go:82] duration metric: took 7.114081ms for pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.552347  310801 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.552426  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6qhhf
	I1205 20:37:19.552436  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.552443  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.552449  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.555044  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.555688  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:19.555703  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.555714  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.555719  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.558507  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.558964  310801 pod_ready.go:93] pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:19.558984  310801 pod_ready.go:82] duration metric: took 6.630508ms for pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.558996  310801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.559064  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539
	I1205 20:37:19.559075  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.559086  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.559093  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.561702  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.562346  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:19.562362  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.562373  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.562379  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.564859  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.565270  310801 pod_ready.go:93] pod "etcd-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:19.565289  310801 pod_ready.go:82] duration metric: took 6.285995ms for pod "etcd-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.565301  310801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.565364  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539-m02
	I1205 20:37:19.565376  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.565386  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.565394  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.567843  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.568351  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:19.568369  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.568381  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.568386  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.570730  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.571216  310801 pod_ready.go:93] pod "etcd-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:19.571233  310801 pod_ready.go:82] duration metric: took 5.925226ms for pod "etcd-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.571242  310801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.729689  310801 request.go:632] Waited for 158.375356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539-m03
	I1205 20:37:19.729775  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539-m03
	I1205 20:37:19.729781  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.729791  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.729798  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.733549  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:19.929796  310801 request.go:632] Waited for 195.378991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:19.929883  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:19.929889  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.929915  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.929920  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.933398  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:19.934088  310801 pod_ready.go:93] pod "etcd-ha-689539-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:19.934113  310801 pod_ready.go:82] duration metric: took 362.864968ms for pod "etcd-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.934133  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:20.129093  310801 request.go:632] Waited for 194.866664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539
	I1205 20:37:20.129174  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539
	I1205 20:37:20.129180  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:20.129188  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:20.129192  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:20.132632  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:20.329356  310801 request.go:632] Waited for 195.935231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:20.329441  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:20.329451  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:20.329463  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:20.329476  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:20.333292  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:20.333939  310801 pod_ready.go:93] pod "kube-apiserver-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:20.333972  310801 pod_ready.go:82] duration metric: took 399.826342ms for pod "kube-apiserver-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:20.333988  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:20.529058  310801 request.go:632] Waited for 194.978446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m02
	I1205 20:37:20.529147  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m02
	I1205 20:37:20.529166  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:20.529197  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:20.529204  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:20.532832  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:20.729074  310801 request.go:632] Waited for 195.37241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:20.729139  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:20.729144  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:20.729153  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:20.729156  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:20.733037  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:20.733831  310801 pod_ready.go:93] pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:20.733861  310801 pod_ready.go:82] duration metric: took 399.862982ms for pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:20.733880  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:20.928790  310801 request.go:632] Waited for 194.758856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m03
	I1205 20:37:20.928868  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m03
	I1205 20:37:20.928876  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:20.928884  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:20.928894  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:20.931768  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:21.128920  310801 request.go:632] Waited for 196.30741ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:21.129013  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:21.129018  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:21.129026  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:21.129030  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:21.132989  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:21.133733  310801 pod_ready.go:93] pod "kube-apiserver-ha-689539-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:21.133764  310801 pod_ready.go:82] duration metric: took 399.87672ms for pod "kube-apiserver-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:21.133777  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:21.329719  310801 request.go:632] Waited for 195.840899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539
	I1205 20:37:21.329822  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539
	I1205 20:37:21.329829  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:21.329840  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:21.329848  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:21.335472  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:37:21.529593  310801 request.go:632] Waited for 193.3652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:21.529688  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:21.529700  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:21.529710  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:21.529721  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:21.533118  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:21.533743  310801 pod_ready.go:93] pod "kube-controller-manager-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:21.533773  310801 pod_ready.go:82] duration metric: took 399.989891ms for pod "kube-controller-manager-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:21.533788  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:21.729770  310801 request.go:632] Waited for 195.887392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m02
	I1205 20:37:21.729855  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m02
	I1205 20:37:21.729863  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:21.729871  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:21.729877  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:21.733541  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:21.929705  310801 request.go:632] Waited for 195.397002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:21.929774  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:21.929779  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:21.929787  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:21.929792  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:21.933945  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:21.935117  310801 pod_ready.go:93] pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:21.935147  310801 pod_ready.go:82] duration metric: took 401.346008ms for pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:21.935163  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:22.129158  310801 request.go:632] Waited for 193.90126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m03
	I1205 20:37:22.129263  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m03
	I1205 20:37:22.129281  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:22.129291  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:22.129295  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:22.132774  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:22.329309  310801 request.go:632] Waited for 195.820597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:22.329371  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:22.329397  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:22.329412  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:22.329417  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:22.332841  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:22.336218  310801 pod_ready.go:93] pod "kube-controller-manager-ha-689539-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:22.336243  310801 pod_ready.go:82] duration metric: took 401.071031ms for pod "kube-controller-manager-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:22.336259  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9tslx" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:22.528770  310801 request.go:632] Waited for 192.411741ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tslx
	I1205 20:37:22.528833  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tslx
	I1205 20:37:22.528838  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:22.528846  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:22.528850  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:22.531900  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:22.729073  310801 request.go:632] Waited for 196.313572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:22.729186  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:22.729196  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:22.729206  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:22.729212  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:22.732421  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:22.733074  310801 pod_ready.go:93] pod "kube-proxy-9tslx" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:22.733099  310801 pod_ready.go:82] duration metric: took 396.833211ms for pod "kube-proxy-9tslx" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:22.733111  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dktwc" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:22.929342  310801 request.go:632] Waited for 196.122694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dktwc
	I1205 20:37:22.929410  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dktwc
	I1205 20:37:22.929416  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:22.929425  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:22.929430  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:22.932878  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:23.129758  310801 request.go:632] Waited for 196.113609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:23.129841  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:23.129849  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:23.129861  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:23.129874  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:23.133246  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:23.133786  310801 pod_ready.go:93] pod "kube-proxy-dktwc" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:23.133805  310801 pod_ready.go:82] duration metric: took 400.688784ms for pod "kube-proxy-dktwc" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:23.133815  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x2grl" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:23.329685  310801 request.go:632] Waited for 195.763713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2grl
	I1205 20:37:23.329769  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2grl
	I1205 20:37:23.329779  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:23.329788  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:23.329795  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:23.333599  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:23.528890  310801 request.go:632] Waited for 194.302329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:23.528951  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:23.528955  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:23.528966  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:23.528973  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:23.533840  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:23.534667  310801 pod_ready.go:93] pod "kube-proxy-x2grl" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:23.534691  310801 pod_ready.go:82] duration metric: took 400.868432ms for pod "kube-proxy-x2grl" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:23.534705  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:23.728815  310801 request.go:632] Waited for 194.018306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539
	I1205 20:37:23.728883  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539
	I1205 20:37:23.728888  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:23.728896  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:23.728900  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:23.732452  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:23.929580  310801 request.go:632] Waited for 196.394135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:23.929653  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:23.929659  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:23.929667  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:23.929672  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:23.933364  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:23.934147  310801 pod_ready.go:93] pod "kube-scheduler-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:23.934174  310801 pod_ready.go:82] duration metric: took 399.459723ms for pod "kube-scheduler-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:23.934191  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:24.129685  310801 request.go:632] Waited for 195.380858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m02
	I1205 20:37:24.129776  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m02
	I1205 20:37:24.129789  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.129800  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.129811  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.133305  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:24.329438  310801 request.go:632] Waited for 195.320628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:24.329517  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:24.329525  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.329544  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.329550  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.333177  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:24.333763  310801 pod_ready.go:93] pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:24.333790  310801 pod_ready.go:82] duration metric: took 399.589908ms for pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:24.333806  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:24.528866  310801 request.go:632] Waited for 194.951078ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m03
	I1205 20:37:24.528969  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m03
	I1205 20:37:24.528982  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.528997  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.529004  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.532632  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:24.729734  310801 request.go:632] Waited for 196.398947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:24.729824  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:24.729835  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.729847  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.729855  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.733450  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:24.734057  310801 pod_ready.go:93] pod "kube-scheduler-ha-689539-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:24.734085  310801 pod_ready.go:82] duration metric: took 400.271075ms for pod "kube-scheduler-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:24.734104  310801 pod_ready.go:39] duration metric: took 5.201330389s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:37:24.734128  310801 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:37:24.734202  310801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:37:24.752010  310801 api_server.go:72] duration metric: took 22.518451158s to wait for apiserver process to appear ...
	I1205 20:37:24.752054  310801 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:37:24.752086  310801 api_server.go:253] Checking apiserver healthz at https://192.168.39.220:8443/healthz ...
	I1205 20:37:24.756435  310801 api_server.go:279] https://192.168.39.220:8443/healthz returned 200:
	ok
	I1205 20:37:24.756538  310801 round_trippers.go:463] GET https://192.168.39.220:8443/version
	I1205 20:37:24.756551  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.756561  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.756569  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.757464  310801 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1205 20:37:24.757533  310801 api_server.go:141] control plane version: v1.31.2
	I1205 20:37:24.757548  310801 api_server.go:131] duration metric: took 5.486922ms to wait for apiserver health ...
	I1205 20:37:24.757559  310801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:37:24.928965  310801 request.go:632] Waited for 171.296323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:37:24.929035  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:37:24.929040  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.929049  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.929054  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.935151  310801 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 20:37:24.941691  310801 system_pods.go:59] 24 kube-system pods found
	I1205 20:37:24.941733  310801 system_pods.go:61] "coredns-7c65d6cfc9-4ln9l" [f86a233b-c3f8-416b-ac76-f18dac2a1a2c] Running
	I1205 20:37:24.941739  310801 system_pods.go:61] "coredns-7c65d6cfc9-6qhhf" [4ffff988-65eb-4585-8ce4-de4df28c6b82] Running
	I1205 20:37:24.941742  310801 system_pods.go:61] "etcd-ha-689539" [f8de63bf-a7cf-431d-bd57-ec91b43c6ce3] Running
	I1205 20:37:24.941746  310801 system_pods.go:61] "etcd-ha-689539-m02" [a0336d41-b57f-414b-aa98-2540bdde7ca0] Running
	I1205 20:37:24.941752  310801 system_pods.go:61] "etcd-ha-689539-m03" [5f491cae-394b-445a-9c1a-f4c144debab9] Running
	I1205 20:37:24.941756  310801 system_pods.go:61] "kindnet-62qw6" [9f0039aa-d5e2-49b9-adb4-ad93c96d22f0] Running
	I1205 20:37:24.941759  310801 system_pods.go:61] "kindnet-8kgs2" [d268fa7f-9d0f-400e-88ff-4acc47d4b6a0] Running
	I1205 20:37:24.941763  310801 system_pods.go:61] "kindnet-b7bf2" [ea96240c-48bf-4f92-b12c-f8e623a59784] Running
	I1205 20:37:24.941766  310801 system_pods.go:61] "kube-apiserver-ha-689539" [ecbcba0b-10ce-4bd6-84f6-8b46c3d99ad6] Running
	I1205 20:37:24.941770  310801 system_pods.go:61] "kube-apiserver-ha-689539-m02" [0c0d9613-c605-4e61-b778-c5aefa5919e9] Running
	I1205 20:37:24.941815  310801 system_pods.go:61] "kube-apiserver-ha-689539-m03" [35037a19-9a1e-4ccb-aeb6-bd098910d94d] Running
	I1205 20:37:24.941833  310801 system_pods.go:61] "kube-controller-manager-ha-689539" [859c6551-f504-4093-a730-2ba8f127e3e7] Running
	I1205 20:37:24.941841  310801 system_pods.go:61] "kube-controller-manager-ha-689539-m02" [0b119866-007c-4c4e-abfa-a38405b85cc9] Running
	I1205 20:37:24.941847  310801 system_pods.go:61] "kube-controller-manager-ha-689539-m03" [cc37de8a-b988-43a4-9dbe-18dd127bc38b] Running
	I1205 20:37:24.941854  310801 system_pods.go:61] "kube-proxy-9tslx" [3d107dc4-2d8c-4e0d-aafc-5229161537df] Running
	I1205 20:37:24.941860  310801 system_pods.go:61] "kube-proxy-dktwc" [5facc855-07f1-46f3-9862-a8c6ac01897c] Running
	I1205 20:37:24.941869  310801 system_pods.go:61] "kube-proxy-x2grl" [20dd0c16-858c-4d07-8305-ffedb52a4ee1] Running
	I1205 20:37:24.941875  310801 system_pods.go:61] "kube-scheduler-ha-689539" [2ba99954-c00c-4fa6-af5d-6d4725fa051a] Running
	I1205 20:37:24.941883  310801 system_pods.go:61] "kube-scheduler-ha-689539-m02" [d1ad2b21-b52c-47dd-ab09-2368ffeb3c7e] Running
	I1205 20:37:24.941889  310801 system_pods.go:61] "kube-scheduler-ha-689539-m03" [fc913aa4-561d-4466-b7c3-acd3d23ffa1a] Running
	I1205 20:37:24.941915  310801 system_pods.go:61] "kube-vip-ha-689539" [345f79e6-90ea-47f8-9e7f-c461a1143ba0] Running
	I1205 20:37:24.941922  310801 system_pods.go:61] "kube-vip-ha-689539-m02" [265c4a3f-0e44-43fd-bcee-35513e8e2525] Running
	I1205 20:37:24.941930  310801 system_pods.go:61] "kube-vip-ha-689539-m03" [c37018e8-e3e3-4c9e-aa57-64571b08be92] Running
	I1205 20:37:24.941939  310801 system_pods.go:61] "storage-provisioner" [e2a03e66-0718-48a3-9658-f70118ce6cae] Running
	I1205 20:37:24.941947  310801 system_pods.go:74] duration metric: took 184.37937ms to wait for pod list to return data ...
	I1205 20:37:24.941962  310801 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:37:25.129425  310801 request.go:632] Waited for 187.3488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:37:25.129501  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:37:25.129507  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:25.129515  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:25.129519  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:25.133730  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:25.133919  310801 default_sa.go:45] found service account: "default"
	I1205 20:37:25.133941  310801 default_sa.go:55] duration metric: took 191.967731ms for default service account to be created ...
	I1205 20:37:25.133958  310801 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:37:25.329286  310801 request.go:632] Waited for 195.223367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:37:25.329372  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:37:25.329380  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:25.329392  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:25.329406  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:25.335635  310801 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 20:37:25.341932  310801 system_pods.go:86] 24 kube-system pods found
	I1205 20:37:25.341974  310801 system_pods.go:89] "coredns-7c65d6cfc9-4ln9l" [f86a233b-c3f8-416b-ac76-f18dac2a1a2c] Running
	I1205 20:37:25.341980  310801 system_pods.go:89] "coredns-7c65d6cfc9-6qhhf" [4ffff988-65eb-4585-8ce4-de4df28c6b82] Running
	I1205 20:37:25.341986  310801 system_pods.go:89] "etcd-ha-689539" [f8de63bf-a7cf-431d-bd57-ec91b43c6ce3] Running
	I1205 20:37:25.341990  310801 system_pods.go:89] "etcd-ha-689539-m02" [a0336d41-b57f-414b-aa98-2540bdde7ca0] Running
	I1205 20:37:25.341993  310801 system_pods.go:89] "etcd-ha-689539-m03" [5f491cae-394b-445a-9c1a-f4c144debab9] Running
	I1205 20:37:25.341996  310801 system_pods.go:89] "kindnet-62qw6" [9f0039aa-d5e2-49b9-adb4-ad93c96d22f0] Running
	I1205 20:37:25.342000  310801 system_pods.go:89] "kindnet-8kgs2" [d268fa7f-9d0f-400e-88ff-4acc47d4b6a0] Running
	I1205 20:37:25.342003  310801 system_pods.go:89] "kindnet-b7bf2" [ea96240c-48bf-4f92-b12c-f8e623a59784] Running
	I1205 20:37:25.342008  310801 system_pods.go:89] "kube-apiserver-ha-689539" [ecbcba0b-10ce-4bd6-84f6-8b46c3d99ad6] Running
	I1205 20:37:25.342011  310801 system_pods.go:89] "kube-apiserver-ha-689539-m02" [0c0d9613-c605-4e61-b778-c5aefa5919e9] Running
	I1205 20:37:25.342015  310801 system_pods.go:89] "kube-apiserver-ha-689539-m03" [35037a19-9a1e-4ccb-aeb6-bd098910d94d] Running
	I1205 20:37:25.342018  310801 system_pods.go:89] "kube-controller-manager-ha-689539" [859c6551-f504-4093-a730-2ba8f127e3e7] Running
	I1205 20:37:25.342022  310801 system_pods.go:89] "kube-controller-manager-ha-689539-m02" [0b119866-007c-4c4e-abfa-a38405b85cc9] Running
	I1205 20:37:25.342025  310801 system_pods.go:89] "kube-controller-manager-ha-689539-m03" [cc37de8a-b988-43a4-9dbe-18dd127bc38b] Running
	I1205 20:37:25.342029  310801 system_pods.go:89] "kube-proxy-9tslx" [3d107dc4-2d8c-4e0d-aafc-5229161537df] Running
	I1205 20:37:25.342035  310801 system_pods.go:89] "kube-proxy-dktwc" [5facc855-07f1-46f3-9862-a8c6ac01897c] Running
	I1205 20:37:25.342039  310801 system_pods.go:89] "kube-proxy-x2grl" [20dd0c16-858c-4d07-8305-ffedb52a4ee1] Running
	I1205 20:37:25.342043  310801 system_pods.go:89] "kube-scheduler-ha-689539" [2ba99954-c00c-4fa6-af5d-6d4725fa051a] Running
	I1205 20:37:25.342047  310801 system_pods.go:89] "kube-scheduler-ha-689539-m02" [d1ad2b21-b52c-47dd-ab09-2368ffeb3c7e] Running
	I1205 20:37:25.342053  310801 system_pods.go:89] "kube-scheduler-ha-689539-m03" [fc913aa4-561d-4466-b7c3-acd3d23ffa1a] Running
	I1205 20:37:25.342056  310801 system_pods.go:89] "kube-vip-ha-689539" [345f79e6-90ea-47f8-9e7f-c461a1143ba0] Running
	I1205 20:37:25.342059  310801 system_pods.go:89] "kube-vip-ha-689539-m02" [265c4a3f-0e44-43fd-bcee-35513e8e2525] Running
	I1205 20:37:25.342063  310801 system_pods.go:89] "kube-vip-ha-689539-m03" [c37018e8-e3e3-4c9e-aa57-64571b08be92] Running
	I1205 20:37:25.342067  310801 system_pods.go:89] "storage-provisioner" [e2a03e66-0718-48a3-9658-f70118ce6cae] Running
	I1205 20:37:25.342077  310801 system_pods.go:126] duration metric: took 208.11212ms to wait for k8s-apps to be running ...
	I1205 20:37:25.342087  310801 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:37:25.342141  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:37:25.359925  310801 system_svc.go:56] duration metric: took 17.820163ms WaitForService to wait for kubelet
	I1205 20:37:25.359969  310801 kubeadm.go:582] duration metric: took 23.126420152s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:37:25.359998  310801 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:37:25.529464  310801 request.go:632] Waited for 169.34708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes
	I1205 20:37:25.529531  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes
	I1205 20:37:25.529543  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:25.529553  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:25.529558  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:25.534297  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:25.535249  310801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:37:25.535281  310801 node_conditions.go:123] node cpu capacity is 2
	I1205 20:37:25.535294  310801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:37:25.535298  310801 node_conditions.go:123] node cpu capacity is 2
	I1205 20:37:25.535302  310801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:37:25.535306  310801 node_conditions.go:123] node cpu capacity is 2
	I1205 20:37:25.535318  310801 node_conditions.go:105] duration metric: took 175.313275ms to run NodePressure ...
	I1205 20:37:25.535339  310801 start.go:241] waiting for startup goroutines ...
	I1205 20:37:25.535367  310801 start.go:255] writing updated cluster config ...
	I1205 20:37:25.535725  310801 ssh_runner.go:195] Run: rm -f paused
	I1205 20:37:25.590118  310801 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:37:25.592310  310801 out.go:177] * Done! kubectl is now configured to use "ha-689539" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.530045610Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431266530022849,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e20e0a68-9192-4e31-9e1a-9fe060454fb2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.530493070Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1742eaa-1638-4554-a94c-7c03b2486754 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.530561816Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1742eaa-1638-4554-a94c-7c03b2486754 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.530803000Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77e0f8ba49070d29bec8e5d622dd7ab13e23f105aaab0de1a5a92c01e16ed731,PodSandboxId:2a35c5864db38de4db2df9661fc907cd58533506ed2900ff55721ee9ef7e8073,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733431049357327660,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qjqvr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30f51118-fa9b-418f-a3a5-02a74107c7de,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05a6cfcd7e9ee9f891eb88ed5524553b60ff7eea407bf604b4bb647571d2d6bc,PodSandboxId:984c3b3f8fe032def0136810febfe8341f9285ab30c3ce2d6df35ec561964918,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910896086688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4ln9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f86a233b-c3f8-416b-ac76-f18dac2a1a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e8c78df0a6db76a9459104d39c6ef10926413179a5c4d311a9731258ab0f02,PodSandboxId:d7a154f9d8020a9378296ea0b16287d3fd54fb83d94bd93df469f8808d3670fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430910806734926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: e2a03e66-0718-48a3-9658-f70118ce6cae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6007ba446b77339e212a73381a8f38a179f608f925ff7a5ea1b5d82ae33932a,PodSandboxId:a344cd0e9a251c2b865c2838b5e161875e6d61340c124e5e6ddd88fdb8512dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910843663896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qhhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffff988-65
eb-4585-8ce4-de4df28c6b82,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0809642e9449ba90a816abcee2bd42a09b6b67ed76a448e2d048ccaf59218f61,PodSandboxId:faeac762b16891707c284f00eddfc16a831b7524637e5dbbc933c30cd8b2fe8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733430899010755558,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-62qw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0039aa-d5e2-49b9-adb4-ad93c96d22f0,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a16a5003f86348e862f9550da656af6282a73ec42e356d33277fd89aaf930df,PodSandboxId:6bc6d79587a62ca21788fe4de52bc6e9a4f3255de91b1f48365e7bc08408cac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430894
348055011,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tslx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d107dc4-2d8c-4e0d-aafc-5229161537df,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4431afbd69d99866cc27a419bb59862ef9f5cfdbb9bd9cd5bc4fc820eba9a01b,PodSandboxId:ae658c6069b4418ff55871310f01c6a0b5b0fe6e016403e3ff64bb02e0ac6a27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173343088582
7328958,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d8d33a00a36d98ae4f02477c2f0ef8f,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9238618cdfeaa8f5cda3dadc37d1e8157c6e781e2f421e149ea764a7138e42,PodSandboxId:110f95e5235dfc7dbce02b5aa1a8191d469ee5d3abffc5bfebf7a11f52ae34be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430883266472620,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3b0ba2fc46021faad87f06edada7a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2033f56968a9f0c2e15d272c7bf15a278a24d2f148a58cc7194c50096b822668,PodSandboxId:a6058ddd3ee58967eb32bd94a306e465b678afcb374ea3f93649506453556476,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430883263419187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9de31551106f5b54c143b52a0ba8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2211f15ae3cd3d69be3df3528c3d849f2b50615ce67935cdb97567a8a5fe19,PodSandboxId:f650305b876ca41a574dc76685713fd76500b7b3c5f17dbc66cdcd85cde99e34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430883237990702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91307b238b7c07f706a4534ff984ab88,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a056592a0f933a18207a4ce68694f30ffbd9c7502e8e1a5717c67850246dfb2,PodSandboxId:6d5d1a132984432f53f03c63a07dbd8083fa259a41160af40e8f0202f47d21ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430883178338000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf9467cd4c8887ece77367c75de1e85,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d1742eaa-1638-4554-a94c-7c03b2486754 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.567670065Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ad7f3a8a-bbbf-4052-8362-b949659f97ff name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.567757833Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ad7f3a8a-bbbf-4052-8362-b949659f97ff name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.569125334Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9dd88625-00a3-4ebe-b2b9-644984c60b02 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.569629151Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431266569599509,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9dd88625-00a3-4ebe-b2b9-644984c60b02 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.570301289Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1d51cda7-e3b9-4223-8bb3-61ffae374ad0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.570368099Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1d51cda7-e3b9-4223-8bb3-61ffae374ad0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.570592914Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77e0f8ba49070d29bec8e5d622dd7ab13e23f105aaab0de1a5a92c01e16ed731,PodSandboxId:2a35c5864db38de4db2df9661fc907cd58533506ed2900ff55721ee9ef7e8073,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733431049357327660,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qjqvr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30f51118-fa9b-418f-a3a5-02a74107c7de,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05a6cfcd7e9ee9f891eb88ed5524553b60ff7eea407bf604b4bb647571d2d6bc,PodSandboxId:984c3b3f8fe032def0136810febfe8341f9285ab30c3ce2d6df35ec561964918,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910896086688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4ln9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f86a233b-c3f8-416b-ac76-f18dac2a1a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e8c78df0a6db76a9459104d39c6ef10926413179a5c4d311a9731258ab0f02,PodSandboxId:d7a154f9d8020a9378296ea0b16287d3fd54fb83d94bd93df469f8808d3670fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430910806734926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: e2a03e66-0718-48a3-9658-f70118ce6cae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6007ba446b77339e212a73381a8f38a179f608f925ff7a5ea1b5d82ae33932a,PodSandboxId:a344cd0e9a251c2b865c2838b5e161875e6d61340c124e5e6ddd88fdb8512dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910843663896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qhhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffff988-65
eb-4585-8ce4-de4df28c6b82,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0809642e9449ba90a816abcee2bd42a09b6b67ed76a448e2d048ccaf59218f61,PodSandboxId:faeac762b16891707c284f00eddfc16a831b7524637e5dbbc933c30cd8b2fe8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733430899010755558,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-62qw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0039aa-d5e2-49b9-adb4-ad93c96d22f0,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a16a5003f86348e862f9550da656af6282a73ec42e356d33277fd89aaf930df,PodSandboxId:6bc6d79587a62ca21788fe4de52bc6e9a4f3255de91b1f48365e7bc08408cac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430894
348055011,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tslx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d107dc4-2d8c-4e0d-aafc-5229161537df,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4431afbd69d99866cc27a419bb59862ef9f5cfdbb9bd9cd5bc4fc820eba9a01b,PodSandboxId:ae658c6069b4418ff55871310f01c6a0b5b0fe6e016403e3ff64bb02e0ac6a27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173343088582
7328958,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d8d33a00a36d98ae4f02477c2f0ef8f,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9238618cdfeaa8f5cda3dadc37d1e8157c6e781e2f421e149ea764a7138e42,PodSandboxId:110f95e5235dfc7dbce02b5aa1a8191d469ee5d3abffc5bfebf7a11f52ae34be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430883266472620,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3b0ba2fc46021faad87f06edada7a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2033f56968a9f0c2e15d272c7bf15a278a24d2f148a58cc7194c50096b822668,PodSandboxId:a6058ddd3ee58967eb32bd94a306e465b678afcb374ea3f93649506453556476,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430883263419187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9de31551106f5b54c143b52a0ba8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2211f15ae3cd3d69be3df3528c3d849f2b50615ce67935cdb97567a8a5fe19,PodSandboxId:f650305b876ca41a574dc76685713fd76500b7b3c5f17dbc66cdcd85cde99e34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430883237990702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91307b238b7c07f706a4534ff984ab88,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a056592a0f933a18207a4ce68694f30ffbd9c7502e8e1a5717c67850246dfb2,PodSandboxId:6d5d1a132984432f53f03c63a07dbd8083fa259a41160af40e8f0202f47d21ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430883178338000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf9467cd4c8887ece77367c75de1e85,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1d51cda7-e3b9-4223-8bb3-61ffae374ad0 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.614680143Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9aee11b9-76df-4724-9010-ed4c1f7367ab name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.614771107Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9aee11b9-76df-4724-9010-ed4c1f7367ab name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.616990189Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9fb017e2-94ce-4446-baf2-0e348a8f83e3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.617474884Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431266617445483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9fb017e2-94ce-4446-baf2-0e348a8f83e3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.618144503Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5afd8d4c-f693-49e0-b277-bf55857985a6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.618199696Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5afd8d4c-f693-49e0-b277-bf55857985a6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.618492046Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77e0f8ba49070d29bec8e5d622dd7ab13e23f105aaab0de1a5a92c01e16ed731,PodSandboxId:2a35c5864db38de4db2df9661fc907cd58533506ed2900ff55721ee9ef7e8073,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733431049357327660,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qjqvr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30f51118-fa9b-418f-a3a5-02a74107c7de,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05a6cfcd7e9ee9f891eb88ed5524553b60ff7eea407bf604b4bb647571d2d6bc,PodSandboxId:984c3b3f8fe032def0136810febfe8341f9285ab30c3ce2d6df35ec561964918,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910896086688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4ln9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f86a233b-c3f8-416b-ac76-f18dac2a1a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e8c78df0a6db76a9459104d39c6ef10926413179a5c4d311a9731258ab0f02,PodSandboxId:d7a154f9d8020a9378296ea0b16287d3fd54fb83d94bd93df469f8808d3670fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430910806734926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: e2a03e66-0718-48a3-9658-f70118ce6cae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6007ba446b77339e212a73381a8f38a179f608f925ff7a5ea1b5d82ae33932a,PodSandboxId:a344cd0e9a251c2b865c2838b5e161875e6d61340c124e5e6ddd88fdb8512dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910843663896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qhhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffff988-65
eb-4585-8ce4-de4df28c6b82,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0809642e9449ba90a816abcee2bd42a09b6b67ed76a448e2d048ccaf59218f61,PodSandboxId:faeac762b16891707c284f00eddfc16a831b7524637e5dbbc933c30cd8b2fe8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733430899010755558,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-62qw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0039aa-d5e2-49b9-adb4-ad93c96d22f0,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a16a5003f86348e862f9550da656af6282a73ec42e356d33277fd89aaf930df,PodSandboxId:6bc6d79587a62ca21788fe4de52bc6e9a4f3255de91b1f48365e7bc08408cac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430894
348055011,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tslx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d107dc4-2d8c-4e0d-aafc-5229161537df,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4431afbd69d99866cc27a419bb59862ef9f5cfdbb9bd9cd5bc4fc820eba9a01b,PodSandboxId:ae658c6069b4418ff55871310f01c6a0b5b0fe6e016403e3ff64bb02e0ac6a27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173343088582
7328958,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d8d33a00a36d98ae4f02477c2f0ef8f,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9238618cdfeaa8f5cda3dadc37d1e8157c6e781e2f421e149ea764a7138e42,PodSandboxId:110f95e5235dfc7dbce02b5aa1a8191d469ee5d3abffc5bfebf7a11f52ae34be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430883266472620,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3b0ba2fc46021faad87f06edada7a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2033f56968a9f0c2e15d272c7bf15a278a24d2f148a58cc7194c50096b822668,PodSandboxId:a6058ddd3ee58967eb32bd94a306e465b678afcb374ea3f93649506453556476,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430883263419187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9de31551106f5b54c143b52a0ba8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2211f15ae3cd3d69be3df3528c3d849f2b50615ce67935cdb97567a8a5fe19,PodSandboxId:f650305b876ca41a574dc76685713fd76500b7b3c5f17dbc66cdcd85cde99e34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430883237990702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91307b238b7c07f706a4534ff984ab88,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a056592a0f933a18207a4ce68694f30ffbd9c7502e8e1a5717c67850246dfb2,PodSandboxId:6d5d1a132984432f53f03c63a07dbd8083fa259a41160af40e8f0202f47d21ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430883178338000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf9467cd4c8887ece77367c75de1e85,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5afd8d4c-f693-49e0-b277-bf55857985a6 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.654768696Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e058f383-0d44-430c-a9a1-c5865e051fc6 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.654865468Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e058f383-0d44-430c-a9a1-c5865e051fc6 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.656582438Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f95933e0-ef90-47eb-b911-6d8e43f1ce06 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.657634823Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431266657499711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f95933e0-ef90-47eb-b911-6d8e43f1ce06 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.661219756Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8c3ad43e-0da9-4afe-9a14-b66151191e4b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.661335909Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8c3ad43e-0da9-4afe-9a14-b66151191e4b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:06 ha-689539 crio[658]: time="2024-12-05 20:41:06.661552952Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77e0f8ba49070d29bec8e5d622dd7ab13e23f105aaab0de1a5a92c01e16ed731,PodSandboxId:2a35c5864db38de4db2df9661fc907cd58533506ed2900ff55721ee9ef7e8073,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733431049357327660,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qjqvr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30f51118-fa9b-418f-a3a5-02a74107c7de,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05a6cfcd7e9ee9f891eb88ed5524553b60ff7eea407bf604b4bb647571d2d6bc,PodSandboxId:984c3b3f8fe032def0136810febfe8341f9285ab30c3ce2d6df35ec561964918,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910896086688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4ln9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f86a233b-c3f8-416b-ac76-f18dac2a1a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e8c78df0a6db76a9459104d39c6ef10926413179a5c4d311a9731258ab0f02,PodSandboxId:d7a154f9d8020a9378296ea0b16287d3fd54fb83d94bd93df469f8808d3670fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430910806734926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: e2a03e66-0718-48a3-9658-f70118ce6cae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6007ba446b77339e212a73381a8f38a179f608f925ff7a5ea1b5d82ae33932a,PodSandboxId:a344cd0e9a251c2b865c2838b5e161875e6d61340c124e5e6ddd88fdb8512dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910843663896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qhhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffff988-65
eb-4585-8ce4-de4df28c6b82,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0809642e9449ba90a816abcee2bd42a09b6b67ed76a448e2d048ccaf59218f61,PodSandboxId:faeac762b16891707c284f00eddfc16a831b7524637e5dbbc933c30cd8b2fe8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733430899010755558,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-62qw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0039aa-d5e2-49b9-adb4-ad93c96d22f0,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a16a5003f86348e862f9550da656af6282a73ec42e356d33277fd89aaf930df,PodSandboxId:6bc6d79587a62ca21788fe4de52bc6e9a4f3255de91b1f48365e7bc08408cac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430894
348055011,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tslx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d107dc4-2d8c-4e0d-aafc-5229161537df,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4431afbd69d99866cc27a419bb59862ef9f5cfdbb9bd9cd5bc4fc820eba9a01b,PodSandboxId:ae658c6069b4418ff55871310f01c6a0b5b0fe6e016403e3ff64bb02e0ac6a27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173343088582
7328958,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d8d33a00a36d98ae4f02477c2f0ef8f,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9238618cdfeaa8f5cda3dadc37d1e8157c6e781e2f421e149ea764a7138e42,PodSandboxId:110f95e5235dfc7dbce02b5aa1a8191d469ee5d3abffc5bfebf7a11f52ae34be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430883266472620,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3b0ba2fc46021faad87f06edada7a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2033f56968a9f0c2e15d272c7bf15a278a24d2f148a58cc7194c50096b822668,PodSandboxId:a6058ddd3ee58967eb32bd94a306e465b678afcb374ea3f93649506453556476,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430883263419187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9de31551106f5b54c143b52a0ba8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2211f15ae3cd3d69be3df3528c3d849f2b50615ce67935cdb97567a8a5fe19,PodSandboxId:f650305b876ca41a574dc76685713fd76500b7b3c5f17dbc66cdcd85cde99e34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430883237990702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91307b238b7c07f706a4534ff984ab88,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a056592a0f933a18207a4ce68694f30ffbd9c7502e8e1a5717c67850246dfb2,PodSandboxId:6d5d1a132984432f53f03c63a07dbd8083fa259a41160af40e8f0202f47d21ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430883178338000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf9467cd4c8887ece77367c75de1e85,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8c3ad43e-0da9-4afe-9a14-b66151191e4b name=/runtime.v1.RuntimeService/ListContainers
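	The CRI-O entries above are debug-level gRPC request/response traces from the runtime's journal. A minimal way to tail the same stream on the node, assuming SSH access through the minikube CLI and that CRI-O runs as a systemd unit named crio (as it does in the minikube guest image):
	
	    minikube ssh -p ha-689539 -- sudo journalctl -u crio -f --no-pager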
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	77e0f8ba49070       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   2a35c5864db38       busybox-7dff88458-qjqvr
	05a6cfcd7e9ee       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   984c3b3f8fe03       coredns-7c65d6cfc9-4ln9l
	c6007ba446b77       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      5 minutes ago       Running             coredns                   0                   a344cd0e9a251       coredns-7c65d6cfc9-6qhhf
	74e8c78df0a6d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Running             storage-provisioner       0                   d7a154f9d8020       storage-provisioner
	0809642e9449b       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   faeac762b1689       kindnet-62qw6
	0a16a5003f863       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   6bc6d79587a62       kube-proxy-9tslx
	4431afbd69d99       ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517     6 minutes ago       Running             kube-vip                  0                   ae658c6069b44       kube-vip-ha-689539
	1e9238618cdfe       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   110f95e5235df       etcd-ha-689539
	2033f56968a9f       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   a6058ddd3ee58       kube-scheduler-ha-689539
	cd2211f15ae3c       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   f650305b876ca       kube-apiserver-ha-689539
	4a056592a0f93       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   6d5d1a1329844       kube-controller-manager-ha-689539
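	The container status table above reflects what the CRI runtime reports directly; a minimal sketch for reproducing it inside the node, assuming crictl is available in the minikube guest (it ships with recent minikube images):
	
	    minikube ssh -p ha-689539 -- sudo crictl ps -a -o table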
	
	
	==> coredns [05a6cfcd7e9ee9f891eb88ed5524553b60ff7eea407bf604b4bb647571d2d6bc] <==
	[INFO] 10.244.0.4:44188 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002182194s
	[INFO] 10.244.1.2:41292 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000169551s
	[INFO] 10.244.1.2:38453 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003584311s
	[INFO] 10.244.1.2:36084 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000201777s
	[INFO] 10.244.1.2:49408 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133503s
	[INFO] 10.244.2.2:51533 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117849s
	[INFO] 10.244.2.2:34176 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00018539s
	[INFO] 10.244.2.2:43670 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000178861s
	[INFO] 10.244.2.2:56974 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148401s
	[INFO] 10.244.0.4:48841 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170335s
	[INFO] 10.244.0.4:43111 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001409238s
	[INFO] 10.244.0.4:36893 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093314s
	[INFO] 10.244.0.4:50555 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104324s
	[INFO] 10.244.1.2:43568 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116735s
	[INFO] 10.244.1.2:44480 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066571s
	[INFO] 10.244.1.2:60247 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058674s
	[INFO] 10.244.2.2:49472 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121084s
	[INFO] 10.244.0.4:57046 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160079s
	[INFO] 10.244.0.4:44460 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119738s
	[INFO] 10.244.1.2:37203 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178276s
	[INFO] 10.244.1.2:59196 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000213381s
	[INFO] 10.244.1.2:41969 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000159543s
	[INFO] 10.244.1.2:60294 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120046s
	[INFO] 10.244.2.2:42519 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177647s
	[INFO] 10.244.0.4:60229 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000056377s
	
	
	==> coredns [c6007ba446b77339e212a73381a8f38a179f608f925ff7a5ea1b5d82ae33932a] <==
	[INFO] 10.244.0.4:55355 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000054352s
	[INFO] 10.244.1.2:33933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161165s
	[INFO] 10.244.1.2:37174 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003884442s
	[INFO] 10.244.1.2:41634 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152882s
	[INFO] 10.244.1.2:60548 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176047s
	[INFO] 10.244.2.2:32947 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146675s
	[INFO] 10.244.2.2:60319 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001949836s
	[INFO] 10.244.2.2:48727 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001337037s
	[INFO] 10.244.2.2:56733 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149582s
	[INFO] 10.244.0.4:58646 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001891441s
	[INFO] 10.244.0.4:55352 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164932s
	[INFO] 10.244.0.4:54745 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100872s
	[INFO] 10.244.0.4:51217 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122097s
	[INFO] 10.244.1.2:52959 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137256s
	[INFO] 10.244.2.2:52934 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147111s
	[INFO] 10.244.2.2:34173 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119001s
	[INFO] 10.244.2.2:41909 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126707s
	[INFO] 10.244.0.4:46512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120087s
	[INFO] 10.244.0.4:35647 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000218624s
	[INFO] 10.244.2.2:51797 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000211308s
	[INFO] 10.244.2.2:38193 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000207361s
	[INFO] 10.244.2.2:55117 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135379s
	[INFO] 10.244.0.4:46265 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114618s
	[INFO] 10.244.0.4:43082 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000145713s
	[INFO] 10.244.0.4:59763 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071668s
	
	
	==> describe nodes <==
	Name:               ha-689539
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-689539
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=ha-689539
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T20_34_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:34:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-689539
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:40:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:37:53 +0000   Thu, 05 Dec 2024 20:34:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:37:53 +0000   Thu, 05 Dec 2024 20:34:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:37:53 +0000   Thu, 05 Dec 2024 20:34:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:37:53 +0000   Thu, 05 Dec 2024 20:35:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-689539
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3fcfe17cf29247c89ef6261408cdec57
	  System UUID:                3fcfe17c-f292-47c8-9ef6-261408cdec57
	  Boot ID:                    0967c504-1cf1-4d64-84b3-abc762e82552
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qjqvr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 coredns-7c65d6cfc9-4ln9l             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m13s
	  kube-system                 coredns-7c65d6cfc9-6qhhf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m13s
	  kube-system                 etcd-ha-689539                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m17s
	  kube-system                 kindnet-62qw6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m13s
	  kube-system                 kube-apiserver-ha-689539             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-controller-manager-ha-689539    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-proxy-9tslx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	  kube-system                 kube-scheduler-ha-689539             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-vip-ha-689539                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m13s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m12s  kube-proxy       
	  Normal  Starting                 6m17s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m17s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m17s  kubelet          Node ha-689539 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m17s  kubelet          Node ha-689539 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m17s  kubelet          Node ha-689539 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m13s  node-controller  Node ha-689539 event: Registered Node ha-689539 in Controller
	  Normal  NodeReady                5m56s  kubelet          Node ha-689539 status is now: NodeReady
	  Normal  RegisteredNode           5m13s  node-controller  Node ha-689539 event: Registered Node ha-689539 in Controller
	  Normal  RegisteredNode           4m     node-controller  Node ha-689539 event: Registered Node ha-689539 in Controller
	
	
	Name:               ha-689539-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-689539-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=ha-689539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T20_35_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:35:45 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-689539-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:38:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 05 Dec 2024 20:37:46 +0000   Thu, 05 Dec 2024 20:39:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 05 Dec 2024 20:37:46 +0000   Thu, 05 Dec 2024 20:39:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 05 Dec 2024 20:37:46 +0000   Thu, 05 Dec 2024 20:39:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 05 Dec 2024 20:37:46 +0000   Thu, 05 Dec 2024 20:39:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    ha-689539-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2527423e09b7455fb49f08b5007d8aaf
	  System UUID:                2527423e-09b7-455f-b49f-08b5007d8aaf
	  Boot ID:                    693fb661-afc0-4a4b-8d66-7434b8ba3be0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7ss94                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	  kube-system                 etcd-ha-689539-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m20s
	  kube-system                 kindnet-b7bf2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m21s
	  kube-system                 kube-apiserver-ha-689539-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m20s
	  kube-system                 kube-controller-manager-ha-689539-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m12s
	  kube-system                 kube-proxy-x2grl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-scheduler-ha-689539-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m15s
	  kube-system                 kube-vip-ha-689539-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m18s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m21s (x8 over 5m22s)  kubelet          Node ha-689539-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m21s (x8 over 5m22s)  kubelet          Node ha-689539-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m21s (x7 over 5m22s)  kubelet          Node ha-689539-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m18s                  node-controller  Node ha-689539-m02 event: Registered Node ha-689539-m02 in Controller
	  Normal  RegisteredNode           5m13s                  node-controller  Node ha-689539-m02 event: Registered Node ha-689539-m02 in Controller
	  Normal  RegisteredNode           4m                     node-controller  Node ha-689539-m02 event: Registered Node ha-689539-m02 in Controller
	  Normal  NodeNotReady             105s                   node-controller  Node ha-689539-m02 status is now: NodeNotReady
	
	
	Name:               ha-689539-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-689539-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=ha-689539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T20_37_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:36:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-689539-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:41:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:37:59 +0000   Thu, 05 Dec 2024 20:36:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:37:59 +0000   Thu, 05 Dec 2024 20:36:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:37:59 +0000   Thu, 05 Dec 2024 20:36:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:37:59 +0000   Thu, 05 Dec 2024 20:37:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.133
	  Hostname:    ha-689539-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 23c133dbe3f244679269ca86c6b2111d
	  System UUID:                23c133db-e3f2-4467-9269-ca86c6b2111d
	  Boot ID:                    72ade07d-4013-4096-9862-81be930c4b6f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ns455                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m41s
	  kube-system                 etcd-ha-689539-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m7s
	  kube-system                 kindnet-8kgs2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m9s
	  kube-system                 kube-apiserver-ha-689539-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-controller-manager-ha-689539-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-proxy-dktwc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-scheduler-ha-689539-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m6s
	  kube-system                 kube-vip-ha-689539-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m4s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  4m9s (x8 over 4m9s)  kubelet          Node ha-689539-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m9s (x8 over 4m9s)  kubelet          Node ha-689539-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m9s (x7 over 4m9s)  kubelet          Node ha-689539-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-689539-m03 event: Registered Node ha-689539-m03 in Controller
	  Normal  RegisteredNode           4m4s                 node-controller  Node ha-689539-m03 event: Registered Node ha-689539-m03 in Controller
	  Normal  RegisteredNode           4m1s                 node-controller  Node ha-689539-m03 event: Registered Node ha-689539-m03 in Controller
	
	
	Name:               ha-689539-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-689539-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=ha-689539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T20_38_06_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:38:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-689539-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:41:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:38:36 +0000   Thu, 05 Dec 2024 20:38:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:38:36 +0000   Thu, 05 Dec 2024 20:38:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:38:36 +0000   Thu, 05 Dec 2024 20:38:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:38:36 +0000   Thu, 05 Dec 2024 20:38:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.199
	  Hostname:    ha-689539-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d82a84b2609b470c8ddc16781015ee6d
	  System UUID:                d82a84b2-609b-470c-8ddc-16781015ee6d
	  Boot ID:                    c6aff0b9-eb25-4035-add5-dcc47c5c8348
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9xbpp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m2s
	  kube-system                 kube-proxy-kpbrd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  3m2s (x2 over 3m3s)  kubelet          Node ha-689539-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x2 over 3m3s)  kubelet          Node ha-689539-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x2 over 3m3s)  kubelet          Node ha-689539-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m1s                 node-controller  Node ha-689539-m04 event: Registered Node ha-689539-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-689539-m04 event: Registered Node ha-689539-m04 in Controller
	  Normal  RegisteredNode           2m59s                node-controller  Node ha-689539-m04 event: Registered Node ha-689539-m04 in Controller
	  Normal  NodeReady                2m42s                kubelet          Node ha-689539-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 5 20:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049641] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039465] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.885977] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.016771] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.614002] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.712547] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.063478] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058841] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.182620] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.134116] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.286058] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +3.983127] systemd-fstab-generator[741]: Ignoring "noauto" option for root device
	[  +4.083666] systemd-fstab-generator[871]: Ignoring "noauto" option for root device
	[  +0.057216] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.189676] systemd-fstab-generator[1290]: Ignoring "noauto" option for root device
	[  +0.088639] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.119203] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.279281] kauditd_printk_skb: 19 callbacks suppressed
	[Dec 5 20:35] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [1e9238618cdfeaa8f5cda3dadc37d1e8157c6e781e2f421e149ea764a7138e42] <==
	{"level":"warn","ts":"2024-12-05T20:41:06.589152Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:06.591274Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:06.673733Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:06.691462Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:06.791415Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:06.940006Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:06.949204Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:06.956660Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:06.963202Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:06.966856Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:06.975134Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:06.981219Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:06.987699Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:06.990971Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:06.992164Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:06.996381Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:07.002875Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:07.009613Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:07.016461Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:07.021002Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:07.025851Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:07.030642Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:07.038397Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:07.056680Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:07.091832Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:41:07 up 6 min,  0 users,  load average: 0.26, 0.25, 0.11
	Linux ha-689539 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0809642e9449ba90a816abcee2bd42a09b6b67ed76a448e2d048ccaf59218f61] <==
	I1205 20:40:29.972197       1 main.go:324] Node ha-689539-m03 has CIDR [10.244.2.0/24] 
	I1205 20:40:39.973161       1 main.go:297] Handling node with IPs: map[192.168.39.133:{}]
	I1205 20:40:39.973209       1 main.go:324] Node ha-689539-m03 has CIDR [10.244.2.0/24] 
	I1205 20:40:39.973681       1 main.go:297] Handling node with IPs: map[192.168.39.199:{}]
	I1205 20:40:39.973710       1 main.go:324] Node ha-689539-m04 has CIDR [10.244.3.0/24] 
	I1205 20:40:39.975624       1 main.go:297] Handling node with IPs: map[192.168.39.220:{}]
	I1205 20:40:39.975653       1 main.go:301] handling current node
	I1205 20:40:39.975666       1 main.go:297] Handling node with IPs: map[192.168.39.224:{}]
	I1205 20:40:39.975671       1 main.go:324] Node ha-689539-m02 has CIDR [10.244.1.0/24] 
	I1205 20:40:49.971686       1 main.go:297] Handling node with IPs: map[192.168.39.133:{}]
	I1205 20:40:49.971811       1 main.go:324] Node ha-689539-m03 has CIDR [10.244.2.0/24] 
	I1205 20:40:49.972022       1 main.go:297] Handling node with IPs: map[192.168.39.199:{}]
	I1205 20:40:49.972032       1 main.go:324] Node ha-689539-m04 has CIDR [10.244.3.0/24] 
	I1205 20:40:49.972125       1 main.go:297] Handling node with IPs: map[192.168.39.220:{}]
	I1205 20:40:49.972132       1 main.go:301] handling current node
	I1205 20:40:49.972143       1 main.go:297] Handling node with IPs: map[192.168.39.224:{}]
	I1205 20:40:49.972147       1 main.go:324] Node ha-689539-m02 has CIDR [10.244.1.0/24] 
	I1205 20:40:59.972467       1 main.go:297] Handling node with IPs: map[192.168.39.220:{}]
	I1205 20:40:59.972574       1 main.go:301] handling current node
	I1205 20:40:59.972604       1 main.go:297] Handling node with IPs: map[192.168.39.224:{}]
	I1205 20:40:59.972621       1 main.go:324] Node ha-689539-m02 has CIDR [10.244.1.0/24] 
	I1205 20:40:59.972884       1 main.go:297] Handling node with IPs: map[192.168.39.133:{}]
	I1205 20:40:59.972920       1 main.go:324] Node ha-689539-m03 has CIDR [10.244.2.0/24] 
	I1205 20:40:59.973088       1 main.go:297] Handling node with IPs: map[192.168.39.199:{}]
	I1205 20:40:59.973124       1 main.go:324] Node ha-689539-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [cd2211f15ae3cd3d69be3df3528c3d849f2b50615ce67935cdb97567a8a5fe19] <==
	W1205 20:34:48.005731       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.220]
	I1205 20:34:48.006729       1 controller.go:615] quota admission added evaluator for: endpoints
	I1205 20:34:48.014987       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 20:34:48.223693       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1205 20:34:49.561495       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1205 20:34:49.580677       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1205 20:34:49.727059       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1205 20:34:53.679365       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1205 20:34:53.876376       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1205 20:37:30.985923       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44596: use of closed network connection
	E1205 20:37:31.179622       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44600: use of closed network connection
	E1205 20:37:31.382888       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44610: use of closed network connection
	E1205 20:37:31.582068       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44622: use of closed network connection
	E1205 20:37:31.774198       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44652: use of closed network connection
	E1205 20:37:31.958030       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44666: use of closed network connection
	E1205 20:37:32.140428       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44686: use of closed network connection
	E1205 20:37:32.322775       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44704: use of closed network connection
	E1205 20:37:32.515908       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44718: use of closed network connection
	E1205 20:37:32.837161       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44756: use of closed network connection
	E1205 20:37:33.022723       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44776: use of closed network connection
	E1205 20:37:33.209590       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44790: use of closed network connection
	E1205 20:37:33.392904       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44808: use of closed network connection
	E1205 20:37:33.581589       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44830: use of closed network connection
	E1205 20:37:33.765728       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44852: use of closed network connection
	W1205 20:38:58.016885       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.133 192.168.39.220]
	
	
	==> kube-controller-manager [4a056592a0f933a18207a4ce68694f30ffbd9c7502e8e1a5717c67850246dfb2] <==
	I1205 20:38:05.497632       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-689539-m04" podCIDRs=["10.244.3.0/24"]
	I1205 20:38:05.497693       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:05.497786       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:05.524265       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:06.322551       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:06.681995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:06.924972       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:08.069639       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:08.145190       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:08.229546       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:08.230026       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-689539-m04"
	I1205 20:38:08.272217       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:15.550194       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:25.133022       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:25.133713       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-689539-m04"
	I1205 20:38:25.164347       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:26.915918       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:36.091312       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:39:21.941441       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m02"
	I1205 20:39:21.941592       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-689539-m04"
	I1205 20:39:21.962901       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m02"
	I1205 20:39:21.988464       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.390336ms"
	I1205 20:39:21.988772       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="153.307µs"
	I1205 20:39:23.353917       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m02"
	I1205 20:39:27.137479       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m02"
	
	
	==> kube-proxy [0a16a5003f86348e862f9550da656af6282a73ec42e356d33277fd89aaf930df] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 20:34:54.543864       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 20:34:54.553756       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.220"]
	E1205 20:34:54.553891       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 20:34:54.586394       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 20:34:54.586517       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:34:54.586562       1 server_linux.go:169] "Using iptables Proxier"
	I1205 20:34:54.589547       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 20:34:54.589875       1 server.go:483] "Version info" version="v1.31.2"
	I1205 20:34:54.589968       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:34:54.592476       1 config.go:199] "Starting service config controller"
	I1205 20:34:54.594797       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 20:34:54.592516       1 config.go:105] "Starting endpoint slice config controller"
	I1205 20:34:54.594853       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 20:34:54.600348       1 config.go:328] "Starting node config controller"
	I1205 20:34:54.601332       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 20:34:54.695425       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 20:34:54.695636       1 shared_informer.go:320] Caches are synced for service config
	I1205 20:34:54.701955       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2033f56968a9f0c2e15d272c7bf15a278a24d2f148a58cc7194c50096b822668] <==
	E1205 20:34:47.293214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:34:47.324868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 20:34:47.324938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:34:47.340705       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 20:34:47.340848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 20:34:47.360711       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 20:34:47.360829       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1205 20:34:47.402644       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 20:34:47.402751       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:34:47.409130       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 20:34:47.409228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:34:47.580992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 20:34:47.581091       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1205 20:34:49.941328       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1205 20:37:26.487849       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ns455\": pod busybox-7dff88458-ns455 is already assigned to node \"ha-689539-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-ns455" node="ha-689539-m03"
	E1205 20:37:26.487974       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c47c5104-83dc-428d-8ded-5175eff6643c(default/busybox-7dff88458-ns455) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-ns455"
	E1205 20:37:26.488011       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ns455\": pod busybox-7dff88458-ns455 is already assigned to node \"ha-689539-m03\"" pod="default/busybox-7dff88458-ns455"
	I1205 20:37:26.488039       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-ns455" node="ha-689539-m03"
	E1205 20:37:26.529460       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-qjqvr\": pod busybox-7dff88458-qjqvr is already assigned to node \"ha-689539\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-qjqvr" node="ha-689539"
	E1205 20:37:26.531731       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-qjqvr\": pod busybox-7dff88458-qjqvr is already assigned to node \"ha-689539\"" pod="default/busybox-7dff88458-qjqvr"
	I1205 20:37:26.532951       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-qjqvr" node="ha-689539"
	E1205 20:38:05.558984       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mqzp5\": pod kindnet-mqzp5 is already assigned to node \"ha-689539-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-mqzp5" node="ha-689539-m04"
	E1205 20:38:05.565872       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 83d09bad-5a47-45ec-b467-0231a40ad9f0(kube-system/kindnet-mqzp5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-mqzp5"
	E1205 20:38:05.566103       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mqzp5\": pod kindnet-mqzp5 is already assigned to node \"ha-689539-m04\"" pod="kube-system/kindnet-mqzp5"
	I1205 20:38:05.566218       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mqzp5" node="ha-689539-m04"
	
	
	==> kubelet <==
	Dec 05 20:39:49 ha-689539 kubelet[1297]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 20:39:49 ha-689539 kubelet[1297]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 20:39:49 ha-689539 kubelet[1297]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 20:39:49 ha-689539 kubelet[1297]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 20:39:49 ha-689539 kubelet[1297]: E1205 20:39:49.801882    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431189801654914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:39:49 ha-689539 kubelet[1297]: E1205 20:39:49.801906    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431189801654914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:39:59 ha-689539 kubelet[1297]: E1205 20:39:59.803793    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431199803419655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:39:59 ha-689539 kubelet[1297]: E1205 20:39:59.804270    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431199803419655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:09 ha-689539 kubelet[1297]: E1205 20:40:09.807394    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431209806841990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:09 ha-689539 kubelet[1297]: E1205 20:40:09.807450    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431209806841990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:19 ha-689539 kubelet[1297]: E1205 20:40:19.811009    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431219810315680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:19 ha-689539 kubelet[1297]: E1205 20:40:19.811103    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431219810315680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:29 ha-689539 kubelet[1297]: E1205 20:40:29.812356    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431229811933429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:29 ha-689539 kubelet[1297]: E1205 20:40:29.812422    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431229811933429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:39 ha-689539 kubelet[1297]: E1205 20:40:39.814301    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431239813835089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:39 ha-689539 kubelet[1297]: E1205 20:40:39.814613    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431239813835089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:49 ha-689539 kubelet[1297]: E1205 20:40:49.759293    1297 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 20:40:49 ha-689539 kubelet[1297]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 20:40:49 ha-689539 kubelet[1297]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 20:40:49 ha-689539 kubelet[1297]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 20:40:49 ha-689539 kubelet[1297]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 20:40:49 ha-689539 kubelet[1297]: E1205 20:40:49.816382    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431249816019108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:49 ha-689539 kubelet[1297]: E1205 20:40:49.816591    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431249816019108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:59 ha-689539 kubelet[1297]: E1205 20:40:59.821073    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431259819028062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:59 ha-689539 kubelet[1297]: E1205 20:40:59.821410    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431259819028062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-689539 -n ha-689539
helpers_test.go:261: (dbg) Run:  kubectl --context ha-689539 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (141.54s)
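Note on the repeated kubelet errors in the log above: the eviction manager keeps failing with "missing image stats", and the ImageFsInfoResponse it prints has a populated ImageFilesystems entry but an empty ContainerFilesystems list, which appears to be what it treats as missing. A quick way to see what CRI-O itself reports is to query it from inside the node. The Go sketch below is only illustrative, not part of the test suite; it reuses the binary path and profile name from this run and assumes the cluster is still reachable.

// Hedged sketch: inspect what the container runtime reports for image
// filesystem usage on the node from this run. The profile name (ha-689539)
// and binary path are taken from the log above; everything else is illustrative.
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Ask CRI-O (via crictl inside the minikube VM) for its image filesystem
	// info and compare it with the ImageFsInfoResponse printed by the
	// eviction-manager errors above.
	out, err := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "ha-689539",
		"--", "sudo", "crictl", "imagefsinfo").CombinedOutput()
	if err != nil {
		log.Fatalf("crictl imagefsinfo failed: %v\n%s", err, out)
	}
	fmt.Println(string(out))
}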

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.406579318s)
ha_test.go:415: expected profile "ha-689539" in json of 'profile list' to have "Degraded" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-689539\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-689539\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-689539\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.220\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.224\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.133\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.199\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-689539 -n ha-689539
helpers_test.go:244: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-689539 logs -n 25: (1.392183396s)
helpers_test.go:252: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-689539 cp ha-689539-m03:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1989065978/001/cp-test_ha-689539-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m03:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539:/home/docker/cp-test_ha-689539-m03_ha-689539.txt                       |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539 sudo cat                                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m03_ha-689539.txt                                 |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m03:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m02:/home/docker/cp-test_ha-689539-m03_ha-689539-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m02 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m03_ha-689539-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m03:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04:/home/docker/cp-test_ha-689539-m03_ha-689539-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m04 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m03_ha-689539-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-689539 cp testdata/cp-test.txt                                                | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1989065978/001/cp-test_ha-689539-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539:/home/docker/cp-test_ha-689539-m04_ha-689539.txt                       |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539 sudo cat                                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m04_ha-689539.txt                                 |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m02:/home/docker/cp-test_ha-689539-m04_ha-689539-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m02 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m04_ha-689539-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03:/home/docker/cp-test_ha-689539-m04_ha-689539-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m03 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m04_ha-689539-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-689539 node stop m02 -v=7                                                     | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:34:08
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:34:08.074114  310801 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:34:08.074261  310801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:34:08.074272  310801 out.go:358] Setting ErrFile to fd 2...
	I1205 20:34:08.074277  310801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:34:08.074494  310801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 20:34:08.075118  310801 out.go:352] Setting JSON to false
	I1205 20:34:08.076226  310801 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":11796,"bootTime":1733419052,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:34:08.076305  310801 start.go:139] virtualization: kvm guest
	I1205 20:34:08.078657  310801 out.go:177] * [ha-689539] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:34:08.080623  310801 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 20:34:08.080628  310801 notify.go:220] Checking for updates...
	I1205 20:34:08.083473  310801 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:34:08.084883  310801 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:34:08.086219  310801 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:34:08.087594  310801 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:34:08.088859  310801 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:34:08.090289  310801 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:34:08.128174  310801 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 20:34:08.129457  310801 start.go:297] selected driver: kvm2
	I1205 20:34:08.129474  310801 start.go:901] validating driver "kvm2" against <nil>
	I1205 20:34:08.129492  310801 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:34:08.130313  310801 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:34:08.130391  310801 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:34:08.148061  310801 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:34:08.148119  310801 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 20:34:08.148394  310801 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:34:08.148426  310801 cni.go:84] Creating CNI manager for ""
	I1205 20:34:08.148467  310801 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1205 20:34:08.148479  310801 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 20:34:08.148546  310801 start.go:340] cluster config:
	{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:34:08.148670  310801 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:34:08.150579  310801 out.go:177] * Starting "ha-689539" primary control-plane node in "ha-689539" cluster
	I1205 20:34:08.152101  310801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:34:08.152144  310801 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 20:34:08.152158  310801 cache.go:56] Caching tarball of preloaded images
	I1205 20:34:08.152281  310801 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:34:08.152296  310801 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:34:08.152605  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:34:08.152651  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json: {Name:mk27baab499187c123d1f411d3400f014a73dd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:08.152842  310801 start.go:360] acquireMachinesLock for ha-689539: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:34:08.152881  310801 start.go:364] duration metric: took 21.06µs to acquireMachinesLock for "ha-689539"
	I1205 20:34:08.152908  310801 start.go:93] Provisioning new machine with config: &{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:34:08.152972  310801 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 20:34:08.154751  310801 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 20:34:08.154908  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:08.154972  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:08.170934  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46169
	I1205 20:34:08.171495  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:08.172063  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:08.172087  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:08.172451  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:08.172674  310801 main.go:141] libmachine: (ha-689539) Calling .GetMachineName
	I1205 20:34:08.172837  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:08.172996  310801 start.go:159] libmachine.API.Create for "ha-689539" (driver="kvm2")
	I1205 20:34:08.173045  310801 client.go:168] LocalClient.Create starting
	I1205 20:34:08.173086  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem
	I1205 20:34:08.173121  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:34:08.173139  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:34:08.173198  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem
	I1205 20:34:08.173225  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:34:08.173243  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:34:08.173268  310801 main.go:141] libmachine: Running pre-create checks...
	I1205 20:34:08.173282  310801 main.go:141] libmachine: (ha-689539) Calling .PreCreateCheck
	I1205 20:34:08.173629  310801 main.go:141] libmachine: (ha-689539) Calling .GetConfigRaw
	I1205 20:34:08.174111  310801 main.go:141] libmachine: Creating machine...
	I1205 20:34:08.174129  310801 main.go:141] libmachine: (ha-689539) Calling .Create
	I1205 20:34:08.174265  310801 main.go:141] libmachine: (ha-689539) Creating KVM machine...
	I1205 20:34:08.175744  310801 main.go:141] libmachine: (ha-689539) DBG | found existing default KVM network
	I1205 20:34:08.176445  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:08.176315  310824 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000221330}
	I1205 20:34:08.176491  310801 main.go:141] libmachine: (ha-689539) DBG | created network xml: 
	I1205 20:34:08.176507  310801 main.go:141] libmachine: (ha-689539) DBG | <network>
	I1205 20:34:08.176530  310801 main.go:141] libmachine: (ha-689539) DBG |   <name>mk-ha-689539</name>
	I1205 20:34:08.176545  310801 main.go:141] libmachine: (ha-689539) DBG |   <dns enable='no'/>
	I1205 20:34:08.176564  310801 main.go:141] libmachine: (ha-689539) DBG |   
	I1205 20:34:08.176591  310801 main.go:141] libmachine: (ha-689539) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1205 20:34:08.176606  310801 main.go:141] libmachine: (ha-689539) DBG |     <dhcp>
	I1205 20:34:08.176611  310801 main.go:141] libmachine: (ha-689539) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1205 20:34:08.176616  310801 main.go:141] libmachine: (ha-689539) DBG |     </dhcp>
	I1205 20:34:08.176621  310801 main.go:141] libmachine: (ha-689539) DBG |   </ip>
	I1205 20:34:08.176666  310801 main.go:141] libmachine: (ha-689539) DBG |   
	I1205 20:34:08.176693  310801 main.go:141] libmachine: (ha-689539) DBG | </network>
	I1205 20:34:08.176707  310801 main.go:141] libmachine: (ha-689539) DBG | 
	I1205 20:34:08.181749  310801 main.go:141] libmachine: (ha-689539) DBG | trying to create private KVM network mk-ha-689539 192.168.39.0/24...
	I1205 20:34:08.259729  310801 main.go:141] libmachine: (ha-689539) Setting up store path in /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539 ...
	I1205 20:34:08.259779  310801 main.go:141] libmachine: (ha-689539) DBG | private KVM network mk-ha-689539 192.168.39.0/24 created
	I1205 20:34:08.259792  310801 main.go:141] libmachine: (ha-689539) Building disk image from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 20:34:08.259831  310801 main.go:141] libmachine: (ha-689539) Downloading /home/jenkins/minikube-integration/20053-293485/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 20:34:08.259902  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:08.259565  310824 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:34:08.570701  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:08.570509  310824 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa...
	I1205 20:34:08.656946  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:08.656740  310824 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/ha-689539.rawdisk...
	I1205 20:34:08.656979  310801 main.go:141] libmachine: (ha-689539) DBG | Writing magic tar header
	I1205 20:34:08.656999  310801 main.go:141] libmachine: (ha-689539) DBG | Writing SSH key tar header
	I1205 20:34:08.657012  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:08.656919  310824 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539 ...
	I1205 20:34:08.657032  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539
	I1205 20:34:08.657155  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539 (perms=drwx------)
	I1205 20:34:08.657196  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines
	I1205 20:34:08.657214  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines (perms=drwxr-xr-x)
	I1205 20:34:08.657237  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube (perms=drwxr-xr-x)
	I1205 20:34:08.657251  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485 (perms=drwxrwxr-x)
	I1205 20:34:08.657266  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:34:08.657283  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485
	I1205 20:34:08.657297  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 20:34:08.657313  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins
	I1205 20:34:08.657327  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home
	I1205 20:34:08.657340  310801 main.go:141] libmachine: (ha-689539) DBG | Skipping /home - not owner
	I1205 20:34:08.657354  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 20:34:08.657370  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 20:34:08.657383  310801 main.go:141] libmachine: (ha-689539) Creating domain...
	I1205 20:34:08.658677  310801 main.go:141] libmachine: (ha-689539) define libvirt domain using xml: 
	I1205 20:34:08.658706  310801 main.go:141] libmachine: (ha-689539) <domain type='kvm'>
	I1205 20:34:08.658718  310801 main.go:141] libmachine: (ha-689539)   <name>ha-689539</name>
	I1205 20:34:08.658725  310801 main.go:141] libmachine: (ha-689539)   <memory unit='MiB'>2200</memory>
	I1205 20:34:08.658735  310801 main.go:141] libmachine: (ha-689539)   <vcpu>2</vcpu>
	I1205 20:34:08.658745  310801 main.go:141] libmachine: (ha-689539)   <features>
	I1205 20:34:08.658752  310801 main.go:141] libmachine: (ha-689539)     <acpi/>
	I1205 20:34:08.658759  310801 main.go:141] libmachine: (ha-689539)     <apic/>
	I1205 20:34:08.658767  310801 main.go:141] libmachine: (ha-689539)     <pae/>
	I1205 20:34:08.658787  310801 main.go:141] libmachine: (ha-689539)     
	I1205 20:34:08.658823  310801 main.go:141] libmachine: (ha-689539)   </features>
	I1205 20:34:08.658849  310801 main.go:141] libmachine: (ha-689539)   <cpu mode='host-passthrough'>
	I1205 20:34:08.658858  310801 main.go:141] libmachine: (ha-689539)   
	I1205 20:34:08.658863  310801 main.go:141] libmachine: (ha-689539)   </cpu>
	I1205 20:34:08.658869  310801 main.go:141] libmachine: (ha-689539)   <os>
	I1205 20:34:08.658874  310801 main.go:141] libmachine: (ha-689539)     <type>hvm</type>
	I1205 20:34:08.658880  310801 main.go:141] libmachine: (ha-689539)     <boot dev='cdrom'/>
	I1205 20:34:08.658885  310801 main.go:141] libmachine: (ha-689539)     <boot dev='hd'/>
	I1205 20:34:08.658892  310801 main.go:141] libmachine: (ha-689539)     <bootmenu enable='no'/>
	I1205 20:34:08.658896  310801 main.go:141] libmachine: (ha-689539)   </os>
	I1205 20:34:08.658902  310801 main.go:141] libmachine: (ha-689539)   <devices>
	I1205 20:34:08.658909  310801 main.go:141] libmachine: (ha-689539)     <disk type='file' device='cdrom'>
	I1205 20:34:08.658920  310801 main.go:141] libmachine: (ha-689539)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/boot2docker.iso'/>
	I1205 20:34:08.658932  310801 main.go:141] libmachine: (ha-689539)       <target dev='hdc' bus='scsi'/>
	I1205 20:34:08.658940  310801 main.go:141] libmachine: (ha-689539)       <readonly/>
	I1205 20:34:08.658954  310801 main.go:141] libmachine: (ha-689539)     </disk>
	I1205 20:34:08.658974  310801 main.go:141] libmachine: (ha-689539)     <disk type='file' device='disk'>
	I1205 20:34:08.658987  310801 main.go:141] libmachine: (ha-689539)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 20:34:08.659004  310801 main.go:141] libmachine: (ha-689539)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/ha-689539.rawdisk'/>
	I1205 20:34:08.659016  310801 main.go:141] libmachine: (ha-689539)       <target dev='hda' bus='virtio'/>
	I1205 20:34:08.659054  310801 main.go:141] libmachine: (ha-689539)     </disk>
	I1205 20:34:08.659076  310801 main.go:141] libmachine: (ha-689539)     <interface type='network'>
	I1205 20:34:08.659087  310801 main.go:141] libmachine: (ha-689539)       <source network='mk-ha-689539'/>
	I1205 20:34:08.659094  310801 main.go:141] libmachine: (ha-689539)       <model type='virtio'/>
	I1205 20:34:08.659106  310801 main.go:141] libmachine: (ha-689539)     </interface>
	I1205 20:34:08.659117  310801 main.go:141] libmachine: (ha-689539)     <interface type='network'>
	I1205 20:34:08.659126  310801 main.go:141] libmachine: (ha-689539)       <source network='default'/>
	I1205 20:34:08.659140  310801 main.go:141] libmachine: (ha-689539)       <model type='virtio'/>
	I1205 20:34:08.659151  310801 main.go:141] libmachine: (ha-689539)     </interface>
	I1205 20:34:08.659160  310801 main.go:141] libmachine: (ha-689539)     <serial type='pty'>
	I1205 20:34:08.659167  310801 main.go:141] libmachine: (ha-689539)       <target port='0'/>
	I1205 20:34:08.659176  310801 main.go:141] libmachine: (ha-689539)     </serial>
	I1205 20:34:08.659185  310801 main.go:141] libmachine: (ha-689539)     <console type='pty'>
	I1205 20:34:08.659196  310801 main.go:141] libmachine: (ha-689539)       <target type='serial' port='0'/>
	I1205 20:34:08.659214  310801 main.go:141] libmachine: (ha-689539)     </console>
	I1205 20:34:08.659224  310801 main.go:141] libmachine: (ha-689539)     <rng model='virtio'>
	I1205 20:34:08.659233  310801 main.go:141] libmachine: (ha-689539)       <backend model='random'>/dev/random</backend>
	I1205 20:34:08.659242  310801 main.go:141] libmachine: (ha-689539)     </rng>
	I1205 20:34:08.659248  310801 main.go:141] libmachine: (ha-689539)     
	I1205 20:34:08.659252  310801 main.go:141] libmachine: (ha-689539)     
	I1205 20:34:08.659260  310801 main.go:141] libmachine: (ha-689539)   </devices>
	I1205 20:34:08.659270  310801 main.go:141] libmachine: (ha-689539) </domain>
	I1205 20:34:08.659282  310801 main.go:141] libmachine: (ha-689539) 
	I1205 20:34:08.664073  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:a3:09:de in network default
	I1205 20:34:08.664657  310801 main.go:141] libmachine: (ha-689539) Ensuring networks are active...
	I1205 20:34:08.664680  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:08.665393  310801 main.go:141] libmachine: (ha-689539) Ensuring network default is active
	I1205 20:34:08.665790  310801 main.go:141] libmachine: (ha-689539) Ensuring network mk-ha-689539 is active
	I1205 20:34:08.666343  310801 main.go:141] libmachine: (ha-689539) Getting domain xml...
	I1205 20:34:08.667190  310801 main.go:141] libmachine: (ha-689539) Creating domain...
	I1205 20:34:09.889755  310801 main.go:141] libmachine: (ha-689539) Waiting to get IP...
	I1205 20:34:09.890610  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:09.890981  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:09.891034  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:09.890969  310824 retry.go:31] will retry after 284.885869ms: waiting for machine to come up
	I1205 20:34:10.177621  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:10.178156  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:10.178184  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:10.178109  310824 retry.go:31] will retry after 378.211833ms: waiting for machine to come up
	I1205 20:34:10.557655  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:10.558178  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:10.558212  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:10.558123  310824 retry.go:31] will retry after 473.788163ms: waiting for machine to come up
	I1205 20:34:11.033830  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:11.034246  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:11.034277  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:11.034195  310824 retry.go:31] will retry after 418.138315ms: waiting for machine to come up
	I1205 20:34:11.453849  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:11.454287  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:11.454318  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:11.454229  310824 retry.go:31] will retry after 720.041954ms: waiting for machine to come up
	I1205 20:34:12.176162  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:12.176610  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:12.176635  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:12.176551  310824 retry.go:31] will retry after 769.230458ms: waiting for machine to come up
	I1205 20:34:12.947323  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:12.947645  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:12.947682  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:12.947615  310824 retry.go:31] will retry after 799.111179ms: waiting for machine to come up
	I1205 20:34:13.748171  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:13.748640  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:13.748669  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:13.748592  310824 retry.go:31] will retry after 1.052951937s: waiting for machine to come up
	I1205 20:34:14.802913  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:14.803309  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:14.803340  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:14.803262  310824 retry.go:31] will retry after 1.685899285s: waiting for machine to come up
	I1205 20:34:16.491286  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:16.491828  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:16.491858  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:16.491779  310824 retry.go:31] will retry after 1.722453601s: waiting for machine to come up
	I1205 20:34:18.215846  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:18.216281  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:18.216316  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:18.216229  310824 retry.go:31] will retry after 1.847118783s: waiting for machine to come up
	I1205 20:34:20.066408  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:20.066971  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:20.067002  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:20.066922  310824 retry.go:31] will retry after 2.216585531s: waiting for machine to come up
	I1205 20:34:22.284845  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:22.285380  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:22.285409  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:22.285296  310824 retry.go:31] will retry after 4.35742756s: waiting for machine to come up
	I1205 20:34:26.646498  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:26.646898  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:26.646925  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:26.646863  310824 retry.go:31] will retry after 4.830110521s: waiting for machine to come up
	I1205 20:34:31.481950  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.482551  310801 main.go:141] libmachine: (ha-689539) Found IP for machine: 192.168.39.220
	I1205 20:34:31.482584  310801 main.go:141] libmachine: (ha-689539) Reserving static IP address...
	I1205 20:34:31.482599  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has current primary IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.483029  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find host DHCP lease matching {name: "ha-689539", mac: "52:54:00:92:19:fb", ip: "192.168.39.220"} in network mk-ha-689539
	I1205 20:34:31.565523  310801 main.go:141] libmachine: (ha-689539) Reserved static IP address: 192.168.39.220
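The retry.go lines above poll libvirt's DHCP leases with a growing, roughly jittered delay until the new domain reports an address, then the address is pinned as a static lease. A minimal Go sketch of that retry-with-backoff pattern (the lookupIP helper and the timings are illustrative, not minikube's actual implementation):

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP stands in for the libvirt DHCP-lease query; this placeholder
    // always fails, so the retry loop below runs until its deadline.
    func lookupIP() (string, error) {
    	return "", errors.New("unable to find current IP address")
    }

    // waitForIP retries lookupIP with a jittered, growing delay, mirroring the
    // "will retry after ..." lines in the log above.
    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 500 * time.Millisecond
    	for attempt := 1; ; attempt++ {
    		ip, err := lookupIP()
    		if err == nil {
    			return ip, nil
    		}
    		if time.Now().After(deadline) {
    			return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
    		}
    		wait := delay + time.Duration(rand.Int63n(int64(delay)))
    		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		delay *= 2 // a real helper would cap this; omitted for brevity
    	}
    }

    func main() {
    	if _, err := waitForIP(5 * time.Second); err != nil {
    		fmt.Println(err)
    	}
    }

Each printed wait corresponds to one of the "will retry after ..." entries above; the loop ends as soon as the lease query returns an address.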
	I1205 20:34:31.565552  310801 main.go:141] libmachine: (ha-689539) Waiting for SSH to be available...
	I1205 20:34:31.565561  310801 main.go:141] libmachine: (ha-689539) DBG | Getting to WaitForSSH function...
	I1205 20:34:31.568330  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.568827  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:31.568862  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.568958  310801 main.go:141] libmachine: (ha-689539) DBG | Using SSH client type: external
	I1205 20:34:31.568991  310801 main.go:141] libmachine: (ha-689539) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa (-rw-------)
	I1205 20:34:31.569027  310801 main.go:141] libmachine: (ha-689539) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:34:31.569037  310801 main.go:141] libmachine: (ha-689539) DBG | About to run SSH command:
	I1205 20:34:31.569050  310801 main.go:141] libmachine: (ha-689539) DBG | exit 0
	I1205 20:34:31.694133  310801 main.go:141] libmachine: (ha-689539) DBG | SSH cmd err, output: <nil>: 
	I1205 20:34:31.694455  310801 main.go:141] libmachine: (ha-689539) KVM machine creation complete!
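WaitForSSH here uses the external SSH client: it shells out to the system ssh binary with host-key checking disabled and runs "exit 0" until the guest answers. A rough equivalent with os/exec, reusing the address and key path from the log (a sketch under those assumptions, not minikube's code):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // sshAlive runs `exit 0` on the guest through the external ssh binary,
    // using roughly the options shown in the log above.
    func sshAlive(addr, keyPath string) error {
    	cmd := exec.Command("ssh",
    		"-F", "/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "IdentitiesOnly=yes",
    		"-i", keyPath,
    		"-p", "22",
    		"docker@"+addr,
    		"exit 0")
    	return cmd.Run()
    }

    func main() {
    	// Values below are taken from the log; adjust for your own machine.
    	addr := "192.168.39.220"
    	key := "/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa"
    	for i := 0; i < 10; i++ {
    		if err := sshAlive(addr, key); err == nil {
    			fmt.Println("SSH is available")
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("gave up waiting for SSH")
    }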
	I1205 20:34:31.694719  310801 main.go:141] libmachine: (ha-689539) Calling .GetConfigRaw
	I1205 20:34:31.695354  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:31.695562  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:31.695749  310801 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 20:34:31.695765  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:34:31.697139  310801 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 20:34:31.697166  310801 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 20:34:31.697171  310801 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 20:34:31.697176  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:31.699900  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.700272  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:31.700328  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.700454  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:31.700642  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.700807  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.700983  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:31.701155  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:31.701416  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:31.701430  310801 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 20:34:31.797327  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:34:31.797354  310801 main.go:141] libmachine: Detecting the provisioner...
	I1205 20:34:31.797363  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:31.800489  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.800822  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:31.800853  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.801025  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:31.801240  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.801464  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.801591  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:31.801777  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:31.801991  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:31.802002  310801 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 20:34:31.902674  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 20:34:31.902768  310801 main.go:141] libmachine: found compatible host: buildroot
	I1205 20:34:31.902779  310801 main.go:141] libmachine: Provisioning with buildroot...
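Detecting the provisioner amounts to running "cat /etc/os-release" on the guest and matching the ID/NAME fields; Buildroot is the compatible host here. A small local sketch of that parsing, reading the file directly instead of over SSH:

    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"strings"
    )

    // parseOSRelease reads key=value pairs from an os-release style file.
    func parseOSRelease(path string) (map[string]string, error) {
    	f, err := os.Open(path)
    	if err != nil {
    		return nil, err
    	}
    	defer f.Close()
    	info := map[string]string{}
    	sc := bufio.NewScanner(f)
    	for sc.Scan() {
    		line := strings.TrimSpace(sc.Text())
    		if line == "" || strings.HasPrefix(line, "#") {
    			continue
    		}
    		k, v, ok := strings.Cut(line, "=")
    		if !ok {
    			continue
    		}
    		info[k] = strings.Trim(v, `"`)
    	}
    	return info, sc.Err()
    }

    func main() {
    	info, err := parseOSRelease("/etc/os-release")
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	if info["ID"] == "buildroot" {
    		fmt.Println("found compatible host: buildroot")
    	} else {
    		fmt.Printf("host is %s %s\n", info["NAME"], info["VERSION_ID"])
    	}
    }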
	I1205 20:34:31.902787  310801 main.go:141] libmachine: (ha-689539) Calling .GetMachineName
	I1205 20:34:31.903088  310801 buildroot.go:166] provisioning hostname "ha-689539"
	I1205 20:34:31.903116  310801 main.go:141] libmachine: (ha-689539) Calling .GetMachineName
	I1205 20:34:31.903428  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:31.906237  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.906571  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:31.906599  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.906752  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:31.906940  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.907099  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.907232  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:31.907446  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:31.907634  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:31.907655  310801 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-689539 && echo "ha-689539" | sudo tee /etc/hostname
	I1205 20:34:32.020236  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-689539
	
	I1205 20:34:32.020265  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.023604  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.023912  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.023942  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.024133  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.024345  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.024501  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.024686  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.024863  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:32.025085  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:32.025111  310801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-689539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-689539/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-689539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:34:32.131661  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:34:32.131696  310801 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 20:34:32.131742  310801 buildroot.go:174] setting up certificates
	I1205 20:34:32.131755  310801 provision.go:84] configureAuth start
	I1205 20:34:32.131768  310801 main.go:141] libmachine: (ha-689539) Calling .GetMachineName
	I1205 20:34:32.132088  310801 main.go:141] libmachine: (ha-689539) Calling .GetIP
	I1205 20:34:32.135389  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.135787  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.135825  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.136069  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.138588  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.138916  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.138949  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.139086  310801 provision.go:143] copyHostCerts
	I1205 20:34:32.139123  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:34:32.139178  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 20:34:32.139206  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:34:32.139295  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 20:34:32.139433  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:34:32.139460  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 20:34:32.139468  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:34:32.139515  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 20:34:32.139597  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:34:32.139626  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 20:34:32.139634  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:34:32.139671  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 20:34:32.139758  310801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.ha-689539 san=[127.0.0.1 192.168.39.220 ha-689539 localhost minikube]
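configureAuth issues a server certificate whose subject alternative names cover the loopback address, the machine IP and the hostnames listed above, signed by the shared minikube CA under .minikube/certs. A condensed, in-memory illustration of the same idea with crypto/x509 (this sketch creates a throwaway CA rather than reusing ca.pem/ca-key.pem, and the lifetimes are arbitrary):

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"fmt"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    // main builds a throwaway CA, then issues a server certificate carrying the
    // IP and DNS SANs shown in the log line above, signed by that CA.
    func main() {
    	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(24 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	caCert, _ := x509.ParseCertificate(caDER)

    	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	srvTmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(2),
    		Subject:      pkix.Name{CommonName: "ha-689539"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.220")},
    		DNSNames:     []string{"ha-689539", "localhost", "minikube"},
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    	}
    	srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
    	fmt.Fprintln(os.Stderr, "issued server cert with IP and DNS SANs")
    }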
	I1205 20:34:32.367430  310801 provision.go:177] copyRemoteCerts
	I1205 20:34:32.367531  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:34:32.367565  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.370702  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.371025  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.371063  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.371206  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.371413  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.371586  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.371717  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
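Unlike the earlier external-client check, the "Using SSH client type: native" runs and the sshutil client above stay in-process. A stripped-down command runner in that style, built on golang.org/x/crypto/ssh (host-key verification is disabled to mirror the StrictHostKeyChecking=no behaviour; treat it as a sketch, not minikube's ssh_runner):

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    // runOverSSH opens a session with key-based auth and returns the combined
    // output of one command, roughly what the ssh_runner Run lines above do.
    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		return "", err
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		return "", err
    	}
    	cfg := &ssh.ClientConfig{
    		User:            user,
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // no host-key checking, as in the log
    	}
    	client, err := ssh.Dial("tcp", addr+":22", cfg)
    	if err != nil {
    		return "", err
    	}
    	defer client.Close()
    	sess, err := client.NewSession()
    	if err != nil {
    		return "", err
    	}
    	defer sess.Close()
    	out, err := sess.CombinedOutput(cmd)
    	return string(out), err
    }

    func main() {
    	out, err := runOverSSH("192.168.39.220", "docker",
    		"/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa",
    		"cat /etc/os-release")
    	if err != nil {
    		fmt.Println("ssh failed:", err)
    		return
    	}
    	fmt.Print(out)
    }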
	I1205 20:34:32.452327  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 20:34:32.452426  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 20:34:32.476869  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 20:34:32.476958  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1205 20:34:32.501389  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 20:34:32.501501  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:34:32.525226  310801 provision.go:87] duration metric: took 393.452946ms to configureAuth
	I1205 20:34:32.525267  310801 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:34:32.525488  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:34:32.525609  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.528470  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.528833  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.528864  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.529057  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.529285  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.529497  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.529678  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.529839  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:32.530046  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:32.530066  310801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:34:32.733723  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:34:32.733755  310801 main.go:141] libmachine: Checking connection to Docker...
	I1205 20:34:32.733816  310801 main.go:141] libmachine: (ha-689539) Calling .GetURL
	I1205 20:34:32.735231  310801 main.go:141] libmachine: (ha-689539) DBG | Using libvirt version 6000000
	I1205 20:34:32.737329  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.737769  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.737804  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.737993  310801 main.go:141] libmachine: Docker is up and running!
	I1205 20:34:32.738008  310801 main.go:141] libmachine: Reticulating splines...
	I1205 20:34:32.738015  310801 client.go:171] duration metric: took 24.564959064s to LocalClient.Create
	I1205 20:34:32.738046  310801 start.go:167] duration metric: took 24.565052554s to libmachine.API.Create "ha-689539"
	I1205 20:34:32.738061  310801 start.go:293] postStartSetup for "ha-689539" (driver="kvm2")
	I1205 20:34:32.738073  310801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:34:32.738096  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:32.738400  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:34:32.738433  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.740621  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.740891  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.740921  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.741034  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.741256  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.741431  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.741595  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:32.820810  310801 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:34:32.825193  310801 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:34:32.825227  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 20:34:32.825326  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 20:34:32.825428  310801 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 20:34:32.825442  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /etc/ssl/certs/3007652.pem
	I1205 20:34:32.825556  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:34:32.835549  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:34:32.859405  310801 start.go:296] duration metric: took 121.327589ms for postStartSetup
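postStartSetup scans .minikube/addons and .minikube/files and mirrors whatever it finds into the guest, here a single /etc/ssl/certs/3007652.pem. A sketch of that scan with filepath.WalkDir, mapping each local file to its in-guest destination (the actual copy would go through the same SSH runner as above; the helper name is invented for the sketch):

    package main

    import (
    	"fmt"
    	"io/fs"
    	"path/filepath"
    )

    // localAssets walks root and returns source->destination pairs, where the
    // destination is the path relative to root re-anchored at "/", e.g.
    // <root>/etc/ssl/certs/3007652.pem -> /etc/ssl/certs/3007652.pem.
    func localAssets(root string) (map[string]string, error) {
    	assets := map[string]string{}
    	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
    		if err != nil || d.IsDir() {
    			return err
    		}
    		rel, err := filepath.Rel(root, path)
    		if err != nil {
    			return err
    		}
    		assets[path] = "/" + filepath.ToSlash(rel)
    		return nil
    	})
    	return assets, err
    }

    func main() {
    	root := "/home/jenkins/minikube-integration/20053-293485/.minikube/files"
    	assets, err := localAssets(root)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	for src, dst := range assets {
    		fmt.Printf("%s -> %s\n", src, dst)
    	}
    }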
	I1205 20:34:32.859464  310801 main.go:141] libmachine: (ha-689539) Calling .GetConfigRaw
	I1205 20:34:32.860144  310801 main.go:141] libmachine: (ha-689539) Calling .GetIP
	I1205 20:34:32.862916  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.863271  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.863303  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.863582  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:34:32.863831  310801 start.go:128] duration metric: took 24.710845565s to createHost
	I1205 20:34:32.863871  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.866291  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.866627  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.866656  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.866902  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.867141  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.867419  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.867570  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.867744  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:32.867965  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:32.867993  310801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:34:32.966710  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430872.933221119
	
	I1205 20:34:32.966748  310801 fix.go:216] guest clock: 1733430872.933221119
	I1205 20:34:32.966760  310801 fix.go:229] Guest: 2024-12-05 20:34:32.933221119 +0000 UTC Remote: 2024-12-05 20:34:32.863851557 +0000 UTC m=+24.831728555 (delta=69.369562ms)
	I1205 20:34:32.966789  310801 fix.go:200] guest clock delta is within tolerance: 69.369562ms
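The clock fix-up parses the guest's "date +%s.%N" output, compares it with the host clock, and only resyncs when the delta exceeds a tolerance. A small sketch of that comparison; the parsing helper and the one-second threshold are assumptions for illustration:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseEpoch turns "seconds.nanoseconds" (the output of `date +%s.%N`,
    // which pads the fraction to nine digits) into a time.Time.
    func parseEpoch(s string) (time.Time, error) {
    	secStr, nsecStr, _ := strings.Cut(strings.TrimSpace(s), ".")
    	sec, err := strconv.ParseInt(secStr, 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	nsec := int64(0)
    	if nsecStr != "" {
    		if nsec, err = strconv.ParseInt(nsecStr, 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	const tolerance = time.Second // illustrative threshold
    	guest, err := parseEpoch("1733430872.933221119") // value from the log
    	if err != nil {
    		panic(err)
    	}
    	delta := time.Now().Sub(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
    	}
    }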
	I1205 20:34:32.966794  310801 start.go:83] releasing machines lock for "ha-689539", held for 24.813901478s
	I1205 20:34:32.966815  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:32.967103  310801 main.go:141] libmachine: (ha-689539) Calling .GetIP
	I1205 20:34:32.970285  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.970747  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.970797  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.970954  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:32.971526  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:32.971766  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:32.971872  310801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:34:32.971926  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.972023  310801 ssh_runner.go:195] Run: cat /version.json
	I1205 20:34:32.972052  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.975300  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.975606  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.975666  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.975696  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.975901  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.976142  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.976160  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.976211  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.976432  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.976440  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.976647  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:32.976668  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.976855  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.977003  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:33.059386  310801 ssh_runner.go:195] Run: systemctl --version
	I1205 20:34:33.082247  310801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:34:33.243513  310801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:34:33.249633  310801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:34:33.249718  310801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:34:33.266578  310801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:34:33.266607  310801 start.go:495] detecting cgroup driver to use...
	I1205 20:34:33.266691  310801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:34:33.282457  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:34:33.296831  310801 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:34:33.296976  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:34:33.310872  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:34:33.324245  310801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:34:33.436767  310801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:34:33.589248  310801 docker.go:233] disabling docker service ...
	I1205 20:34:33.589369  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:34:33.604397  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:34:33.617678  310801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:34:33.755936  310801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:34:33.876879  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:34:33.890218  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:34:33.907910  310801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:34:33.907992  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.918057  310801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:34:33.918138  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.928622  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.938873  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.949059  310801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:34:33.959639  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.970025  310801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.986937  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.997151  310801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:34:34.006323  310801 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:34:34.006391  310801 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:34:34.019434  310801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
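The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs as cgroup manager, conmon cgroup, unprivileged-port sysctl) and then makes sure br_netfilter and IP forwarding are on before CRI-O is restarted. A tiny local stand-in for those sed one-liners; the setConfKey helper is illustrative, only the file path and keys come from the log:

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // setConfKey rewrites (or appends) a `key = "value"` line in a TOML-style
    // CRI-O drop-in, mimicking the sed edits shown in the log above.
    func setConfKey(path, key, value string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	re := regexp.MustCompile(`(?m)^\s*` + regexp.QuoteMeta(key) + `\s*=.*$`)
    	line := fmt.Sprintf("%s = %q", key, value)
    	var out []byte
    	if re.Match(data) {
    		out = re.ReplaceAllLiteral(data, []byte(line))
    	} else {
    		out = append(data, []byte("\n"+line+"\n")...)
    	}
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	path := "/etc/crio/crio.conf.d/02-crio.conf"
    	if err := setConfKey(path, "pause_image", "registry.k8s.io/pause:3.10"); err != nil {
    		fmt.Println(err)
    		return
    	}
    	if err := setConfKey(path, "cgroup_manager", "cgroupfs"); err != nil {
    		fmt.Println(err)
    	}
    }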
	I1205 20:34:34.029027  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:34:34.156535  310801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:34:34.246656  310801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:34:34.246735  310801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:34:34.251273  310801 start.go:563] Will wait 60s for crictl version
	I1205 20:34:34.251340  310801 ssh_runner.go:195] Run: which crictl
	I1205 20:34:34.254861  310801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:34:34.290093  310801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:34:34.290181  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:34:34.319140  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:34:34.349724  310801 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:34:34.351134  310801 main.go:141] libmachine: (ha-689539) Calling .GetIP
	I1205 20:34:34.354155  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:34.354477  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:34.354499  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:34.354753  310801 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:34:34.358724  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:34:34.371098  310801 kubeadm.go:883] updating cluster {Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:34:34.371240  310801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:34:34.371296  310801 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:34:34.405312  310801 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:34:34.405419  310801 ssh_runner.go:195] Run: which lz4
	I1205 20:34:34.409438  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1205 20:34:34.409558  310801 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:34:34.413636  310801 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:34:34.413680  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 20:34:35.688964  310801 crio.go:462] duration metric: took 1.279440398s to copy over tarball
	I1205 20:34:35.689045  310801 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:34:37.772729  310801 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.083628711s)
	I1205 20:34:37.772773  310801 crio.go:469] duration metric: took 2.083775707s to extract the tarball
	I1205 20:34:37.772784  310801 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:34:37.810322  310801 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:34:37.853195  310801 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:34:37.853229  310801 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:34:37.853239  310801 kubeadm.go:934] updating node { 192.168.39.220 8443 v1.31.2 crio true true} ...
	I1205 20:34:37.853389  310801 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-689539 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:34:37.853483  310801 ssh_runner.go:195] Run: crio config
	I1205 20:34:37.904941  310801 cni.go:84] Creating CNI manager for ""
	I1205 20:34:37.904967  310801 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 20:34:37.904981  310801 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:34:37.905015  310801 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-689539 NodeName:ha-689539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:34:37.905154  310801 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-689539"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.220"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
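The kubeadm config above is rendered from a Go template with the node name, IPs, versions and subnets substituted in before it is copied to /var/tmp/minikube/kubeadm.yaml.new. A toy version of that rendering with text/template; the Params struct and the trimmed template are invented for the sketch and cover only a few of the fields shown:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Params holds the handful of values substituted into the kubeadm template.
    // The field names are invented for this sketch.
    type Params struct {
    	NodeName, NodeIP, ControlPlaneEndpoint, KubernetesVersion, PodSubnet string
    }

    const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: 8443
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:8443
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
    `

    func main() {
    	p := Params{
    		NodeName:             "ha-689539",
    		NodeIP:               "192.168.39.220",
    		ControlPlaneEndpoint: "control-plane.minikube.internal",
    		KubernetesVersion:    "v1.31.2",
    		PodSubnet:            "10.244.0.0/16",
    	}
    	tmpl := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
    	if err := tmpl.Execute(os.Stdout, p); err != nil {
    		panic(err)
    	}
    }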
	
	I1205 20:34:37.905183  310801 kube-vip.go:115] generating kube-vip config ...
	I1205 20:34:37.905229  310801 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 20:34:37.920877  310801 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 20:34:37.921012  310801 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I1205 20:34:37.921087  310801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:34:37.930861  310801 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:34:37.930952  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1205 20:34:37.940283  310801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1205 20:34:37.956877  310801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:34:37.973504  310801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1205 20:34:37.990145  310801 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1205 20:34:38.006265  310801 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 20:34:38.010189  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:34:38.022257  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:34:38.140067  310801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:34:38.157890  310801 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539 for IP: 192.168.39.220
	I1205 20:34:38.157932  310801 certs.go:194] generating shared ca certs ...
	I1205 20:34:38.157956  310801 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.158149  310801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 20:34:38.158208  310801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 20:34:38.158222  310801 certs.go:256] generating profile certs ...
	I1205 20:34:38.158295  310801 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key
	I1205 20:34:38.158314  310801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt with IP's: []
	I1205 20:34:38.310974  310801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt ...
	I1205 20:34:38.311018  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt: {Name:mkf3aecb8b9ad227608c6977c2ad30cfc55949b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.311241  310801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key ...
	I1205 20:34:38.311266  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key: {Name:mkfab3a0d79e1baa864757b84edfb7968d976df8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.311382  310801 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.4e36e772
	I1205 20:34:38.311402  310801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.4e36e772 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220 192.168.39.254]
	I1205 20:34:38.414671  310801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.4e36e772 ...
	I1205 20:34:38.414714  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.4e36e772: {Name:mkc29737ec8270e2af482fa3e0afb3df1551e296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.414925  310801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.4e36e772 ...
	I1205 20:34:38.414944  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.4e36e772: {Name:mk5a1762b7078753229c19ae4d408dd983181bad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.415108  310801 certs.go:381] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.4e36e772 -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt
	I1205 20:34:38.415228  310801 certs.go:385] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.4e36e772 -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key
	I1205 20:34:38.415320  310801 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key
	I1205 20:34:38.415337  310801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt with IP's: []
	I1205 20:34:38.595265  310801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt ...
	I1205 20:34:38.595307  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt: {Name:mke4b60d010e9a42985a4147d8ca20fd58cfe926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.595513  310801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key ...
	I1205 20:34:38.595526  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key: {Name:mkc40847c87fbb64accdbdfed18b0a1220dd4fb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.595607  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:34:38.595627  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:34:38.595641  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:34:38.595656  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:34:38.595671  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 20:34:38.595687  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 20:34:38.595702  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 20:34:38.595721  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 20:34:38.595781  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 20:34:38.595820  310801 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 20:34:38.595832  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:34:38.595867  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 20:34:38.595927  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:34:38.595965  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 20:34:38.596013  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:34:38.596047  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:34:38.596065  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem -> /usr/share/ca-certificates/300765.pem
	I1205 20:34:38.596080  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /usr/share/ca-certificates/3007652.pem
	I1205 20:34:38.596679  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:34:38.621836  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:34:38.645971  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:34:38.669572  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:34:38.692394  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 20:34:38.714950  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:34:38.737673  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:34:38.760143  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:34:38.782837  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:34:38.804959  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 20:34:38.827699  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 20:34:38.850292  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:34:38.866443  310801 ssh_runner.go:195] Run: openssl version
	I1205 20:34:38.872267  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:34:38.883530  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:34:38.887895  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:34:38.887977  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:34:38.893617  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:34:38.906999  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 20:34:38.918595  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 20:34:38.924117  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 20:34:38.924185  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 20:34:38.932047  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 20:34:38.945495  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 20:34:38.961962  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 20:34:38.966385  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 20:34:38.966443  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 20:34:38.971854  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:34:38.983000  310801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:34:38.987127  310801 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 20:34:38.987198  310801 kubeadm.go:392] StartCluster: {Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:34:38.987278  310801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:34:38.987360  310801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:34:39.023266  310801 cri.go:89] found id: ""
	I1205 20:34:39.023363  310801 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:34:39.033877  310801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:34:39.044224  310801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:34:39.054571  310801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:34:39.054597  310801 kubeadm.go:157] found existing configuration files:
	
	I1205 20:34:39.054653  310801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:34:39.064431  310801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:34:39.064513  310801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:34:39.074366  310801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:34:39.083912  310801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:34:39.083984  310801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:34:39.093938  310801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:34:39.103398  310801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:34:39.103465  310801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:34:39.113094  310801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:34:39.122507  310801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:34:39.122597  310801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:34:39.132005  310801 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:34:39.228908  310801 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 20:34:39.229049  310801 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:34:39.329735  310801 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:34:39.329925  310801 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:34:39.330069  310801 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 20:34:39.340103  310801 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:34:39.373910  310801 out.go:235]   - Generating certificates and keys ...
	I1205 20:34:39.374072  310801 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:34:39.374147  310801 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:34:39.462096  310801 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 20:34:39.625431  310801 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 20:34:39.899737  310801 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 20:34:40.026923  310801 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 20:34:40.326605  310801 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 20:34:40.326736  310801 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-689539 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I1205 20:34:40.487273  310801 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 20:34:40.487463  310801 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-689539 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I1205 20:34:41.025029  310801 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 20:34:41.081102  310801 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 20:34:41.372777  310801 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 20:34:41.372851  310801 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:34:41.470469  310801 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:34:41.550016  310801 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:34:41.829563  310801 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:34:41.903888  310801 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:34:42.075688  310801 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:34:42.076191  310801 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:34:42.079642  310801 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:34:42.116791  310801 out.go:235]   - Booting up control plane ...
	I1205 20:34:42.116956  310801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:34:42.117092  310801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:34:42.117208  310801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:34:42.117347  310801 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:34:42.117444  310801 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:34:42.117492  310801 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:34:42.242074  310801 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 20:34:42.242211  310801 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 20:34:42.743099  310801 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.406858ms
	I1205 20:34:42.743201  310801 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 20:34:48.715396  310801 kubeadm.go:310] [api-check] The API server is healthy after 5.976028105s
	I1205 20:34:48.727254  310801 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:34:48.744015  310801 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:34:49.271812  310801 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:34:49.272046  310801 kubeadm.go:310] [mark-control-plane] Marking the node ha-689539 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:34:49.283178  310801 kubeadm.go:310] [bootstrap-token] Using token: ynd0vv.39hctrjjdwln7xrk
	I1205 20:34:49.284635  310801 out.go:235]   - Configuring RBAC rules ...
	I1205 20:34:49.284805  310801 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:34:49.298869  310801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:34:49.307342  310801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:34:49.311034  310801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:34:49.314220  310801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:34:49.318275  310801 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:34:49.336336  310801 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:34:49.603608  310801 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 20:34:50.123229  310801 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 20:34:50.123255  310801 kubeadm.go:310] 
	I1205 20:34:50.123360  310801 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 20:34:50.123388  310801 kubeadm.go:310] 
	I1205 20:34:50.123496  310801 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 20:34:50.123533  310801 kubeadm.go:310] 
	I1205 20:34:50.123584  310801 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 20:34:50.123672  310801 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:34:50.123755  310801 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:34:50.123771  310801 kubeadm.go:310] 
	I1205 20:34:50.123856  310801 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 20:34:50.123868  310801 kubeadm.go:310] 
	I1205 20:34:50.123942  310801 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:34:50.123957  310801 kubeadm.go:310] 
	I1205 20:34:50.124045  310801 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 20:34:50.124156  310801 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:34:50.124256  310801 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:34:50.124269  310801 kubeadm.go:310] 
	I1205 20:34:50.124397  310801 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:34:50.124510  310801 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 20:34:50.124522  310801 kubeadm.go:310] 
	I1205 20:34:50.124645  310801 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ynd0vv.39hctrjjdwln7xrk \
	I1205 20:34:50.124778  310801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 \
	I1205 20:34:50.124879  310801 kubeadm.go:310] 	--control-plane 
	I1205 20:34:50.124896  310801 kubeadm.go:310] 
	I1205 20:34:50.125023  310801 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:34:50.125040  310801 kubeadm.go:310] 
	I1205 20:34:50.125138  310801 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ynd0vv.39hctrjjdwln7xrk \
	I1205 20:34:50.125303  310801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 
	I1205 20:34:50.125442  310801 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 20:34:50.125462  310801 cni.go:84] Creating CNI manager for ""
	I1205 20:34:50.125470  310801 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 20:34:50.127293  310801 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1205 20:34:50.128597  310801 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 20:34:50.133712  310801 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1205 20:34:50.133735  310801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1205 20:34:50.151910  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 20:34:50.498891  310801 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:34:50.498983  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:50.498995  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-689539 minikube.k8s.io/updated_at=2024_12_05T20_34_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=ha-689539 minikube.k8s.io/primary=true
	I1205 20:34:50.513638  310801 ops.go:34] apiserver oom_adj: -16
	I1205 20:34:50.590747  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:51.091486  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:51.591491  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:52.091553  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:52.591289  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:53.091686  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:53.194917  310801 kubeadm.go:1113] duration metric: took 2.696013148s to wait for elevateKubeSystemPrivileges
	I1205 20:34:53.194977  310801 kubeadm.go:394] duration metric: took 14.207781964s to StartCluster
	I1205 20:34:53.195006  310801 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:53.195117  310801 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:34:53.198426  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:53.198793  310801 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:34:53.198831  310801 start.go:241] waiting for startup goroutines ...
	I1205 20:34:53.198863  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:34:53.198850  310801 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:34:53.198946  310801 addons.go:69] Setting storage-provisioner=true in profile "ha-689539"
	I1205 20:34:53.198964  310801 addons.go:69] Setting default-storageclass=true in profile "ha-689539"
	I1205 20:34:53.198979  310801 addons.go:234] Setting addon storage-provisioner=true in "ha-689539"
	I1205 20:34:53.198988  310801 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-689539"
	I1205 20:34:53.199021  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:34:53.199090  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:34:53.199551  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.199570  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.199599  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:53.199609  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:53.215764  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42471
	I1205 20:34:53.216062  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38369
	I1205 20:34:53.216436  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:53.216527  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:53.217017  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:53.217050  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:53.217168  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:53.217198  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:53.217403  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:53.217563  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:53.217568  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:34:53.218173  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.218228  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:53.219954  310801 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:34:53.220226  310801 kapi.go:59] client config for ha-689539: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt", KeyFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key", CAFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:34:53.220737  310801 cert_rotation.go:140] Starting client certificate rotation controller
	I1205 20:34:53.220963  310801 addons.go:234] Setting addon default-storageclass=true in "ha-689539"
	I1205 20:34:53.221000  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:34:53.221268  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.221303  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:53.235358  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I1205 20:34:53.235938  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:53.236563  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:53.236595  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:53.236975  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:53.237206  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:34:53.237645  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41495
	I1205 20:34:53.238195  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:53.238727  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:53.238753  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:53.239124  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:53.239183  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:53.239643  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.239697  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:53.241617  310801 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:34:53.243036  310801 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:34:53.243058  310801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:34:53.243080  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:53.247044  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:53.247514  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:53.247542  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:53.247718  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:53.248011  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:53.248218  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:53.248413  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:53.257997  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I1205 20:34:53.258521  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:53.259183  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:53.259218  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:53.259691  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:53.259961  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:34:53.262068  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:53.262345  310801 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:34:53.262363  310801 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:34:53.262386  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:53.265363  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:53.265818  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:53.265848  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:53.266018  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:53.266213  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:53.266327  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:53.266435  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:53.311906  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 20:34:53.428778  310801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:34:53.457287  310801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:34:53.655441  310801 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I1205 20:34:53.958432  310801 main.go:141] libmachine: Making call to close driver server
	I1205 20:34:53.958460  310801 main.go:141] libmachine: (ha-689539) Calling .Close
	I1205 20:34:53.958502  310801 main.go:141] libmachine: Making call to close driver server
	I1205 20:34:53.958541  310801 main.go:141] libmachine: (ha-689539) Calling .Close
	I1205 20:34:53.958824  310801 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:34:53.958842  310801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:34:53.958852  310801 main.go:141] libmachine: Making call to close driver server
	I1205 20:34:53.958860  310801 main.go:141] libmachine: (ha-689539) Calling .Close
	I1205 20:34:53.958920  310801 main.go:141] libmachine: (ha-689539) DBG | Closing plugin on server side
	I1205 20:34:53.958929  310801 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:34:53.958944  310801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:34:53.958951  310801 main.go:141] libmachine: Making call to close driver server
	I1205 20:34:53.958957  310801 main.go:141] libmachine: (ha-689539) Calling .Close
	I1205 20:34:53.959133  310801 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:34:53.959149  310801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:34:53.959214  310801 main.go:141] libmachine: (ha-689539) DBG | Closing plugin on server side
	I1205 20:34:53.959271  310801 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:34:53.959300  310801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:34:53.959388  310801 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 20:34:53.959421  310801 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 20:34:53.959540  310801 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1205 20:34:53.959549  310801 round_trippers.go:469] Request Headers:
	I1205 20:34:53.959559  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:34:53.959569  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:34:53.981877  310801 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I1205 20:34:53.982523  310801 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1205 20:34:53.982543  310801 round_trippers.go:469] Request Headers:
	I1205 20:34:53.982553  310801 round_trippers.go:473]     Content-Type: application/json
	I1205 20:34:53.982558  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:34:53.982562  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:34:53.985387  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:34:53.985542  310801 main.go:141] libmachine: Making call to close driver server
	I1205 20:34:53.985554  310801 main.go:141] libmachine: (ha-689539) Calling .Close
	I1205 20:34:53.985883  310801 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:34:53.985918  310801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:34:53.985939  310801 main.go:141] libmachine: (ha-689539) DBG | Closing plugin on server side
	I1205 20:34:53.987986  310801 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1205 20:34:53.989183  310801 addons.go:510] duration metric: took 790.33722ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1205 20:34:53.989228  310801 start.go:246] waiting for cluster config update ...
	I1205 20:34:53.989258  310801 start.go:255] writing updated cluster config ...
	I1205 20:34:53.991007  310801 out.go:201] 
	I1205 20:34:53.992546  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:34:53.992653  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:34:53.994377  310801 out.go:177] * Starting "ha-689539-m02" control-plane node in "ha-689539" cluster
	I1205 20:34:53.995700  310801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:34:53.995727  310801 cache.go:56] Caching tarball of preloaded images
	I1205 20:34:53.995849  310801 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:34:53.995862  310801 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:34:53.995934  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:34:53.996107  310801 start.go:360] acquireMachinesLock for ha-689539-m02: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:34:53.996153  310801 start.go:364] duration metric: took 23.521µs to acquireMachinesLock for "ha-689539-m02"
	I1205 20:34:53.996172  310801 start.go:93] Provisioning new machine with config: &{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:34:53.996237  310801 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1205 20:34:53.998557  310801 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 20:34:53.998670  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.998722  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:54.015008  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43505
	I1205 20:34:54.015521  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:54.016066  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:54.016091  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:54.016507  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:54.016709  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetMachineName
	I1205 20:34:54.016933  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:34:54.017199  310801 start.go:159] libmachine.API.Create for "ha-689539" (driver="kvm2")
	I1205 20:34:54.017236  310801 client.go:168] LocalClient.Create starting
	I1205 20:34:54.017303  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem
	I1205 20:34:54.017352  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:34:54.017375  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:34:54.017449  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem
	I1205 20:34:54.017479  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:34:54.017495  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:34:54.017521  310801 main.go:141] libmachine: Running pre-create checks...
	I1205 20:34:54.017533  310801 main.go:141] libmachine: (ha-689539-m02) Calling .PreCreateCheck
	I1205 20:34:54.017789  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetConfigRaw
	I1205 20:34:54.018296  310801 main.go:141] libmachine: Creating machine...
	I1205 20:34:54.018313  310801 main.go:141] libmachine: (ha-689539-m02) Calling .Create
	I1205 20:34:54.018519  310801 main.go:141] libmachine: (ha-689539-m02) Creating KVM machine...
	I1205 20:34:54.019903  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found existing default KVM network
	I1205 20:34:54.020058  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found existing private KVM network mk-ha-689539
	I1205 20:34:54.020167  310801 main.go:141] libmachine: (ha-689539-m02) Setting up store path in /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02 ...
	I1205 20:34:54.020190  310801 main.go:141] libmachine: (ha-689539-m02) Building disk image from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 20:34:54.020273  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:54.020159  311180 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:34:54.020403  310801 main.go:141] libmachine: (ha-689539-m02) Downloading /home/jenkins/minikube-integration/20053-293485/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 20:34:54.317847  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:54.317662  311180 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa...
	I1205 20:34:54.529086  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:54.528946  311180 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/ha-689539-m02.rawdisk...
	I1205 20:34:54.529124  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Writing magic tar header
	I1205 20:34:54.529140  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Writing SSH key tar header
	I1205 20:34:54.529158  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:54.529070  311180 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02 ...
	I1205 20:34:54.529265  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02
	I1205 20:34:54.529295  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02 (perms=drwx------)
	I1205 20:34:54.529308  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines
	I1205 20:34:54.529327  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:34:54.529337  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485
	I1205 20:34:54.529349  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 20:34:54.529360  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins
	I1205 20:34:54.529372  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines (perms=drwxr-xr-x)
	I1205 20:34:54.529383  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home
	I1205 20:34:54.529398  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube (perms=drwxr-xr-x)
	I1205 20:34:54.529416  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485 (perms=drwxrwxr-x)
	I1205 20:34:54.529429  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 20:34:54.529443  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 20:34:54.529454  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Skipping /home - not owner
	I1205 20:34:54.529461  310801 main.go:141] libmachine: (ha-689539-m02) Creating domain...
	I1205 20:34:54.530562  310801 main.go:141] libmachine: (ha-689539-m02) define libvirt domain using xml: 
	I1205 20:34:54.530603  310801 main.go:141] libmachine: (ha-689539-m02) <domain type='kvm'>
	I1205 20:34:54.530622  310801 main.go:141] libmachine: (ha-689539-m02)   <name>ha-689539-m02</name>
	I1205 20:34:54.530636  310801 main.go:141] libmachine: (ha-689539-m02)   <memory unit='MiB'>2200</memory>
	I1205 20:34:54.530645  310801 main.go:141] libmachine: (ha-689539-m02)   <vcpu>2</vcpu>
	I1205 20:34:54.530652  310801 main.go:141] libmachine: (ha-689539-m02)   <features>
	I1205 20:34:54.530662  310801 main.go:141] libmachine: (ha-689539-m02)     <acpi/>
	I1205 20:34:54.530667  310801 main.go:141] libmachine: (ha-689539-m02)     <apic/>
	I1205 20:34:54.530672  310801 main.go:141] libmachine: (ha-689539-m02)     <pae/>
	I1205 20:34:54.530676  310801 main.go:141] libmachine: (ha-689539-m02)     
	I1205 20:34:54.530682  310801 main.go:141] libmachine: (ha-689539-m02)   </features>
	I1205 20:34:54.530687  310801 main.go:141] libmachine: (ha-689539-m02)   <cpu mode='host-passthrough'>
	I1205 20:34:54.530691  310801 main.go:141] libmachine: (ha-689539-m02)   
	I1205 20:34:54.530700  310801 main.go:141] libmachine: (ha-689539-m02)   </cpu>
	I1205 20:34:54.530705  310801 main.go:141] libmachine: (ha-689539-m02)   <os>
	I1205 20:34:54.530714  310801 main.go:141] libmachine: (ha-689539-m02)     <type>hvm</type>
	I1205 20:34:54.530720  310801 main.go:141] libmachine: (ha-689539-m02)     <boot dev='cdrom'/>
	I1205 20:34:54.530727  310801 main.go:141] libmachine: (ha-689539-m02)     <boot dev='hd'/>
	I1205 20:34:54.530733  310801 main.go:141] libmachine: (ha-689539-m02)     <bootmenu enable='no'/>
	I1205 20:34:54.530737  310801 main.go:141] libmachine: (ha-689539-m02)   </os>
	I1205 20:34:54.530742  310801 main.go:141] libmachine: (ha-689539-m02)   <devices>
	I1205 20:34:54.530747  310801 main.go:141] libmachine: (ha-689539-m02)     <disk type='file' device='cdrom'>
	I1205 20:34:54.530762  310801 main.go:141] libmachine: (ha-689539-m02)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/boot2docker.iso'/>
	I1205 20:34:54.530777  310801 main.go:141] libmachine: (ha-689539-m02)       <target dev='hdc' bus='scsi'/>
	I1205 20:34:54.530792  310801 main.go:141] libmachine: (ha-689539-m02)       <readonly/>
	I1205 20:34:54.530801  310801 main.go:141] libmachine: (ha-689539-m02)     </disk>
	I1205 20:34:54.530835  310801 main.go:141] libmachine: (ha-689539-m02)     <disk type='file' device='disk'>
	I1205 20:34:54.530866  310801 main.go:141] libmachine: (ha-689539-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 20:34:54.530888  310801 main.go:141] libmachine: (ha-689539-m02)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/ha-689539-m02.rawdisk'/>
	I1205 20:34:54.530900  310801 main.go:141] libmachine: (ha-689539-m02)       <target dev='hda' bus='virtio'/>
	I1205 20:34:54.530910  310801 main.go:141] libmachine: (ha-689539-m02)     </disk>
	I1205 20:34:54.530920  310801 main.go:141] libmachine: (ha-689539-m02)     <interface type='network'>
	I1205 20:34:54.530930  310801 main.go:141] libmachine: (ha-689539-m02)       <source network='mk-ha-689539'/>
	I1205 20:34:54.530940  310801 main.go:141] libmachine: (ha-689539-m02)       <model type='virtio'/>
	I1205 20:34:54.530948  310801 main.go:141] libmachine: (ha-689539-m02)     </interface>
	I1205 20:34:54.530963  310801 main.go:141] libmachine: (ha-689539-m02)     <interface type='network'>
	I1205 20:34:54.531000  310801 main.go:141] libmachine: (ha-689539-m02)       <source network='default'/>
	I1205 20:34:54.531021  310801 main.go:141] libmachine: (ha-689539-m02)       <model type='virtio'/>
	I1205 20:34:54.531046  310801 main.go:141] libmachine: (ha-689539-m02)     </interface>
	I1205 20:34:54.531060  310801 main.go:141] libmachine: (ha-689539-m02)     <serial type='pty'>
	I1205 20:34:54.531070  310801 main.go:141] libmachine: (ha-689539-m02)       <target port='0'/>
	I1205 20:34:54.531080  310801 main.go:141] libmachine: (ha-689539-m02)     </serial>
	I1205 20:34:54.531092  310801 main.go:141] libmachine: (ha-689539-m02)     <console type='pty'>
	I1205 20:34:54.531101  310801 main.go:141] libmachine: (ha-689539-m02)       <target type='serial' port='0'/>
	I1205 20:34:54.531113  310801 main.go:141] libmachine: (ha-689539-m02)     </console>
	I1205 20:34:54.531124  310801 main.go:141] libmachine: (ha-689539-m02)     <rng model='virtio'>
	I1205 20:34:54.531149  310801 main.go:141] libmachine: (ha-689539-m02)       <backend model='random'>/dev/random</backend>
	I1205 20:34:54.531171  310801 main.go:141] libmachine: (ha-689539-m02)     </rng>
	I1205 20:34:54.531193  310801 main.go:141] libmachine: (ha-689539-m02)     
	I1205 20:34:54.531210  310801 main.go:141] libmachine: (ha-689539-m02)     
	I1205 20:34:54.531219  310801 main.go:141] libmachine: (ha-689539-m02)   </devices>
	I1205 20:34:54.531228  310801 main.go:141] libmachine: (ha-689539-m02) </domain>
	I1205 20:34:54.531253  310801 main.go:141] libmachine: (ha-689539-m02) 
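
For context on the define-then-start flow logged above, here is a minimal Go sketch assuming the libvirt.org/go/libvirt bindings (the kvm2 driver drives libvirt under the hood); the domain name and the trimmed XML are illustrative placeholders, not the driver's actual template:

    package main

    import (
    	"log"

    	libvirt "libvirt.org/go/libvirt"
    )

    // Illustrative, abbreviated domain XML; the full template logged above
    // also declares disks, interfaces, serial console and RNG devices.
    const domainXML = `<domain type='kvm'>
      <name>example-m02</name>
      <memory unit='MiB'>2200</memory>
      <vcpu>2</vcpu>
      <os><type>hvm</type></os>
    </domain>`

    func main() {
    	conn, err := libvirt.NewConnect("qemu:///system")
    	if err != nil {
    		log.Fatalf("connect to libvirt: %v", err)
    	}
    	defer conn.Close()

    	// Define the domain from XML, then boot it — the "define libvirt
    	// domain using xml" and "Creating domain..." steps in the log.
    	dom, err := conn.DomainDefineXML(domainXML)
    	if err != nil {
    		log.Fatalf("define domain: %v", err)
    	}
    	defer dom.Free()

    	if err := dom.Create(); err != nil {
    		log.Fatalf("start domain: %v", err)
    	}
    	log.Println("domain defined and started")
    }
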
	I1205 20:34:54.538318  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:db:6c:41 in network default
	I1205 20:34:54.538874  310801 main.go:141] libmachine: (ha-689539-m02) Ensuring networks are active...
	I1205 20:34:54.538905  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:54.539900  310801 main.go:141] libmachine: (ha-689539-m02) Ensuring network default is active
	I1205 20:34:54.540256  310801 main.go:141] libmachine: (ha-689539-m02) Ensuring network mk-ha-689539 is active
	I1205 20:34:54.540685  310801 main.go:141] libmachine: (ha-689539-m02) Getting domain xml...
	I1205 20:34:54.541702  310801 main.go:141] libmachine: (ha-689539-m02) Creating domain...
	I1205 20:34:55.795769  310801 main.go:141] libmachine: (ha-689539-m02) Waiting to get IP...
	I1205 20:34:55.796704  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:55.797107  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:55.797137  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:55.797080  311180 retry.go:31] will retry after 248.666925ms: waiting for machine to come up
	I1205 20:34:56.047775  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:56.048308  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:56.048345  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:56.048228  311180 retry.go:31] will retry after 275.164049ms: waiting for machine to come up
	I1205 20:34:56.324858  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:56.325265  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:56.325293  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:56.325230  311180 retry.go:31] will retry after 471.642082ms: waiting for machine to come up
	I1205 20:34:56.798901  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:56.799411  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:56.799445  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:56.799337  311180 retry.go:31] will retry after 372.986986ms: waiting for machine to come up
	I1205 20:34:57.173842  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:57.174284  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:57.174315  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:57.174243  311180 retry.go:31] will retry after 491.328215ms: waiting for machine to come up
	I1205 20:34:57.666917  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:57.667363  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:57.667388  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:57.667340  311180 retry.go:31] will retry after 701.698041ms: waiting for machine to come up
	I1205 20:34:58.370293  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:58.370782  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:58.370813  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:58.370725  311180 retry.go:31] will retry after 750.048133ms: waiting for machine to come up
	I1205 20:34:59.121998  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:59.122452  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:59.122482  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:59.122416  311180 retry.go:31] will retry after 1.373917427s: waiting for machine to come up
	I1205 20:35:00.498001  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:00.498527  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:00.498564  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:00.498461  311180 retry.go:31] will retry after 1.273603268s: waiting for machine to come up
	I1205 20:35:01.773536  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:01.774024  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:01.774055  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:01.773976  311180 retry.go:31] will retry after 1.863052543s: waiting for machine to come up
	I1205 20:35:03.640228  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:03.640744  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:03.640780  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:03.640681  311180 retry.go:31] will retry after 2.126872214s: waiting for machine to come up
	I1205 20:35:05.768939  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:05.769465  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:05.769495  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:05.769419  311180 retry.go:31] will retry after 2.492593838s: waiting for machine to come up
	I1205 20:35:08.265013  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:08.265518  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:08.265557  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:08.265445  311180 retry.go:31] will retry after 4.136586499s: waiting for machine to come up
	I1205 20:35:12.405674  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:12.406165  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:12.406195  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:12.406099  311180 retry.go:31] will retry after 4.175170751s: waiting for machine to come up
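
The "will retry after ..." lines above are a backoff loop polling the DHCP lease for the new machine's IP. A minimal standard-library sketch of that pattern, with lookupIP standing in for the lease query and the delays only roughly matching the 248ms-to-4s progression in the log:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 250 * time.Millisecond
    	for time.Now().Before(deadline) {
    		if ip, err := lookupIP(); err == nil && ip != "" {
    			return ip, nil
    		}
    		// Sleep with a little jitter, then grow the delay, capped at 4s.
    		time.Sleep(delay + time.Duration(rand.Int63n(int64(delay/2))))
    		if delay < 4*time.Second {
    			delay *= 2
    		}
    	}
    	return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
    	start := time.Now()
    	ip, err := waitForIP(func() (string, error) {
    		// Stand-in lease query: no address for the first few seconds.
    		if time.Since(start) < 3*time.Second {
    			return "", errors.New("no DHCP lease yet")
    		}
    		return "192.168.39.224", nil
    	}, time.Minute)
    	fmt.Println(ip, err)
    }
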
	I1205 20:35:16.583008  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:16.583448  310801 main.go:141] libmachine: (ha-689539-m02) Found IP for machine: 192.168.39.224
	I1205 20:35:16.583483  310801 main.go:141] libmachine: (ha-689539-m02) Reserving static IP address...
	I1205 20:35:16.583508  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has current primary IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:16.583773  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find host DHCP lease matching {name: "ha-689539-m02", mac: "52:54:00:01:ca:45", ip: "192.168.39.224"} in network mk-ha-689539
	I1205 20:35:16.666774  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Getting to WaitForSSH function...
	I1205 20:35:16.666819  310801 main.go:141] libmachine: (ha-689539-m02) Reserved static IP address: 192.168.39.224
	I1205 20:35:16.666833  310801 main.go:141] libmachine: (ha-689539-m02) Waiting for SSH to be available...
	I1205 20:35:16.669680  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:16.670217  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539
	I1205 20:35:16.670248  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find defined IP address of network mk-ha-689539 interface with MAC address 52:54:00:01:ca:45
	I1205 20:35:16.670412  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Using SSH client type: external
	I1205 20:35:16.670440  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa (-rw-------)
	I1205 20:35:16.670473  310801 main.go:141] libmachine: (ha-689539-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:35:16.670490  310801 main.go:141] libmachine: (ha-689539-m02) DBG | About to run SSH command:
	I1205 20:35:16.670506  310801 main.go:141] libmachine: (ha-689539-m02) DBG | exit 0
	I1205 20:35:16.675197  310801 main.go:141] libmachine: (ha-689539-m02) DBG | SSH cmd err, output: exit status 255: 
	I1205 20:35:16.675236  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1205 20:35:16.675246  310801 main.go:141] libmachine: (ha-689539-m02) DBG | command : exit 0
	I1205 20:35:16.675253  310801 main.go:141] libmachine: (ha-689539-m02) DBG | err     : exit status 255
	I1205 20:35:16.675269  310801 main.go:141] libmachine: (ha-689539-m02) DBG | output  : 
	I1205 20:35:19.675465  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Getting to WaitForSSH function...
	I1205 20:35:19.678124  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.678615  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:19.678646  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.678752  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Using SSH client type: external
	I1205 20:35:19.678781  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa (-rw-------)
	I1205 20:35:19.678817  310801 main.go:141] libmachine: (ha-689539-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.224 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:35:19.678840  310801 main.go:141] libmachine: (ha-689539-m02) DBG | About to run SSH command:
	I1205 20:35:19.678857  310801 main.go:141] libmachine: (ha-689539-m02) DBG | exit 0
	I1205 20:35:19.805836  310801 main.go:141] libmachine: (ha-689539-m02) DBG | SSH cmd err, output: <nil>: 
	I1205 20:35:19.806152  310801 main.go:141] libmachine: (ha-689539-m02) KVM machine creation complete!
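
The WaitForSSH phase above probes the guest by running `exit 0` through the external ssh client until it returns status 0 (the first attempt fails with status 255 while sshd is still starting). A minimal sketch of that probe; the host, key path, and retry interval are illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // sshReady shells out to the system ssh client with options similar to
    // the ones logged above and reports whether `exit 0` succeeded.
    func sshReady(host, keyPath string) bool {
    	cmd := exec.Command("ssh",
    		"-o", "StrictHostKeyChecking=no",
    		"-o", "UserKnownHostsFile=/dev/null",
    		"-o", "ConnectTimeout=10",
    		"-i", keyPath,
    		"docker@"+host, "exit 0")
    	return cmd.Run() == nil
    }

    func main() {
    	for i := 0; i < 20; i++ {
    		if sshReady("192.168.39.224", "/path/to/id_rsa") {
    			fmt.Println("SSH is available")
    			return
    		}
    		time.Sleep(3 * time.Second) // the log retries roughly every 3s
    	}
    	fmt.Println("gave up waiting for SSH")
    }
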
	I1205 20:35:19.806464  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetConfigRaw
	I1205 20:35:19.807084  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:19.807313  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:19.807474  310801 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 20:35:19.807492  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetState
	I1205 20:35:19.808787  310801 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 20:35:19.808804  310801 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 20:35:19.808811  310801 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 20:35:19.808818  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:19.811344  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.811714  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:19.811743  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.811928  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:19.812132  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:19.812273  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:19.812422  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:19.812622  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:19.812860  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:19.812871  310801 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 20:35:19.921262  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:35:19.921299  310801 main.go:141] libmachine: Detecting the provisioner...
	I1205 20:35:19.921312  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:19.924600  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.925051  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:19.925075  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.925275  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:19.925497  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:19.925651  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:19.925794  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:19.925996  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:19.926221  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:19.926235  310801 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 20:35:20.039067  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 20:35:20.039180  310801 main.go:141] libmachine: found compatible host: buildroot
	I1205 20:35:20.039192  310801 main.go:141] libmachine: Provisioning with buildroot...
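
Provisioner detection above is driven by `cat /etc/os-release`: the ID field is what gets matched against "buildroot". A small sketch of that parsing step, operating on the output string the log shows:

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // osReleaseID extracts the ID= field from /etc/os-release content.
    func osReleaseID(osRelease string) string {
    	sc := bufio.NewScanner(strings.NewReader(osRelease))
    	for sc.Scan() {
    		line := sc.Text()
    		if strings.HasPrefix(line, "ID=") {
    			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
    		}
    	}
    	return ""
    }

    func main() {
    	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
    	fmt.Println("detected provisioner:", osReleaseID(out)) // buildroot
    }
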
	I1205 20:35:20.039205  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetMachineName
	I1205 20:35:20.039552  310801 buildroot.go:166] provisioning hostname "ha-689539-m02"
	I1205 20:35:20.039589  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetMachineName
	I1205 20:35:20.039855  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.043233  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.043789  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.043820  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.044027  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:20.044236  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.044433  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.044659  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:20.044843  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:20.045030  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:20.045042  310801 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-689539-m02 && echo "ha-689539-m02" | sudo tee /etc/hostname
	I1205 20:35:20.173519  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-689539-m02
	
	I1205 20:35:20.173562  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.176643  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.176967  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.176994  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.177264  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:20.177464  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.177721  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.177868  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:20.178085  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:20.178312  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:20.178329  310801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-689539-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-689539-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-689539-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:35:20.299145  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
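
The shell snippet above makes the /etc/hosts update idempotent: skip if the hostname is already present, otherwise rewrite an existing 127.0.1.1 line or append one. A sketch of the same logic over an in-memory copy of the file (hostname and contents are illustrative):

    package main

    import (
    	"fmt"
    	"strings"
    )

    func ensureHostsEntry(hosts, hostname string) string {
    	if strings.Contains(hosts, hostname) {
    		return hosts // already present, nothing to do
    	}
    	lines := strings.Split(hosts, "\n")
    	for i, l := range lines {
    		if strings.HasPrefix(l, "127.0.1.1") {
    			lines[i] = "127.0.1.1 " + hostname // replace the existing entry
    			return strings.Join(lines, "\n")
    		}
    	}
    	return hosts + "127.0.1.1 " + hostname + "\n" // or append a new one
    }

    func main() {
    	hosts := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
    	fmt.Print(ensureHostsEntry(hosts, "ha-689539-m02"))
    }
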
	I1205 20:35:20.299194  310801 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 20:35:20.299221  310801 buildroot.go:174] setting up certificates
	I1205 20:35:20.299251  310801 provision.go:84] configureAuth start
	I1205 20:35:20.299278  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetMachineName
	I1205 20:35:20.299618  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetIP
	I1205 20:35:20.302873  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.303197  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.303234  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.303352  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.305836  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.306274  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.306298  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.306450  310801 provision.go:143] copyHostCerts
	I1205 20:35:20.306489  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:35:20.306536  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 20:35:20.306547  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:35:20.306613  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 20:35:20.306694  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:35:20.306712  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 20:35:20.306719  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:35:20.306743  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 20:35:20.306790  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:35:20.306807  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 20:35:20.306813  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:35:20.306832  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 20:35:20.306880  310801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.ha-689539-m02 san=[127.0.0.1 192.168.39.224 ha-689539-m02 localhost minikube]
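
The server cert generated above carries the SAN list from the log (loopback, the node IP, the hostname, localhost, minikube). A minimal crypto/x509 sketch of creating a certificate with those SANs; it is self-signed here for brevity, whereas minikube signs against its ca-key.pem:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-689539-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs copied from the san=[...] list in the log line above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.224")},
    		DNSNames:    []string{"ha-689539-m02", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("generated server cert, %d DER bytes\n", len(der))
    }
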
	I1205 20:35:20.462180  310801 provision.go:177] copyRemoteCerts
	I1205 20:35:20.462244  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:35:20.462273  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.465164  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.465498  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.465526  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.465765  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:20.465979  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.466125  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:20.466256  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa Username:docker}
	I1205 20:35:20.552142  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 20:35:20.552248  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:35:20.577611  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 20:35:20.577693  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 20:35:20.602829  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 20:35:20.602927  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 20:35:20.629296  310801 provision.go:87] duration metric: took 330.013316ms to configureAuth
	I1205 20:35:20.629334  310801 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:35:20.629554  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:35:20.629672  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.632608  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.633010  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.633046  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.633219  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:20.633418  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.633617  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.633785  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:20.634021  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:20.634203  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:20.634221  310801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:35:20.861660  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:35:20.861695  310801 main.go:141] libmachine: Checking connection to Docker...
	I1205 20:35:20.861706  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetURL
	I1205 20:35:20.863182  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Using libvirt version 6000000
	I1205 20:35:20.865580  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.866002  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.866022  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.866305  310801 main.go:141] libmachine: Docker is up and running!
	I1205 20:35:20.866329  310801 main.go:141] libmachine: Reticulating splines...
	I1205 20:35:20.866337  310801 client.go:171] duration metric: took 26.849092016s to LocalClient.Create
	I1205 20:35:20.866366  310801 start.go:167] duration metric: took 26.849169654s to libmachine.API.Create "ha-689539"
	I1205 20:35:20.866385  310801 start.go:293] postStartSetup for "ha-689539-m02" (driver="kvm2")
	I1205 20:35:20.866396  310801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:35:20.866415  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:20.866737  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:35:20.866782  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.869117  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.869511  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.869539  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.869712  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:20.869922  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.870094  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:20.870213  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa Username:docker}
	I1205 20:35:20.956165  310801 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:35:20.960554  310801 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:35:20.960593  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 20:35:20.960663  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 20:35:20.960745  310801 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 20:35:20.960756  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /etc/ssl/certs/3007652.pem
	I1205 20:35:20.960845  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:35:20.970171  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:35:20.993469  310801 start.go:296] duration metric: took 127.065366ms for postStartSetup
	I1205 20:35:20.993548  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetConfigRaw
	I1205 20:35:20.994261  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetIP
	I1205 20:35:20.996956  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.997403  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.997431  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.997694  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:35:20.997894  310801 start.go:128] duration metric: took 27.001645944s to createHost
	I1205 20:35:20.997947  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:21.000356  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.000768  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:21.000793  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.000932  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:21.001164  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:21.001372  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:21.001567  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:21.001800  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:21.002023  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:21.002035  310801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:35:21.114783  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430921.091468988
	
	I1205 20:35:21.114813  310801 fix.go:216] guest clock: 1733430921.091468988
	I1205 20:35:21.114823  310801 fix.go:229] Guest: 2024-12-05 20:35:21.091468988 +0000 UTC Remote: 2024-12-05 20:35:20.997930274 +0000 UTC m=+72.965807310 (delta=93.538714ms)
	I1205 20:35:21.114853  310801 fix.go:200] guest clock delta is within tolerance: 93.538714ms
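
The guest-clock check above runs `date +%s.%N` on the guest, compares it with the host time captured when the command was issued, and accepts the machine if the skew is small (93.5ms here). A sketch of that comparison using the values from the log; the 2-second tolerance is an illustrative assumption:

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    func main() {
    	guestOut := "1733430921.091468988" // guest output of `date +%s.%N`
    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		panic(err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	// Host time recorded when the command was sent (from the log line above).
    	host := time.Date(2024, 12, 5, 20, 35, 20, 997930274, time.UTC)

    	delta := guest.Sub(host)
    	fmt.Printf("guest clock delta: %v\n", delta)
    	if math.Abs(delta.Seconds()) < 2 {
    		fmt.Println("within tolerance")
    	}
    }
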
	I1205 20:35:21.114861  310801 start.go:83] releasing machines lock for "ha-689539-m02", held for 27.118697006s
	I1205 20:35:21.114886  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:21.115206  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetIP
	I1205 20:35:21.118066  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.118466  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:21.118504  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.121045  310801 out.go:177] * Found network options:
	I1205 20:35:21.122608  310801 out.go:177]   - NO_PROXY=192.168.39.220
	W1205 20:35:21.124023  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:35:21.124097  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:21.124832  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:21.125105  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:21.125251  310801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:35:21.125326  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	W1205 20:35:21.125332  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:35:21.125435  310801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:35:21.125468  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:21.128474  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.128563  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.128871  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:21.128901  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.129000  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:21.129022  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:21.129073  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.129233  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:21.129232  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:21.129435  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:21.129437  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:21.129634  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:21.129634  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa Username:docker}
	I1205 20:35:21.129803  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa Username:docker}
	I1205 20:35:21.365680  310801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:35:21.371668  310801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:35:21.371782  310801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:35:21.388230  310801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
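
The find/mv command above sidelines any bridge or podman CNI configs so CRI-O's own networking is used. A sketch of the equivalent rename pass in Go; the directory is the one from the log, so point it at a scratch directory when experimenting:

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	dir := "/etc/cni/net.d"
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		fmt.Println("read dir:", err)
    		return
    	}
    	for _, e := range entries {
    		name := e.Name()
    		if strings.HasSuffix(name, ".mk_disabled") {
    			continue // already sidelined
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			old := filepath.Join(dir, name)
    			if err := os.Rename(old, old+".mk_disabled"); err != nil {
    				fmt.Println("rename:", err)
    			} else {
    				fmt.Println("disabled", name)
    			}
    		}
    	}
    }
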
	I1205 20:35:21.388261  310801 start.go:495] detecting cgroup driver to use...
	I1205 20:35:21.388348  310801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:35:21.404768  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:35:21.419149  310801 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:35:21.419231  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:35:21.433110  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:35:21.447375  310801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:35:21.563926  310801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:35:21.729278  310801 docker.go:233] disabling docker service ...
	I1205 20:35:21.729378  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:35:21.744065  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:35:21.757106  310801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:35:21.878877  310801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:35:21.983688  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:35:21.997947  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:35:22.016485  310801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:35:22.016555  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.027185  310801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:35:22.027270  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.037892  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.048316  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.059131  310801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:35:22.075255  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.086233  310801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.103682  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.114441  310801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:35:22.124360  310801 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:35:22.124442  310801 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:35:22.138043  310801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:35:22.147996  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:35:22.253398  310801 ssh_runner.go:195] Run: sudo systemctl restart crio
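
The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf to pin the pause image and switch the cgroup manager before CRI-O is restarted. A sketch of those two rewrites applied to an in-memory copy of the config (the sample contents are illustrative):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    func main() {
    	conf := "[crio.image]\npause_image = \"old\"\n[crio.runtime]\ncgroup_manager = \"systemd\"\n"

    	// Same effect as the `sed -i 's|^.*pause_image = .*$|...|'` calls above.
    	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
    	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
    	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

    	fmt.Print(conf)
    }
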
	I1205 20:35:22.348717  310801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:35:22.348790  310801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:35:22.353405  310801 start.go:563] Will wait 60s for crictl version
	I1205 20:35:22.353468  310801 ssh_runner.go:195] Run: which crictl
	I1205 20:35:22.357215  310801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:35:22.393844  310801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
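
The "Will wait 60s for crictl version" step above simply polls the crictl binary until the runtime answers. A sketch of that polling loop run locally; in the real flow the command goes over SSH and the path comes from `which crictl`:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(60 * time.Second)
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
    		if err == nil {
    			fmt.Printf("crictl answered:\n%s", out)
    			return
    		}
    		time.Sleep(2 * time.Second)
    	}
    	fmt.Println("timed out waiting for crictl version")
    }
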
	I1205 20:35:22.393959  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:35:22.422018  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:35:22.452780  310801 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:35:22.454193  310801 out.go:177]   - env NO_PROXY=192.168.39.220
	I1205 20:35:22.455398  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetIP
	I1205 20:35:22.458243  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:22.458611  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:22.458649  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:22.458851  310801 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:35:22.463124  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:35:22.475841  310801 mustload.go:65] Loading cluster: ha-689539
	I1205 20:35:22.476087  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:35:22.476420  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:22.476470  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:22.492198  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I1205 20:35:22.492793  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:22.493388  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:35:22.493418  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:22.493835  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:22.494104  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:35:22.495827  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:35:22.496123  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:22.496160  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:22.512684  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35311
	I1205 20:35:22.513289  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:22.513852  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:35:22.513877  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:22.514257  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:22.514474  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:35:22.514658  310801 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539 for IP: 192.168.39.224
	I1205 20:35:22.514672  310801 certs.go:194] generating shared ca certs ...
	I1205 20:35:22.514692  310801 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:22.514826  310801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 20:35:22.514868  310801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 20:35:22.514875  310801 certs.go:256] generating profile certs ...
	I1205 20:35:22.514942  310801 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key
	I1205 20:35:22.514966  310801 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.0bcaa736
	I1205 20:35:22.514982  310801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.0bcaa736 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220 192.168.39.224 192.168.39.254]
	I1205 20:35:22.799808  310801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.0bcaa736 ...
	I1205 20:35:22.799844  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.0bcaa736: {Name:mk805c9f0c218cfc1a14cc95ce5560d63a919c31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:22.800063  310801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.0bcaa736 ...
	I1205 20:35:22.800084  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.0bcaa736: {Name:mk878dc23fa761ab4aecc158abe1405fbc550219 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:22.800189  310801 certs.go:381] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.0bcaa736 -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt
	I1205 20:35:22.800337  310801 certs.go:385] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.0bcaa736 -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key
	I1205 20:35:22.800471  310801 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key
	I1205 20:35:22.800490  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:35:22.800508  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:35:22.800524  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:35:22.800539  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:35:22.800554  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 20:35:22.800569  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 20:35:22.800578  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 20:35:22.800588  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 20:35:22.800649  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 20:35:22.800680  310801 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 20:35:22.800690  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:35:22.800714  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 20:35:22.800740  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:35:22.800782  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 20:35:22.800829  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:35:22.800856  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:35:22.800870  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem -> /usr/share/ca-certificates/300765.pem
	I1205 20:35:22.800883  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /usr/share/ca-certificates/3007652.pem
	I1205 20:35:22.800924  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:35:22.803915  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:35:22.804323  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:35:22.804357  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:35:22.804510  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:35:22.804779  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:35:22.804968  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:35:22.805127  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:35:22.874336  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1205 20:35:22.878799  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1205 20:35:22.889481  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1205 20:35:22.893603  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1205 20:35:22.907201  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1205 20:35:22.911129  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1205 20:35:22.921562  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1205 20:35:22.925468  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1205 20:35:22.935462  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1205 20:35:22.939312  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1205 20:35:22.949250  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1205 20:35:22.953120  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1205 20:35:22.964047  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:35:22.988860  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:35:23.013850  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:35:23.037874  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:35:23.062975  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1205 20:35:23.087802  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:35:23.112226  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:35:23.139642  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:35:23.168141  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:35:23.193470  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 20:35:23.218935  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 20:35:23.243452  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1205 20:35:23.261775  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1205 20:35:23.279011  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1205 20:35:23.296521  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1205 20:35:23.313399  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1205 20:35:23.330608  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1205 20:35:23.349181  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1205 20:35:23.366287  310801 ssh_runner.go:195] Run: openssl version
	I1205 20:35:23.372023  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:35:23.383498  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:35:23.387933  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:35:23.388026  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:35:23.393863  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:35:23.405145  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 20:35:23.416665  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 20:35:23.421806  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 20:35:23.421882  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 20:35:23.427892  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 20:35:23.439291  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 20:35:23.450645  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 20:35:23.455301  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 20:35:23.455397  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 20:35:23.461088  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:35:23.473062  310801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:35:23.477238  310801 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 20:35:23.477315  310801 kubeadm.go:934] updating node {m02 192.168.39.224 8443 v1.31.2 crio true true} ...
	I1205 20:35:23.477412  310801 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-689539-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:35:23.477446  310801 kube-vip.go:115] generating kube-vip config ...
	I1205 20:35:23.477488  310801 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 20:35:23.494130  310801 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 20:35:23.494206  310801 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
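	The block above is the kube-vip static-pod manifest that minikube generates for the control-plane VIP (192.168.39.254) on the joining node. Purely as an illustrative sketch, and not minikube's actual kube-vip.go implementation, a manifest of this shape can be rendered from a Go text/template; the struct fields and placeholder values below are assumptions for demonstration only:

	// Hypothetical sketch of rendering a kube-vip static-pod manifest from a
	// template. Field names and the trimmed manifest are illustrative and do
	// not reflect minikube's real kube-vip.go code.
	package main

	import (
		"os"
		"text/template"
	)

	const manifestTmpl = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: address
	      value: {{ .VIP }}
	    - name: port
	      value: "{{ .Port }}"
	    image: {{ .Image }}
	    name: kube-vip
	  hostNetwork: true
	`

	type vipConfig struct {
		VIP   string
		Port  int
		Image string
	}

	func main() {
		// Values taken from the log above; rendering writes the manifest to stdout.
		cfg := vipConfig{VIP: "192.168.39.254", Port: 8443, Image: "ghcr.io/kube-vip/kube-vip:v0.8.7"}
		t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
		if err := t.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}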
	I1205 20:35:23.494265  310801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:35:23.504559  310801 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1205 20:35:23.504639  310801 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1205 20:35:23.515268  310801 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1205 20:35:23.515267  310801 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1205 20:35:23.515267  310801 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1205 20:35:23.515420  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 20:35:23.515485  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 20:35:23.520360  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1205 20:35:23.520397  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1205 20:35:24.329721  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 20:35:24.329837  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 20:35:24.335194  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1205 20:35:24.335241  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1205 20:35:24.693728  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:24.707996  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 20:35:24.708127  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 20:35:24.712643  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1205 20:35:24.712685  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1205 20:35:25.030158  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1205 20:35:25.039864  310801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1205 20:35:25.056953  310801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:35:25.074038  310801 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 20:35:25.090341  310801 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 20:35:25.094291  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:35:25.106549  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:35:25.251421  310801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:35:25.281544  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:35:25.281958  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:25.282025  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:25.298815  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43001
	I1205 20:35:25.299446  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:25.299916  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:35:25.299940  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:25.300264  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:25.300471  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:35:25.300647  310801 start.go:317] joinCluster: &{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:35:25.300755  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1205 20:35:25.300777  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:35:25.303962  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:35:25.304378  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:35:25.304416  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:35:25.304612  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:35:25.304845  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:35:25.305034  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:35:25.305189  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:35:25.467206  310801 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:35:25.467286  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u7curd.swqoqc05eru6gfpp --discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-689539-m02 --control-plane --apiserver-advertise-address=192.168.39.224 --apiserver-bind-port=8443"
	I1205 20:35:47.115820  310801 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u7curd.swqoqc05eru6gfpp --discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-689539-m02 --control-plane --apiserver-advertise-address=192.168.39.224 --apiserver-bind-port=8443": (21.648499033s)
	I1205 20:35:47.115867  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1205 20:35:47.674102  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-689539-m02 minikube.k8s.io/updated_at=2024_12_05T20_35_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=ha-689539 minikube.k8s.io/primary=false
	I1205 20:35:47.783659  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-689539-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1205 20:35:47.899441  310801 start.go:319] duration metric: took 22.598789448s to joinCluster
	I1205 20:35:47.899529  310801 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:35:47.899871  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:35:47.901544  310801 out.go:177] * Verifying Kubernetes components...
	I1205 20:35:47.903164  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:35:48.171147  310801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:35:48.196654  310801 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:35:48.197028  310801 kapi.go:59] client config for ha-689539: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt", KeyFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key", CAFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1205 20:35:48.197120  310801 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.220:8443
	I1205 20:35:48.197520  310801 node_ready.go:35] waiting up to 6m0s for node "ha-689539-m02" to be "Ready" ...
	I1205 20:35:48.197656  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:48.197669  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:48.197681  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:48.197693  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:48.214799  310801 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1205 20:35:48.697777  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:48.697812  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:48.697824  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:48.697833  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:48.703691  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:35:49.198191  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:49.198217  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:49.198225  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:49.198229  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:49.204218  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:35:49.698048  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:49.698079  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:49.698090  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:49.698096  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:49.705663  310801 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1205 20:35:50.198629  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:50.198656  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:50.198669  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:50.198675  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:50.202111  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:50.202581  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:35:50.698434  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:50.698457  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:50.698465  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:50.698469  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:50.702335  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:51.197943  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:51.197971  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:51.197981  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:51.197985  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:51.201567  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:51.698634  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:51.698668  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:51.698680  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:51.698687  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:51.702470  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:52.198285  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:52.198318  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:52.198331  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:52.198338  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:52.202116  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:52.202820  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:35:52.697909  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:52.697940  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:52.697953  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:52.697959  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:52.700998  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:53.198023  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:53.198047  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:53.198056  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:53.198059  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:53.201259  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:53.698438  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:53.698462  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:53.698478  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:53.698482  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:53.701883  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:54.198346  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:54.198373  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:54.198381  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:54.198386  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:54.202207  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:54.203013  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:35:54.698384  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:54.698407  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:54.698415  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:54.698422  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:54.703135  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:35:55.198075  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:55.198102  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:55.198111  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:55.198116  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:55.275835  310801 round_trippers.go:574] Response Status: 200 OK in 77 milliseconds
	I1205 20:35:55.698292  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:55.698327  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:55.698343  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:55.698347  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:55.701831  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:56.197819  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:56.197847  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:56.197856  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:56.197861  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:56.201202  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:56.698240  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:56.698288  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:56.698299  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:56.698304  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:56.701586  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:56.702160  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:35:57.198590  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:57.198622  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:57.198633  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:57.198638  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:57.201959  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:57.698128  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:57.698159  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:57.698170  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:57.698175  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:57.703388  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:35:58.198316  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:58.198343  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:58.198352  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:58.198357  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:58.201617  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:58.698669  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:58.698694  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:58.698706  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:58.698710  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:58.702292  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:58.702971  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:35:59.198697  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:59.198726  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:59.198739  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:59.198747  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:59.205545  310801 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 20:35:59.698504  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:59.698536  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:59.698553  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:59.698560  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:59.702266  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:00.198245  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:00.198270  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:00.198279  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:00.198283  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:00.201787  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:00.698510  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:00.698544  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:00.698553  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:00.698563  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:00.701802  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:01.197953  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:01.197983  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:01.197994  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:01.197999  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:01.201035  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:01.201711  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:36:01.698167  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:01.698198  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:01.698210  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:01.698215  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:01.701264  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:02.198110  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:02.198141  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:02.198152  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:02.198157  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:02.201468  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:02.698626  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:02.698659  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:02.698669  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:02.698675  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:02.701881  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:03.198737  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:03.198763  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:03.198774  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:03.198779  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:03.202428  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:03.202953  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:36:03.698736  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:03.698768  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:03.698780  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:03.698788  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:03.702162  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:04.197743  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:04.197773  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:04.197784  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:04.197791  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:04.201284  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:04.698126  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:04.698155  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:04.698164  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:04.698168  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:04.701888  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:05.198088  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:05.198121  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:05.198131  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:05.198138  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:05.201797  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:05.698476  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:05.698506  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:05.698515  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:05.698520  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:05.701875  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:05.702580  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:36:06.198021  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:06.198061  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.198069  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.198074  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.201540  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:06.202101  310801 node_ready.go:49] node "ha-689539-m02" has status "Ready":"True"
	I1205 20:36:06.202126  310801 node_ready.go:38] duration metric: took 18.004581739s for node "ha-689539-m02" to be "Ready" ...
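	The long run of GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02 requests above is minikube polling the API server until the newly joined control-plane node reports the Ready condition. As an illustration only (not minikube's node_ready.go), the same check can be written with client-go; the kubeconfig path and timeout below are placeholders:

	// Illustrative sketch: poll a node's Ready condition via client-go,
	// mirroring the repeated GET /api/v1/nodes/<name> requests in the log.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder path; point it at any kubeconfig for the cluster under test.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		deadline := time.Now().Add(6 * time.Minute) // same budget the test log reports
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-689539-m02", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // poll interval, roughly matching the log cadence
		}
		fmt.Println("timed out waiting for node to become Ready")
	}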
	I1205 20:36:06.202140  310801 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:36:06.202253  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:36:06.202268  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.202278  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.202285  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.206754  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:06.212677  310801 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.212799  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4ln9l
	I1205 20:36:06.212813  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.212822  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.212827  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.215643  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.216276  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:06.216293  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.216301  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.216304  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.218813  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.219400  310801 pod_ready.go:93] pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:06.219422  310801 pod_ready.go:82] duration metric: took 6.710961ms for pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.219433  310801 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.219519  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6qhhf
	I1205 20:36:06.219530  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.219537  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.219544  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.221986  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.222730  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:06.222744  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.222752  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.222757  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.225041  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.225536  310801 pod_ready.go:93] pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:06.225559  310801 pod_ready.go:82] duration metric: took 6.118464ms for pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.225582  310801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.225656  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539
	I1205 20:36:06.225668  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.225684  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.225696  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.228280  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.228948  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:06.228962  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.228970  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.228974  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.231708  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.232206  310801 pod_ready.go:93] pod "etcd-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:06.232225  310801 pod_ready.go:82] duration metric: took 6.631337ms for pod "etcd-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.232234  310801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.232328  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539-m02
	I1205 20:36:06.232338  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.232347  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.232357  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.234717  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.235313  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:06.235328  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.235336  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.235340  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.237446  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.237958  310801 pod_ready.go:93] pod "etcd-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:06.237979  310801 pod_ready.go:82] duration metric: took 5.738833ms for pod "etcd-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.237997  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.398468  310801 request.go:632] Waited for 160.38501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539
	I1205 20:36:06.398582  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539
	I1205 20:36:06.398592  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.398601  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.398605  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.402334  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:06.598805  310801 request.go:632] Waited for 195.477134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:06.598897  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:06.598903  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.598911  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.598914  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.602945  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:06.603481  310801 pod_ready.go:93] pod "kube-apiserver-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:06.603505  310801 pod_ready.go:82] duration metric: took 365.497043ms for pod "kube-apiserver-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.603516  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.798685  310801 request.go:632] Waited for 195.084248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m02
	I1205 20:36:06.798771  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m02
	I1205 20:36:06.798776  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.798786  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.798792  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.802375  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:06.998825  310801 request.go:632] Waited for 195.407022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:06.998895  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:06.998900  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.998908  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.998913  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:07.003073  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:07.003620  310801 pod_ready.go:93] pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:07.003641  310801 pod_ready.go:82] duration metric: took 400.118288ms for pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.003652  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.198723  310801 request.go:632] Waited for 194.973944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539
	I1205 20:36:07.198815  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539
	I1205 20:36:07.198822  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:07.198834  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:07.198844  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:07.202792  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:07.398908  310801 request.go:632] Waited for 195.413458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:07.398993  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:07.399006  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:07.399019  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:07.399029  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:07.403088  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:07.403800  310801 pod_ready.go:93] pod "kube-controller-manager-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:07.403838  310801 pod_ready.go:82] duration metric: took 400.178189ms for pod "kube-controller-manager-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.403856  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.598771  310801 request.go:632] Waited for 194.816012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m02
	I1205 20:36:07.598840  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m02
	I1205 20:36:07.598845  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:07.598862  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:07.598869  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:07.602566  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:07.798831  310801 request.go:632] Waited for 195.438007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:07.798985  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:07.798998  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:07.799015  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:07.799023  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:07.803171  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:07.803823  310801 pod_ready.go:93] pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:07.803849  310801 pod_ready.go:82] duration metric: took 399.978899ms for pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.803864  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9tslx" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.998893  310801 request.go:632] Waited for 194.90975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tslx
	I1205 20:36:07.998995  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tslx
	I1205 20:36:07.999006  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:07.999033  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:07.999050  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:08.003019  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:08.198483  310801 request.go:632] Waited for 194.725493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:08.198570  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:08.198580  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:08.198588  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:08.198592  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:08.202279  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:08.202805  310801 pod_ready.go:93] pod "kube-proxy-9tslx" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:08.202824  310801 pod_ready.go:82] duration metric: took 398.949898ms for pod "kube-proxy-9tslx" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:08.202837  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x2grl" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:08.399003  310801 request.go:632] Waited for 196.061371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2grl
	I1205 20:36:08.399102  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2grl
	I1205 20:36:08.399110  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:08.399126  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:08.399137  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:08.404511  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:36:08.598657  310801 request.go:632] Waited for 193.397123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:08.598817  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:08.598829  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:08.598837  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:08.598850  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:08.602654  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:08.603461  310801 pod_ready.go:93] pod "kube-proxy-x2grl" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:08.603483  310801 pod_ready.go:82] duration metric: took 400.640164ms for pod "kube-proxy-x2grl" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:08.603494  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:08.798579  310801 request.go:632] Waited for 194.963606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539
	I1205 20:36:08.798669  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539
	I1205 20:36:08.798680  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:08.798692  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:08.798704  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:08.802678  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:08.998854  310801 request.go:632] Waited for 195.447294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:08.998947  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:08.998954  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:08.998964  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:08.998970  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.003138  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:09.003792  310801 pod_ready.go:93] pod "kube-scheduler-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:09.003821  310801 pod_ready.go:82] duration metric: took 400.319353ms for pod "kube-scheduler-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:09.003837  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:09.198016  310801 request.go:632] Waited for 194.088845ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m02
	I1205 20:36:09.198132  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m02
	I1205 20:36:09.198145  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.198158  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.198165  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.201958  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:09.398942  310801 request.go:632] Waited for 196.371567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:09.399024  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:09.399033  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.399044  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.399050  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.402750  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:09.403404  310801 pod_ready.go:93] pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:09.403436  310801 pod_ready.go:82] duration metric: took 399.590034ms for pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:09.403451  310801 pod_ready.go:39] duration metric: took 3.201294497s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:36:09.403471  310801 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:36:09.403551  310801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:36:09.418357  310801 api_server.go:72] duration metric: took 21.51878718s to wait for apiserver process to appear ...
	I1205 20:36:09.418390  310801 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:36:09.418420  310801 api_server.go:253] Checking apiserver healthz at https://192.168.39.220:8443/healthz ...
	I1205 20:36:09.425381  310801 api_server.go:279] https://192.168.39.220:8443/healthz returned 200:
	ok
	I1205 20:36:09.425471  310801 round_trippers.go:463] GET https://192.168.39.220:8443/version
	I1205 20:36:09.425479  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.425488  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.425494  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.426343  310801 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1205 20:36:09.426447  310801 api_server.go:141] control plane version: v1.31.2
	I1205 20:36:09.426464  310801 api_server.go:131] duration metric: took 8.067774ms to wait for apiserver health ...
	I1205 20:36:09.426481  310801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:36:09.598951  310801 request.go:632] Waited for 172.364571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:36:09.599024  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:36:09.599030  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.599038  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.599042  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.603442  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:09.609057  310801 system_pods.go:59] 17 kube-system pods found
	I1205 20:36:09.609099  310801 system_pods.go:61] "coredns-7c65d6cfc9-4ln9l" [f86a233b-c3f8-416b-ac76-f18dac2a1a2c] Running
	I1205 20:36:09.609107  310801 system_pods.go:61] "coredns-7c65d6cfc9-6qhhf" [4ffff988-65eb-4585-8ce4-de4df28c6b82] Running
	I1205 20:36:09.609113  310801 system_pods.go:61] "etcd-ha-689539" [f8de63bf-a7cf-431d-bd57-ec91b43c6ce3] Running
	I1205 20:36:09.609121  310801 system_pods.go:61] "etcd-ha-689539-m02" [a0336d41-b57f-414b-aa98-2540bdde7ca0] Running
	I1205 20:36:09.609126  310801 system_pods.go:61] "kindnet-62qw6" [9f0039aa-d5e2-49b9-adb4-ad93c96d22f0] Running
	I1205 20:36:09.609130  310801 system_pods.go:61] "kindnet-b7bf2" [ea96240c-48bf-4f92-b12c-f8e623a59784] Running
	I1205 20:36:09.609136  310801 system_pods.go:61] "kube-apiserver-ha-689539" [ecbcba0b-10ce-4bd6-84f6-8b46c3d99ad6] Running
	I1205 20:36:09.609142  310801 system_pods.go:61] "kube-apiserver-ha-689539-m02" [0c0d9613-c605-4e61-b778-c5aefa5919e9] Running
	I1205 20:36:09.609149  310801 system_pods.go:61] "kube-controller-manager-ha-689539" [859c6551-f504-4093-a730-2ba8f127e3e7] Running
	I1205 20:36:09.609159  310801 system_pods.go:61] "kube-controller-manager-ha-689539-m02" [0b119866-007c-4c4e-abfa-a38405b85cc9] Running
	I1205 20:36:09.609165  310801 system_pods.go:61] "kube-proxy-9tslx" [3d107dc4-2d8c-4e0d-aafc-5229161537df] Running
	I1205 20:36:09.609174  310801 system_pods.go:61] "kube-proxy-x2grl" [20dd0c16-858c-4d07-8305-ffedb52a4ee1] Running
	I1205 20:36:09.609180  310801 system_pods.go:61] "kube-scheduler-ha-689539" [2ba99954-c00c-4fa6-af5d-6d4725fa051a] Running
	I1205 20:36:09.609186  310801 system_pods.go:61] "kube-scheduler-ha-689539-m02" [d1ad2b21-b52c-47dd-ab09-2368ffeb3c7e] Running
	I1205 20:36:09.609192  310801 system_pods.go:61] "kube-vip-ha-689539" [345f79e6-90ea-47f8-9e7f-c461a1143ba0] Running
	I1205 20:36:09.609200  310801 system_pods.go:61] "kube-vip-ha-689539-m02" [265c4a3f-0e44-43fd-bcee-35513e8e2525] Running
	I1205 20:36:09.609207  310801 system_pods.go:61] "storage-provisioner" [e2a03e66-0718-48a3-9658-f70118ce6cae] Running
	I1205 20:36:09.609218  310801 system_pods.go:74] duration metric: took 182.726007ms to wait for pod list to return data ...
	I1205 20:36:09.609232  310801 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:36:09.798716  310801 request.go:632] Waited for 189.385773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:36:09.798784  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:36:09.798789  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.798797  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.798800  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.803434  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:09.803720  310801 default_sa.go:45] found service account: "default"
	I1205 20:36:09.803742  310801 default_sa.go:55] duration metric: took 194.50158ms for default service account to be created ...
	I1205 20:36:09.803755  310801 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:36:09.998902  310801 request.go:632] Waited for 195.036574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:36:09.998984  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:36:09.998992  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.999004  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.999012  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:10.005341  310801 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 20:36:10.009685  310801 system_pods.go:86] 17 kube-system pods found
	I1205 20:36:10.009721  310801 system_pods.go:89] "coredns-7c65d6cfc9-4ln9l" [f86a233b-c3f8-416b-ac76-f18dac2a1a2c] Running
	I1205 20:36:10.009733  310801 system_pods.go:89] "coredns-7c65d6cfc9-6qhhf" [4ffff988-65eb-4585-8ce4-de4df28c6b82] Running
	I1205 20:36:10.009739  310801 system_pods.go:89] "etcd-ha-689539" [f8de63bf-a7cf-431d-bd57-ec91b43c6ce3] Running
	I1205 20:36:10.009745  310801 system_pods.go:89] "etcd-ha-689539-m02" [a0336d41-b57f-414b-aa98-2540bdde7ca0] Running
	I1205 20:36:10.009751  310801 system_pods.go:89] "kindnet-62qw6" [9f0039aa-d5e2-49b9-adb4-ad93c96d22f0] Running
	I1205 20:36:10.009756  310801 system_pods.go:89] "kindnet-b7bf2" [ea96240c-48bf-4f92-b12c-f8e623a59784] Running
	I1205 20:36:10.009760  310801 system_pods.go:89] "kube-apiserver-ha-689539" [ecbcba0b-10ce-4bd6-84f6-8b46c3d99ad6] Running
	I1205 20:36:10.009770  310801 system_pods.go:89] "kube-apiserver-ha-689539-m02" [0c0d9613-c605-4e61-b778-c5aefa5919e9] Running
	I1205 20:36:10.009774  310801 system_pods.go:89] "kube-controller-manager-ha-689539" [859c6551-f504-4093-a730-2ba8f127e3e7] Running
	I1205 20:36:10.009778  310801 system_pods.go:89] "kube-controller-manager-ha-689539-m02" [0b119866-007c-4c4e-abfa-a38405b85cc9] Running
	I1205 20:36:10.009782  310801 system_pods.go:89] "kube-proxy-9tslx" [3d107dc4-2d8c-4e0d-aafc-5229161537df] Running
	I1205 20:36:10.009786  310801 system_pods.go:89] "kube-proxy-x2grl" [20dd0c16-858c-4d07-8305-ffedb52a4ee1] Running
	I1205 20:36:10.009789  310801 system_pods.go:89] "kube-scheduler-ha-689539" [2ba99954-c00c-4fa6-af5d-6d4725fa051a] Running
	I1205 20:36:10.009794  310801 system_pods.go:89] "kube-scheduler-ha-689539-m02" [d1ad2b21-b52c-47dd-ab09-2368ffeb3c7e] Running
	I1205 20:36:10.009797  310801 system_pods.go:89] "kube-vip-ha-689539" [345f79e6-90ea-47f8-9e7f-c461a1143ba0] Running
	I1205 20:36:10.009802  310801 system_pods.go:89] "kube-vip-ha-689539-m02" [265c4a3f-0e44-43fd-bcee-35513e8e2525] Running
	I1205 20:36:10.009805  310801 system_pods.go:89] "storage-provisioner" [e2a03e66-0718-48a3-9658-f70118ce6cae] Running
	I1205 20:36:10.009814  310801 system_pods.go:126] duration metric: took 206.05156ms to wait for k8s-apps to be running ...
	I1205 20:36:10.009825  310801 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:36:10.009874  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:36:10.025329  310801 system_svc.go:56] duration metric: took 15.491147ms WaitForService to wait for kubelet
	I1205 20:36:10.025382  310801 kubeadm.go:582] duration metric: took 22.125819174s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:36:10.025410  310801 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:36:10.199031  310801 request.go:632] Waited for 173.477614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes
	I1205 20:36:10.199134  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes
	I1205 20:36:10.199143  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:10.199154  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:10.199159  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:10.202963  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:10.203807  310801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:36:10.203836  310801 node_conditions.go:123] node cpu capacity is 2
	I1205 20:36:10.203848  310801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:36:10.203851  310801 node_conditions.go:123] node cpu capacity is 2
	I1205 20:36:10.203855  310801 node_conditions.go:105] duration metric: took 178.44033ms to run NodePressure ...
	I1205 20:36:10.203870  310801 start.go:241] waiting for startup goroutines ...
	I1205 20:36:10.203895  310801 start.go:255] writing updated cluster config ...
	I1205 20:36:10.205987  310801 out.go:201] 
	I1205 20:36:10.207492  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:36:10.207614  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:36:10.209270  310801 out.go:177] * Starting "ha-689539-m03" control-plane node in "ha-689539" cluster
	I1205 20:36:10.210621  310801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:36:10.210654  310801 cache.go:56] Caching tarball of preloaded images
	I1205 20:36:10.210766  310801 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:36:10.210778  310801 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:36:10.210880  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:36:10.211060  310801 start.go:360] acquireMachinesLock for ha-689539-m03: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:36:10.211107  310801 start.go:364] duration metric: took 26.599µs to acquireMachinesLock for "ha-689539-m03"
	I1205 20:36:10.211127  310801 start.go:93] Provisioning new machine with config: &{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:36:10.211224  310801 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1205 20:36:10.213644  310801 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 20:36:10.213846  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:36:10.213895  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:36:10.230607  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41335
	I1205 20:36:10.231136  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:36:10.231708  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:36:10.231730  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:36:10.232163  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:36:10.232486  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetMachineName
	I1205 20:36:10.232681  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:10.232898  310801 start.go:159] libmachine.API.Create for "ha-689539" (driver="kvm2")
	I1205 20:36:10.232939  310801 client.go:168] LocalClient.Create starting
	I1205 20:36:10.232979  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem
	I1205 20:36:10.233029  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:36:10.233052  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:36:10.233142  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem
	I1205 20:36:10.233176  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:36:10.233191  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:36:10.233315  310801 main.go:141] libmachine: Running pre-create checks...
	I1205 20:36:10.233332  310801 main.go:141] libmachine: (ha-689539-m03) Calling .PreCreateCheck
	I1205 20:36:10.233549  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetConfigRaw
	I1205 20:36:10.234493  310801 main.go:141] libmachine: Creating machine...
	I1205 20:36:10.234513  310801 main.go:141] libmachine: (ha-689539-m03) Calling .Create
	I1205 20:36:10.234674  310801 main.go:141] libmachine: (ha-689539-m03) Creating KVM machine...
	I1205 20:36:10.236332  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found existing default KVM network
	I1205 20:36:10.236451  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found existing private KVM network mk-ha-689539
	I1205 20:36:10.236656  310801 main.go:141] libmachine: (ha-689539-m03) Setting up store path in /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03 ...
	I1205 20:36:10.236685  310801 main.go:141] libmachine: (ha-689539-m03) Building disk image from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 20:36:10.236729  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:10.236616  311584 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:36:10.236870  310801 main.go:141] libmachine: (ha-689539-m03) Downloading /home/jenkins/minikube-integration/20053-293485/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 20:36:10.551771  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:10.551634  311584 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa...
	I1205 20:36:10.671521  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:10.671352  311584 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/ha-689539-m03.rawdisk...
	I1205 20:36:10.671562  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Writing magic tar header
	I1205 20:36:10.671575  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Writing SSH key tar header
	I1205 20:36:10.671584  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:10.671500  311584 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03 ...
	I1205 20:36:10.671596  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03
	I1205 20:36:10.671680  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03 (perms=drwx------)
	I1205 20:36:10.671707  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines (perms=drwxr-xr-x)
	I1205 20:36:10.671718  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines
	I1205 20:36:10.671731  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:36:10.671740  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485
	I1205 20:36:10.671749  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 20:36:10.671759  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins
	I1205 20:36:10.671770  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home
	I1205 20:36:10.671781  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Skipping /home - not owner
	I1205 20:36:10.671795  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube (perms=drwxr-xr-x)
	I1205 20:36:10.671811  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485 (perms=drwxrwxr-x)
	I1205 20:36:10.671827  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 20:36:10.671837  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 20:36:10.671843  310801 main.go:141] libmachine: (ha-689539-m03) Creating domain...
	I1205 20:36:10.672929  310801 main.go:141] libmachine: (ha-689539-m03) define libvirt domain using xml: 
	I1205 20:36:10.672953  310801 main.go:141] libmachine: (ha-689539-m03) <domain type='kvm'>
	I1205 20:36:10.672970  310801 main.go:141] libmachine: (ha-689539-m03)   <name>ha-689539-m03</name>
	I1205 20:36:10.673070  310801 main.go:141] libmachine: (ha-689539-m03)   <memory unit='MiB'>2200</memory>
	I1205 20:36:10.673100  310801 main.go:141] libmachine: (ha-689539-m03)   <vcpu>2</vcpu>
	I1205 20:36:10.673109  310801 main.go:141] libmachine: (ha-689539-m03)   <features>
	I1205 20:36:10.673135  310801 main.go:141] libmachine: (ha-689539-m03)     <acpi/>
	I1205 20:36:10.673151  310801 main.go:141] libmachine: (ha-689539-m03)     <apic/>
	I1205 20:36:10.673157  310801 main.go:141] libmachine: (ha-689539-m03)     <pae/>
	I1205 20:36:10.673164  310801 main.go:141] libmachine: (ha-689539-m03)     
	I1205 20:36:10.673174  310801 main.go:141] libmachine: (ha-689539-m03)   </features>
	I1205 20:36:10.673181  310801 main.go:141] libmachine: (ha-689539-m03)   <cpu mode='host-passthrough'>
	I1205 20:36:10.673187  310801 main.go:141] libmachine: (ha-689539-m03)   
	I1205 20:36:10.673192  310801 main.go:141] libmachine: (ha-689539-m03)   </cpu>
	I1205 20:36:10.673197  310801 main.go:141] libmachine: (ha-689539-m03)   <os>
	I1205 20:36:10.673201  310801 main.go:141] libmachine: (ha-689539-m03)     <type>hvm</type>
	I1205 20:36:10.673243  310801 main.go:141] libmachine: (ha-689539-m03)     <boot dev='cdrom'/>
	I1205 20:36:10.673298  310801 main.go:141] libmachine: (ha-689539-m03)     <boot dev='hd'/>
	I1205 20:36:10.673335  310801 main.go:141] libmachine: (ha-689539-m03)     <bootmenu enable='no'/>
	I1205 20:36:10.673362  310801 main.go:141] libmachine: (ha-689539-m03)   </os>
	I1205 20:36:10.673384  310801 main.go:141] libmachine: (ha-689539-m03)   <devices>
	I1205 20:36:10.673401  310801 main.go:141] libmachine: (ha-689539-m03)     <disk type='file' device='cdrom'>
	I1205 20:36:10.673424  310801 main.go:141] libmachine: (ha-689539-m03)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/boot2docker.iso'/>
	I1205 20:36:10.673445  310801 main.go:141] libmachine: (ha-689539-m03)       <target dev='hdc' bus='scsi'/>
	I1205 20:36:10.673458  310801 main.go:141] libmachine: (ha-689539-m03)       <readonly/>
	I1205 20:36:10.673469  310801 main.go:141] libmachine: (ha-689539-m03)     </disk>
	I1205 20:36:10.673485  310801 main.go:141] libmachine: (ha-689539-m03)     <disk type='file' device='disk'>
	I1205 20:36:10.673499  310801 main.go:141] libmachine: (ha-689539-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 20:36:10.673516  310801 main.go:141] libmachine: (ha-689539-m03)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/ha-689539-m03.rawdisk'/>
	I1205 20:36:10.673532  310801 main.go:141] libmachine: (ha-689539-m03)       <target dev='hda' bus='virtio'/>
	I1205 20:36:10.673544  310801 main.go:141] libmachine: (ha-689539-m03)     </disk>
	I1205 20:36:10.673556  310801 main.go:141] libmachine: (ha-689539-m03)     <interface type='network'>
	I1205 20:36:10.673569  310801 main.go:141] libmachine: (ha-689539-m03)       <source network='mk-ha-689539'/>
	I1205 20:36:10.673579  310801 main.go:141] libmachine: (ha-689539-m03)       <model type='virtio'/>
	I1205 20:36:10.673592  310801 main.go:141] libmachine: (ha-689539-m03)     </interface>
	I1205 20:36:10.673600  310801 main.go:141] libmachine: (ha-689539-m03)     <interface type='network'>
	I1205 20:36:10.673612  310801 main.go:141] libmachine: (ha-689539-m03)       <source network='default'/>
	I1205 20:36:10.673625  310801 main.go:141] libmachine: (ha-689539-m03)       <model type='virtio'/>
	I1205 20:36:10.673635  310801 main.go:141] libmachine: (ha-689539-m03)     </interface>
	I1205 20:36:10.673648  310801 main.go:141] libmachine: (ha-689539-m03)     <serial type='pty'>
	I1205 20:36:10.673660  310801 main.go:141] libmachine: (ha-689539-m03)       <target port='0'/>
	I1205 20:36:10.673672  310801 main.go:141] libmachine: (ha-689539-m03)     </serial>
	I1205 20:36:10.673682  310801 main.go:141] libmachine: (ha-689539-m03)     <console type='pty'>
	I1205 20:36:10.673695  310801 main.go:141] libmachine: (ha-689539-m03)       <target type='serial' port='0'/>
	I1205 20:36:10.673711  310801 main.go:141] libmachine: (ha-689539-m03)     </console>
	I1205 20:36:10.673724  310801 main.go:141] libmachine: (ha-689539-m03)     <rng model='virtio'>
	I1205 20:36:10.673736  310801 main.go:141] libmachine: (ha-689539-m03)       <backend model='random'>/dev/random</backend>
	I1205 20:36:10.673747  310801 main.go:141] libmachine: (ha-689539-m03)     </rng>
	I1205 20:36:10.673756  310801 main.go:141] libmachine: (ha-689539-m03)     
	I1205 20:36:10.673766  310801 main.go:141] libmachine: (ha-689539-m03)     
	I1205 20:36:10.673776  310801 main.go:141] libmachine: (ha-689539-m03)   </devices>
	I1205 20:36:10.673790  310801 main.go:141] libmachine: (ha-689539-m03) </domain>
	I1205 20:36:10.673800  310801 main.go:141] libmachine: (ha-689539-m03) 
	I1205 20:36:10.681042  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:ee:34:51 in network default
	I1205 20:36:10.681639  310801 main.go:141] libmachine: (ha-689539-m03) Ensuring networks are active...
	I1205 20:36:10.681669  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:10.682561  310801 main.go:141] libmachine: (ha-689539-m03) Ensuring network default is active
	I1205 20:36:10.682898  310801 main.go:141] libmachine: (ha-689539-m03) Ensuring network mk-ha-689539 is active
	I1205 20:36:10.683183  310801 main.go:141] libmachine: (ha-689539-m03) Getting domain xml...
	I1205 20:36:10.684006  310801 main.go:141] libmachine: (ha-689539-m03) Creating domain...
	I1205 20:36:11.968725  310801 main.go:141] libmachine: (ha-689539-m03) Waiting to get IP...
	I1205 20:36:11.969610  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:11.970152  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:11.970185  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:11.970125  311584 retry.go:31] will retry after 234.218675ms: waiting for machine to come up
	I1205 20:36:12.205669  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:12.206261  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:12.206294  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:12.206205  311584 retry.go:31] will retry after 248.695417ms: waiting for machine to come up
	I1205 20:36:12.456801  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:12.457402  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:12.457438  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:12.457352  311584 retry.go:31] will retry after 446.513744ms: waiting for machine to come up
	I1205 20:36:12.906122  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:12.906634  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:12.906661  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:12.906574  311584 retry.go:31] will retry after 535.02916ms: waiting for machine to come up
	I1205 20:36:13.443469  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:13.443918  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:13.443943  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:13.443872  311584 retry.go:31] will retry after 557.418366ms: waiting for machine to come up
	I1205 20:36:14.002733  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:14.003294  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:14.003322  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:14.003249  311584 retry.go:31] will retry after 653.304587ms: waiting for machine to come up
	I1205 20:36:14.658664  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:14.659072  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:14.659104  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:14.659017  311584 retry.go:31] will retry after 755.842871ms: waiting for machine to come up
	I1205 20:36:15.416424  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:15.416833  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:15.416859  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:15.416766  311584 retry.go:31] will retry after 1.249096202s: waiting for machine to come up
	I1205 20:36:16.666996  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:16.667456  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:16.667487  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:16.667406  311584 retry.go:31] will retry after 1.829752255s: waiting for machine to come up
	I1205 20:36:18.499154  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:18.499722  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:18.499754  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:18.499656  311584 retry.go:31] will retry after 2.088301292s: waiting for machine to come up
	I1205 20:36:20.590033  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:20.590599  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:20.590952  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:20.590835  311584 retry.go:31] will retry after 2.856395806s: waiting for machine to come up
	I1205 20:36:23.448567  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:23.449151  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:23.449196  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:23.449071  311584 retry.go:31] will retry after 2.566118647s: waiting for machine to come up
	I1205 20:36:26.016596  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:26.017066  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:26.017103  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:26.017002  311584 retry.go:31] will retry after 3.311993098s: waiting for machine to come up
	I1205 20:36:29.332519  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:29.333028  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:29.333062  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:29.332969  311584 retry.go:31] will retry after 5.069674559s: waiting for machine to come up
	I1205 20:36:34.404036  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.404592  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has current primary IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.404615  310801 main.go:141] libmachine: (ha-689539-m03) Found IP for machine: 192.168.39.133
	I1205 20:36:34.404628  310801 main.go:141] libmachine: (ha-689539-m03) Reserving static IP address...
	I1205 20:36:34.405246  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find host DHCP lease matching {name: "ha-689539-m03", mac: "52:54:00:39:1e:d2", ip: "192.168.39.133"} in network mk-ha-689539
	I1205 20:36:34.488202  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Getting to WaitForSSH function...
	I1205 20:36:34.488243  310801 main.go:141] libmachine: (ha-689539-m03) Reserved static IP address: 192.168.39.133
	I1205 20:36:34.488263  310801 main.go:141] libmachine: (ha-689539-m03) Waiting for SSH to be available...
	I1205 20:36:34.491165  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.491686  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:minikube Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:34.491716  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.491906  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Using SSH client type: external
	I1205 20:36:34.491935  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa (-rw-------)
	I1205 20:36:34.491973  310801 main.go:141] libmachine: (ha-689539-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.133 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:36:34.491988  310801 main.go:141] libmachine: (ha-689539-m03) DBG | About to run SSH command:
	I1205 20:36:34.492018  310801 main.go:141] libmachine: (ha-689539-m03) DBG | exit 0
	I1205 20:36:34.613832  310801 main.go:141] libmachine: (ha-689539-m03) DBG | SSH cmd err, output: <nil>: 
	I1205 20:36:34.614085  310801 main.go:141] libmachine: (ha-689539-m03) KVM machine creation complete!
	I1205 20:36:34.614391  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetConfigRaw
	I1205 20:36:34.614932  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:34.615098  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:34.615251  310801 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 20:36:34.615261  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetState
	I1205 20:36:34.616613  310801 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 20:36:34.616630  310801 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 20:36:34.616635  310801 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 20:36:34.616641  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:34.618898  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.619343  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:34.619376  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.619553  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:34.619760  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.619916  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.620049  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:34.620212  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:34.620459  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:34.620479  310801 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 20:36:34.717073  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:36:34.717099  310801 main.go:141] libmachine: Detecting the provisioner...
	I1205 20:36:34.717108  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:34.720008  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.720375  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:34.720408  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.720627  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:34.720862  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.721027  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.721142  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:34.721315  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:34.721505  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:34.721517  310801 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 20:36:34.822906  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 20:36:34.822984  310801 main.go:141] libmachine: found compatible host: buildroot
	I1205 20:36:34.822991  310801 main.go:141] libmachine: Provisioning with buildroot...
	I1205 20:36:34.823000  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetMachineName
	I1205 20:36:34.823269  310801 buildroot.go:166] provisioning hostname "ha-689539-m03"
	I1205 20:36:34.823307  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetMachineName
	I1205 20:36:34.823547  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:34.826120  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.826479  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:34.826516  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.826688  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:34.826881  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.827029  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.827117  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:34.827324  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:34.827499  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:34.827512  310801 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-689539-m03 && echo "ha-689539-m03" | sudo tee /etc/hostname
	I1205 20:36:34.941581  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-689539-m03
	
	I1205 20:36:34.941620  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:34.944840  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.945236  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:34.945268  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.945576  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:34.945808  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.946090  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.946279  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:34.946488  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:34.946701  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:34.946720  310801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-689539-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-689539-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-689539-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:36:35.058548  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:36:35.058600  310801 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 20:36:35.058628  310801 buildroot.go:174] setting up certificates
	I1205 20:36:35.058647  310801 provision.go:84] configureAuth start
	I1205 20:36:35.058666  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetMachineName
	I1205 20:36:35.059012  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetIP
	I1205 20:36:35.062020  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.062410  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.062436  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.062601  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.064649  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.065013  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.065056  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.065157  310801 provision.go:143] copyHostCerts
	I1205 20:36:35.065216  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:36:35.065250  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 20:36:35.065260  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:36:35.065330  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 20:36:35.065453  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:36:35.065483  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 20:36:35.065487  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:36:35.065514  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 20:36:35.065573  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:36:35.065599  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 20:36:35.065606  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:36:35.065628  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 20:36:35.065689  310801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.ha-689539-m03 san=[127.0.0.1 192.168.39.133 ha-689539-m03 localhost minikube]
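The SAN list above (127.0.0.1, 192.168.39.133, ha-689539-m03, localhost, minikube) goes into the machine server certificate that is later copied to /etc/docker/server.pem on the guest. A minimal spot-check on the provisioned node, assuming openssl is available in the Buildroot image, would be:

	# On ha-689539-m03: print the Subject Alternative Names of the generated server cert
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	# Expected entries: DNS:ha-689539-m03, DNS:localhost, DNS:minikube, IP:127.0.0.1, IP:192.168.39.133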
	I1205 20:36:35.249027  310801 provision.go:177] copyRemoteCerts
	I1205 20:36:35.249088  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:36:35.249117  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.252102  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.252464  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.252504  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.252651  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.252859  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.253052  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.253206  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa Username:docker}
	I1205 20:36:35.336527  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 20:36:35.336648  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 20:36:35.364926  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 20:36:35.365010  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 20:36:35.389088  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 20:36:35.389182  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:36:35.413330  310801 provision.go:87] duration metric: took 354.660436ms to configureAuth
	I1205 20:36:35.413369  310801 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:36:35.413628  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:36:35.413732  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.416617  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.417048  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.417083  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.417297  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.417511  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.417670  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.417805  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.417979  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:35.418155  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:35.418171  310801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:36:35.630886  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:36:35.630926  310801 main.go:141] libmachine: Checking connection to Docker...
	I1205 20:36:35.630937  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetURL
	I1205 20:36:35.632212  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Using libvirt version 6000000
	I1205 20:36:35.634750  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.635203  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.635240  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.635427  310801 main.go:141] libmachine: Docker is up and running!
	I1205 20:36:35.635448  310801 main.go:141] libmachine: Reticulating splines...
	I1205 20:36:35.635459  310801 client.go:171] duration metric: took 25.402508958s to LocalClient.Create
	I1205 20:36:35.635491  310801 start.go:167] duration metric: took 25.402598488s to libmachine.API.Create "ha-689539"
	I1205 20:36:35.635506  310801 start.go:293] postStartSetup for "ha-689539-m03" (driver="kvm2")
	I1205 20:36:35.635522  310801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:36:35.635550  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:35.635824  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:36:35.635854  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.638327  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.638682  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.638711  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.638841  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.639048  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.639222  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.639398  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa Username:docker}
	I1205 20:36:35.716587  310801 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:36:35.720718  310801 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:36:35.720755  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 20:36:35.720843  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 20:36:35.720950  310801 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 20:36:35.720963  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /etc/ssl/certs/3007652.pem
	I1205 20:36:35.721055  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:36:35.730580  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:36:35.754106  310801 start.go:296] duration metric: took 118.58052ms for postStartSetup
	I1205 20:36:35.754171  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetConfigRaw
	I1205 20:36:35.754838  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetIP
	I1205 20:36:35.757466  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.757836  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.757867  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.758185  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:36:35.758409  310801 start.go:128] duration metric: took 25.547174356s to createHost
	I1205 20:36:35.758437  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.760535  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.760919  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.760950  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.761090  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.761312  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.761499  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.761662  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.761847  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:35.762082  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:35.762095  310801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:36:35.859212  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430995.835523026
	
	I1205 20:36:35.859238  310801 fix.go:216] guest clock: 1733430995.835523026
	I1205 20:36:35.859249  310801 fix.go:229] Guest: 2024-12-05 20:36:35.835523026 +0000 UTC Remote: 2024-12-05 20:36:35.758424054 +0000 UTC m=+147.726301003 (delta=77.098972ms)
	I1205 20:36:35.859274  310801 fix.go:200] guest clock delta is within tolerance: 77.098972ms
	I1205 20:36:35.859282  310801 start.go:83] releasing machines lock for "ha-689539-m03", held for 25.648163663s
	I1205 20:36:35.859307  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:35.859602  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetIP
	I1205 20:36:35.862387  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.862741  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.862765  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.864694  310801 out.go:177] * Found network options:
	I1205 20:36:35.865935  310801 out.go:177]   - NO_PROXY=192.168.39.220,192.168.39.224
	W1205 20:36:35.866955  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 20:36:35.866981  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:36:35.867029  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:35.867701  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:35.867901  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:35.868027  310801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:36:35.868079  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	W1205 20:36:35.868103  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 20:36:35.868132  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:36:35.868211  310801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:36:35.868237  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.870846  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.870889  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.871236  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.871267  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.871290  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.871306  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.871412  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.871420  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.871631  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.871634  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.871849  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.871887  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.872025  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa Username:docker}
	I1205 20:36:35.872048  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa Username:docker}
	I1205 20:36:36.107172  310801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:36:36.113768  310801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:36:36.113852  310801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:36:36.130072  310801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:36:36.130105  310801 start.go:495] detecting cgroup driver to use...
	I1205 20:36:36.130199  310801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:36:36.146210  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:36:36.161285  310801 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:36:36.161367  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:36:36.177064  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:36:36.191545  310801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:36:36.311400  310801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:36:36.466588  310801 docker.go:233] disabling docker service ...
	I1205 20:36:36.466685  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:36:36.482756  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:36:36.496706  310801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:36:36.652172  310801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:36:36.763760  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:36:36.778126  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:36:36.798464  310801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:36:36.798550  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.809701  310801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:36:36.809789  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.821480  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.833057  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.844011  310801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:36:36.855643  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.866916  310801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.884661  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
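The sed edits in this block all target /etc/crio/crio.conf.d/02-crio.conf: they pin the pause image, force the cgroupfs cgroup manager, move conmon into the pod cgroup, and open unprivileged port 0 via default_sysctls. A short check of the resulting drop-in, based only on the substitutions shown above:

	# Spot-check the keys the sed edits should have produced in the CRI-O drop-in
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected (ordering may differ):
	#   pause_image = "registry.k8s.io/pause:3.10"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",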
	I1205 20:36:36.895900  310801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:36:36.907780  310801 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:36:36.907872  310801 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:36:36.923847  310801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:36:36.935618  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:36:37.050068  310801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:36:37.145134  310801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:36:37.145210  310801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:36:37.149942  310801 start.go:563] Will wait 60s for crictl version
	I1205 20:36:37.150018  310801 ssh_runner.go:195] Run: which crictl
	I1205 20:36:37.153774  310801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:36:37.191365  310801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:36:37.191476  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:36:37.218944  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:36:37.247248  310801 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:36:37.248847  310801 out.go:177]   - env NO_PROXY=192.168.39.220
	I1205 20:36:37.250408  310801 out.go:177]   - env NO_PROXY=192.168.39.220,192.168.39.224
	I1205 20:36:37.251670  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetIP
	I1205 20:36:37.254710  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:37.255219  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:37.255255  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:37.255473  310801 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:36:37.259811  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:36:37.272313  310801 mustload.go:65] Loading cluster: ha-689539
	I1205 20:36:37.272621  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:36:37.272965  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:36:37.273029  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:36:37.288738  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35053
	I1205 20:36:37.289258  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:36:37.289795  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:36:37.289824  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:36:37.290243  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:36:37.290461  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:36:37.292309  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:36:37.292619  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:36:37.292658  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:36:37.308415  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34715
	I1205 20:36:37.308950  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:36:37.309550  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:36:37.309579  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:36:37.309955  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:36:37.310189  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:36:37.310389  310801 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539 for IP: 192.168.39.133
	I1205 20:36:37.310408  310801 certs.go:194] generating shared ca certs ...
	I1205 20:36:37.310434  310801 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:36:37.310698  310801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 20:36:37.310756  310801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 20:36:37.310770  310801 certs.go:256] generating profile certs ...
	I1205 20:36:37.310865  310801 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key
	I1205 20:36:37.310896  310801 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.5ed8c3bf
	I1205 20:36:37.310913  310801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.5ed8c3bf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220 192.168.39.224 192.168.39.133 192.168.39.254]
	I1205 20:36:37.437144  310801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.5ed8c3bf ...
	I1205 20:36:37.437188  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.5ed8c3bf: {Name:mk0c5897cd83a4093b7a3399e7e587e00b7a5bae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:36:37.437391  310801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.5ed8c3bf ...
	I1205 20:36:37.437408  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.5ed8c3bf: {Name:mk1d8d484e615bf29a9b64d40295dea265ce443e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:36:37.437485  310801 certs.go:381] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.5ed8c3bf -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt
	I1205 20:36:37.437626  310801 certs.go:385] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.5ed8c3bf -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key
	I1205 20:36:37.437756  310801 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key
	I1205 20:36:37.437772  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:36:37.437788  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:36:37.437801  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:36:37.437813  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:36:37.437826  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 20:36:37.437841  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 20:36:37.437853  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 20:36:37.437864  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 20:36:37.437944  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 20:36:37.437979  310801 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 20:36:37.437990  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:36:37.438014  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 20:36:37.438035  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:36:37.438056  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 20:36:37.438094  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:36:37.438120  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /usr/share/ca-certificates/3007652.pem
	I1205 20:36:37.438137  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:36:37.438154  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem -> /usr/share/ca-certificates/300765.pem
	I1205 20:36:37.438200  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:36:37.441695  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:36:37.442183  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:36:37.442215  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:36:37.442405  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:36:37.442622  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:36:37.442798  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:36:37.443004  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:36:37.518292  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1205 20:36:37.523367  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1205 20:36:37.534644  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1205 20:36:37.538903  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1205 20:36:37.550288  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1205 20:36:37.554639  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1205 20:36:37.564857  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1205 20:36:37.569390  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1205 20:36:37.579805  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1205 20:36:37.583826  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1205 20:36:37.594623  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1205 20:36:37.598518  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1205 20:36:37.609622  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:36:37.635232  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:36:37.659198  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:36:37.684613  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:36:37.709156  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1205 20:36:37.734432  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:36:37.759134  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:36:37.782683  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:36:37.806069  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 20:36:37.829365  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:36:37.854671  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 20:36:37.877683  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1205 20:36:37.895648  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1205 20:36:37.911843  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1205 20:36:37.928819  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1205 20:36:37.945608  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1205 20:36:37.961295  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1205 20:36:37.977148  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1205 20:36:37.993888  310801 ssh_runner.go:195] Run: openssl version
	I1205 20:36:37.999493  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 20:36:38.010566  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 20:36:38.014911  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 20:36:38.014995  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 20:36:38.021306  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:36:38.033265  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:36:38.045021  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:36:38.049577  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:36:38.049655  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:36:38.055689  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:36:38.066840  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 20:36:38.077747  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 20:36:38.082720  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 20:36:38.082788  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 20:36:38.088581  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
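The three ln -fs commands above install each CA under its OpenSSL subject-hash name, which is how TLS clients locate trust anchors in /etc/ssl/certs (b5213941.0 for the minikube CA in this run). The link name can be reproduced by hand:

	# The symlink name is the certificate's subject hash plus a ".0" suffix
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${hash}.0"   # should point at minikubeCA.pem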
	I1205 20:36:38.099228  310801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:36:38.103604  310801 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 20:36:38.103672  310801 kubeadm.go:934] updating node {m03 192.168.39.133 8443 v1.31.2 crio true true} ...
	I1205 20:36:38.103798  310801 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-689539-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.133
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
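The ExecStart override above pins the kubelet on m03 to --node-ip=192.168.39.133 and --hostname-override=ha-689539-m03; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf further down. Once the kubelet has been started (20:36:39.272056 below), a hedged way to confirm the flags took effect is:

	# Show the unit plus drop-ins and the flags of interest
	systemctl cat kubelet | grep -E 'hostname-override|node-ip'
	# Verify the running process picked them up
	ps -o args= -C kubelet | tr ' ' '\n' | grep -E 'hostname-override|node-ip'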
	I1205 20:36:38.103838  310801 kube-vip.go:115] generating kube-vip config ...
	I1205 20:36:38.103889  310801 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 20:36:38.119642  310801 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 20:36:38.119740  310801 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
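This manifest is copied to /etc/kubernetes/manifests/kube-vip.yaml below (20:36:39.105414), so the kubelet runs kube-vip as a static pod that advertises the control-plane VIP 192.168.39.254:8443 and load-balances across the API servers. A rough check once the kubelet is up, assuming the pod comes up cleanly:

	# The static pod should show up in the local runtime
	sudo crictl ps --name kube-vip
	# The VIP should answer on 8443 (an unauthenticated request may get 401/403, which still proves reachability)
	curl -k https://192.168.39.254:8443/healthz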
	I1205 20:36:38.119812  310801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:36:38.130177  310801 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1205 20:36:38.130245  310801 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1205 20:36:38.140746  310801 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1205 20:36:38.140746  310801 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1205 20:36:38.140783  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 20:36:38.140794  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 20:36:38.140777  310801 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1205 20:36:38.140857  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 20:36:38.140859  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 20:36:38.140888  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:36:38.158074  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 20:36:38.158135  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1205 20:36:38.158086  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1205 20:36:38.158177  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1205 20:36:38.158206  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1205 20:36:38.158247  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 20:36:38.186188  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1205 20:36:38.186252  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
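The kubeadm, kubectl and kubelet binaries are fetched against the published .sha256 files noted above rather than re-cached. The same verification can be repeated by hand against the cache directory used in this run (a sketch, assuming network access to dl.k8s.io):

	cd /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2
	curl -sSL https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -o /tmp/kubelet.sha256
	# sha256sum expects "<hash>  <file>"; the upstream file contains only the hash
	echo "$(cat /tmp/kubelet.sha256)  kubelet" | sha256sum --check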
	I1205 20:36:39.060124  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1205 20:36:39.071107  310801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1205 20:36:39.088307  310801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:36:39.105414  310801 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 20:36:39.123515  310801 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 20:36:39.128382  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:36:39.141817  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:36:39.272056  310801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:36:39.288864  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:36:39.289220  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:36:39.289280  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:36:39.306323  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I1205 20:36:39.306810  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:36:39.307385  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:36:39.307405  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:36:39.307730  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:36:39.308000  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:36:39.308176  310801 start.go:317] joinCluster: &{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:36:39.308320  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1205 20:36:39.308347  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:36:39.311767  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:36:39.312246  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:36:39.312274  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:36:39.312449  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:36:39.312636  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:36:39.312767  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:36:39.312941  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
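The ssh client above is assembled from the driver-reported IP, port, user and key path. For reference only (this is not minikube's sshutil code; the host, user and key path below are simply the values captured in the log), an equivalent minimal client with golang.org/x/crypto/ssh could look like:

package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Values taken from the log line above; environment-specific.
	keyPath := "/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa"
	addr := "192.168.39.220:22"
	user := "docker"

	key, err := os.ReadFile(keyPath)
	if err != nil {
		log.Fatalf("read key: %v", err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatalf("parse key: %v", err)
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		log.Fatalf("dial: %v", err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatalf("session: %v", err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput("uname -a")
	if err != nil {
		log.Fatalf("run: %v", err)
	}
	fmt.Print(string(out))
}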
	I1205 20:36:39.465515  310801 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:36:39.465587  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1ecy7b.k9yq24j2shqxopt1 --discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-689539-m03 --control-plane --apiserver-advertise-address=192.168.39.133 --apiserver-bind-port=8443"
	I1205 20:37:01.441014  310801 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1ecy7b.k9yq24j2shqxopt1 --discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-689539-m03 --control-plane --apiserver-advertise-address=192.168.39.133 --apiserver-bind-port=8443": (21.975379722s)
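The join above is a two-step kubeadm flow: "kubeadm token create --print-join-command --ttl=0" is run on an existing control-plane node, and the printed command is then executed on the new node with "--control-plane", an advertise address and a bind port appended. A hedged local sketch of that flow (run via os/exec purely for illustration; in the log these commands are executed over SSH inside the VMs):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Step 1: on an existing control-plane node, print a join command with a non-expiring token.
	out, err := exec.Command("kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
	if err != nil {
		log.Fatalf("token create: %v", err)
	}
	joinCmd := strings.TrimSpace(string(out))

	// Step 2: on the joining node, append the control-plane flags.
	// The address and port mirror the log above and are environment-specific.
	full := joinCmd + " --control-plane --apiserver-advertise-address=192.168.39.133 --apiserver-bind-port=8443"
	fmt.Println("command to run on the new node:", full)

	// Executing it there would look like:
	//   exec.Command("bash", "-c", "sudo "+full).Run()
}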
	I1205 20:37:01.441134  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1205 20:37:02.017063  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-689539-m03 minikube.k8s.io/updated_at=2024_12_05T20_37_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=ha-689539 minikube.k8s.io/primary=false
	I1205 20:37:02.122818  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-689539-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1205 20:37:02.233408  310801 start.go:319] duration metric: took 22.92521337s to joinCluster
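Once the join returns, the new node is labelled with the minikube metadata and its control-plane NoSchedule taint is removed so it can also schedule workloads (the kubectl label/taint commands above). The same node mutation, sketched with client-go (kubeconfig path is an assumption for illustration; node name is the one from the log):

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20053-293485/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()

	node, err := cs.CoreV1().Nodes().Get(ctx, "ha-689539-m03", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}

	// Equivalent of `kubectl label --overwrite nodes ... minikube.k8s.io/primary=false`.
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	node.Labels["minikube.k8s.io/primary"] = "false"

	// Equivalent of `kubectl taint nodes ... node-role.kubernetes.io/control-plane:NoSchedule-`.
	kept := node.Spec.Taints[:0]
	for _, t := range node.Spec.Taints {
		if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
			continue
		}
		kept = append(kept, t)
	}
	node.Spec.Taints = kept

	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}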
	I1205 20:37:02.233514  310801 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:37:02.233929  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:37:02.235271  310801 out.go:177] * Verifying Kubernetes components...
	I1205 20:37:02.236630  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:37:02.508423  310801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:37:02.527064  310801 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:37:02.527473  310801 kapi.go:59] client config for ha-689539: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt", KeyFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key", CAFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1205 20:37:02.527594  310801 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.220:8443
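The client config above is loaded from the test's kubeconfig, and the kubeconfig's HA VIP host (192.168.39.254) is then replaced with a concrete control-plane endpoint, which is what the "Overriding stale ClientConfig host" warning records. A minimal sketch of that pattern (paths and addresses are the ones from the log, reused here only for illustration):

package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	kubeconfig := "/home/jenkins/minikube-integration/20053-293485/kubeconfig" // from the log above
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}

	// Point the client at a known-good control-plane endpoint instead of the
	// kubeconfig's stale VIP host, mirroring the override logged above.
	cfg.Host = "https://192.168.39.220:8443"

	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	_ = cs // this client drives the node and pod checks that follow
}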
	I1205 20:37:02.527913  310801 node_ready.go:35] waiting up to 6m0s for node "ha-689539-m03" to be "Ready" ...
	I1205 20:37:02.528026  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:02.528040  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:02.528051  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:02.528056  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:02.557537  310801 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I1205 20:37:03.028186  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:03.028214  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:03.028223  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:03.028228  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:03.031783  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:03.528844  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:03.528876  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:03.528889  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:03.528897  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:03.532449  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:04.028344  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:04.028374  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:04.028385  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:04.028391  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:04.031602  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:04.528319  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:04.528352  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:04.528375  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:04.528382  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:04.532891  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:04.534060  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:05.028293  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:05.028328  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:05.028339  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:05.028344  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:05.032338  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:05.529271  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:05.529311  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:05.529323  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:05.529330  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:05.533411  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:06.028510  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:06.028536  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:06.028545  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:06.028550  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:06.032362  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:06.529188  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:06.529215  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:06.529224  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:06.529229  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:06.533150  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:07.029082  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:07.029108  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:07.029117  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:07.029120  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:07.033089  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:07.033768  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:07.528440  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:07.528471  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:07.528481  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:07.528485  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:07.531953  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:08.028337  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:08.028382  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:08.028395  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:08.028399  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:08.031906  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:08.528836  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:08.528864  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:08.528876  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:08.528881  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:08.532443  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:09.028243  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:09.028270  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:09.028278  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:09.028286  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:09.031717  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:09.528911  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:09.528939  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:09.528948  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:09.528953  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:09.532309  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:09.532990  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:10.028349  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:10.028377  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:10.028386  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:10.028390  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:10.031930  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:10.528611  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:10.528635  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:10.528645  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:10.528650  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:10.532023  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:11.028888  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:11.028914  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:11.028923  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:11.028928  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:11.032482  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:11.528496  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:11.528521  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:11.528530  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:11.528534  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:11.532719  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:11.533217  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:12.028518  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:12.028550  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:12.028559  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:12.028562  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:12.031616  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:12.528837  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:12.528864  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:12.528873  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:12.528876  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:12.532925  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:13.028348  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:13.028374  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:13.028382  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:13.028385  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:13.031413  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:13.528247  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:13.528272  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:13.528282  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:13.528289  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:13.531837  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:14.028958  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:14.028983  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:14.028991  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:14.028994  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:14.032387  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:14.032980  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:14.528243  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:14.528268  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:14.528276  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:14.528281  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:14.533135  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:15.029156  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:15.029181  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:15.029190  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:15.029194  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:15.032772  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:15.528703  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:15.528727  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:15.528736  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:15.528740  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:15.532084  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:16.029136  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:16.029163  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:16.029172  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:16.029177  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:16.032419  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:16.033160  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:16.528509  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:16.528535  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:16.528546  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:16.528553  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:16.532163  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:17.028228  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:17.028256  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:17.028265  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:17.028270  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:17.031611  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:17.528262  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:17.528285  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:17.528294  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:17.528298  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:17.532186  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:18.028484  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:18.028590  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:18.028610  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:18.028619  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:18.032661  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:18.033298  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:18.528576  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:18.528603  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:18.528612  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:18.528622  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:18.531605  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.028544  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:19.028570  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.028579  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.028583  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.031945  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:19.528716  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:19.528741  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.528752  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.528758  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.532114  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:19.532722  310801 node_ready.go:49] node "ha-689539-m03" has status "Ready":"True"
	I1205 20:37:19.532746  310801 node_ready.go:38] duration metric: took 17.004806597s for node "ha-689539-m03" to be "Ready" ...
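The roughly 17 seconds of repeated GETs above is a simple poll loop: the Node object is re-read about every 500 ms until its Ready condition turns True, bounded by the 6-minute wait. A compact client-go version of that loop (clientset construction and kubeconfig path as in the earlier sketch; interval and timeout match the values in the log):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the NodeReady condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20053-293485/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for {
		n, err := cs.CoreV1().Nodes().Get(ctx, "ha-689539-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		select {
		case <-ctx.Done():
			log.Fatal("timed out waiting for node to be Ready")
		case <-time.After(500 * time.Millisecond):
		}
	}
}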
	I1205 20:37:19.532759  310801 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:37:19.532848  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:37:19.532862  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.532873  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.532877  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.538433  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:37:19.545193  310801 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.545310  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4ln9l
	I1205 20:37:19.545322  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.545335  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.545343  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.548548  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:19.549181  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:19.549197  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.549208  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.549214  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.551745  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.552315  310801 pod_ready.go:93] pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:19.552336  310801 pod_ready.go:82] duration metric: took 7.114081ms for pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.552347  310801 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.552426  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6qhhf
	I1205 20:37:19.552436  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.552443  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.552449  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.555044  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.555688  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:19.555703  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.555714  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.555719  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.558507  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.558964  310801 pod_ready.go:93] pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:19.558984  310801 pod_ready.go:82] duration metric: took 6.630508ms for pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.558996  310801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.559064  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539
	I1205 20:37:19.559075  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.559086  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.559093  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.561702  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.562346  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:19.562362  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.562373  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.562379  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.564859  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.565270  310801 pod_ready.go:93] pod "etcd-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:19.565289  310801 pod_ready.go:82] duration metric: took 6.285995ms for pod "etcd-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.565301  310801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.565364  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539-m02
	I1205 20:37:19.565376  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.565386  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.565394  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.567843  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.568351  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:19.568369  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.568381  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.568386  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.570730  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.571216  310801 pod_ready.go:93] pod "etcd-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:19.571233  310801 pod_ready.go:82] duration metric: took 5.925226ms for pod "etcd-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.571242  310801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.729689  310801 request.go:632] Waited for 158.375356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539-m03
	I1205 20:37:19.729775  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539-m03
	I1205 20:37:19.729781  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.729791  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.729798  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.733549  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
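The "Waited ... due to client-side throttling" messages come from client-go's default request rate limiter (roughly QPS 5, burst 10), which kicks in once the per-pod and per-node checks fire many requests back to back; the delay is in the test client, not the apiserver. If the throttling itself were a concern, the limits can be raised on the rest.Config before building the clientset, e.g.:

package main

import (
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20053-293485/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}

	// Raise the client-side rate limits; their defaults are what produce the
	// "client-side throttling" waits seen in this log.
	cfg.QPS = 50
	cfg.Burst = 100

	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		log.Fatal(err)
	}
}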
	I1205 20:37:19.929796  310801 request.go:632] Waited for 195.378991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:19.929883  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:19.929889  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.929915  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.929920  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.933398  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:19.934088  310801 pod_ready.go:93] pod "etcd-ha-689539-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:19.934113  310801 pod_ready.go:82] duration metric: took 362.864968ms for pod "etcd-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.934133  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:20.129093  310801 request.go:632] Waited for 194.866664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539
	I1205 20:37:20.129174  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539
	I1205 20:37:20.129180  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:20.129188  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:20.129192  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:20.132632  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:20.329356  310801 request.go:632] Waited for 195.935231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:20.329441  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:20.329451  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:20.329463  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:20.329476  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:20.333292  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:20.333939  310801 pod_ready.go:93] pod "kube-apiserver-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:20.333972  310801 pod_ready.go:82] duration metric: took 399.826342ms for pod "kube-apiserver-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:20.333988  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:20.529058  310801 request.go:632] Waited for 194.978446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m02
	I1205 20:37:20.529147  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m02
	I1205 20:37:20.529166  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:20.529197  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:20.529204  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:20.532832  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:20.729074  310801 request.go:632] Waited for 195.37241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:20.729139  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:20.729144  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:20.729153  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:20.729156  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:20.733037  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:20.733831  310801 pod_ready.go:93] pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:20.733861  310801 pod_ready.go:82] duration metric: took 399.862982ms for pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:20.733880  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:20.928790  310801 request.go:632] Waited for 194.758856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m03
	I1205 20:37:20.928868  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m03
	I1205 20:37:20.928876  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:20.928884  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:20.928894  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:20.931768  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:21.128920  310801 request.go:632] Waited for 196.30741ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:21.129013  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:21.129018  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:21.129026  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:21.129030  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:21.132989  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:21.133733  310801 pod_ready.go:93] pod "kube-apiserver-ha-689539-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:21.133764  310801 pod_ready.go:82] duration metric: took 399.87672ms for pod "kube-apiserver-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:21.133777  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:21.329719  310801 request.go:632] Waited for 195.840899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539
	I1205 20:37:21.329822  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539
	I1205 20:37:21.329829  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:21.329840  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:21.329848  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:21.335472  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:37:21.529593  310801 request.go:632] Waited for 193.3652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:21.529688  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:21.529700  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:21.529710  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:21.529721  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:21.533118  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:21.533743  310801 pod_ready.go:93] pod "kube-controller-manager-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:21.533773  310801 pod_ready.go:82] duration metric: took 399.989891ms for pod "kube-controller-manager-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:21.533788  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:21.729770  310801 request.go:632] Waited for 195.887392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m02
	I1205 20:37:21.729855  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m02
	I1205 20:37:21.729863  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:21.729871  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:21.729877  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:21.733541  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:21.929705  310801 request.go:632] Waited for 195.397002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:21.929774  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:21.929779  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:21.929787  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:21.929792  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:21.933945  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:21.935117  310801 pod_ready.go:93] pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:21.935147  310801 pod_ready.go:82] duration metric: took 401.346008ms for pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:21.935163  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:22.129158  310801 request.go:632] Waited for 193.90126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m03
	I1205 20:37:22.129263  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m03
	I1205 20:37:22.129281  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:22.129291  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:22.129295  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:22.132774  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:22.329309  310801 request.go:632] Waited for 195.820597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:22.329371  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:22.329397  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:22.329412  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:22.329417  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:22.332841  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:22.336218  310801 pod_ready.go:93] pod "kube-controller-manager-ha-689539-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:22.336243  310801 pod_ready.go:82] duration metric: took 401.071031ms for pod "kube-controller-manager-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:22.336259  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9tslx" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:22.528770  310801 request.go:632] Waited for 192.411741ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tslx
	I1205 20:37:22.528833  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tslx
	I1205 20:37:22.528838  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:22.528846  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:22.528850  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:22.531900  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:22.729073  310801 request.go:632] Waited for 196.313572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:22.729186  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:22.729196  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:22.729206  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:22.729212  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:22.732421  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:22.733074  310801 pod_ready.go:93] pod "kube-proxy-9tslx" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:22.733099  310801 pod_ready.go:82] duration metric: took 396.833211ms for pod "kube-proxy-9tslx" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:22.733111  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dktwc" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:22.929342  310801 request.go:632] Waited for 196.122694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dktwc
	I1205 20:37:22.929410  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dktwc
	I1205 20:37:22.929416  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:22.929425  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:22.929430  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:22.932878  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:23.129758  310801 request.go:632] Waited for 196.113609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:23.129841  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:23.129849  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:23.129861  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:23.129874  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:23.133246  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:23.133786  310801 pod_ready.go:93] pod "kube-proxy-dktwc" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:23.133805  310801 pod_ready.go:82] duration metric: took 400.688784ms for pod "kube-proxy-dktwc" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:23.133815  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x2grl" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:23.329685  310801 request.go:632] Waited for 195.763713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2grl
	I1205 20:37:23.329769  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2grl
	I1205 20:37:23.329779  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:23.329788  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:23.329795  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:23.333599  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:23.528890  310801 request.go:632] Waited for 194.302329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:23.528951  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:23.528955  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:23.528966  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:23.528973  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:23.533840  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:23.534667  310801 pod_ready.go:93] pod "kube-proxy-x2grl" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:23.534691  310801 pod_ready.go:82] duration metric: took 400.868432ms for pod "kube-proxy-x2grl" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:23.534705  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:23.728815  310801 request.go:632] Waited for 194.018306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539
	I1205 20:37:23.728883  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539
	I1205 20:37:23.728888  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:23.728896  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:23.728900  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:23.732452  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:23.929580  310801 request.go:632] Waited for 196.394135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:23.929653  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:23.929659  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:23.929667  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:23.929672  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:23.933364  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:23.934147  310801 pod_ready.go:93] pod "kube-scheduler-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:23.934174  310801 pod_ready.go:82] duration metric: took 399.459723ms for pod "kube-scheduler-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:23.934191  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:24.129685  310801 request.go:632] Waited for 195.380858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m02
	I1205 20:37:24.129776  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m02
	I1205 20:37:24.129789  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.129800  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.129811  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.133305  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:24.329438  310801 request.go:632] Waited for 195.320628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:24.329517  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:24.329525  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.329544  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.329550  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.333177  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:24.333763  310801 pod_ready.go:93] pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:24.333790  310801 pod_ready.go:82] duration metric: took 399.589908ms for pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:24.333806  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:24.528866  310801 request.go:632] Waited for 194.951078ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m03
	I1205 20:37:24.528969  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m03
	I1205 20:37:24.528982  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.528997  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.529004  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.532632  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:24.729734  310801 request.go:632] Waited for 196.398947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:24.729824  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:24.729835  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.729847  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.729855  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.733450  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:24.734057  310801 pod_ready.go:93] pod "kube-scheduler-ha-689539-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:24.734085  310801 pod_ready.go:82] duration metric: took 400.271075ms for pod "kube-scheduler-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:24.734104  310801 pod_ready.go:39] duration metric: took 5.201330389s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
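Each wait above fetches the named kube-system pod (and then its node) and treats the pod as healthy once its PodReady condition is True; the label selectors listed in the log identify the system-critical components. A reduced sketch of that check with client-go (same assumed kubeconfig as before):

package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the PodReady condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20053-293485/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// One selector per system-critical component checked in the log above.
	selectors := []string{
		"k8s-app=kube-dns",
		"component=etcd",
		"component=kube-apiserver",
		"component=kube-controller-manager",
		"k8s-app=kube-proxy",
		"component=kube-scheduler",
	}
	for _, sel := range selectors {
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%-45s ready=%v\n", p.Name, podReady(&p))
		}
	}
}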
	I1205 20:37:24.734128  310801 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:37:24.734202  310801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:37:24.752010  310801 api_server.go:72] duration metric: took 22.518451158s to wait for apiserver process to appear ...
	I1205 20:37:24.752054  310801 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:37:24.752086  310801 api_server.go:253] Checking apiserver healthz at https://192.168.39.220:8443/healthz ...
	I1205 20:37:24.756435  310801 api_server.go:279] https://192.168.39.220:8443/healthz returned 200:
	ok
	I1205 20:37:24.756538  310801 round_trippers.go:463] GET https://192.168.39.220:8443/version
	I1205 20:37:24.756551  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.756561  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.756569  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.757464  310801 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1205 20:37:24.757533  310801 api_server.go:141] control plane version: v1.31.2
	I1205 20:37:24.757548  310801 api_server.go:131] duration metric: took 5.486922ms to wait for apiserver health ...
	I1205 20:37:24.757559  310801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:37:24.928965  310801 request.go:632] Waited for 171.296323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:37:24.929035  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:37:24.929040  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.929049  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.929054  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.935151  310801 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 20:37:24.941691  310801 system_pods.go:59] 24 kube-system pods found
	I1205 20:37:24.941733  310801 system_pods.go:61] "coredns-7c65d6cfc9-4ln9l" [f86a233b-c3f8-416b-ac76-f18dac2a1a2c] Running
	I1205 20:37:24.941739  310801 system_pods.go:61] "coredns-7c65d6cfc9-6qhhf" [4ffff988-65eb-4585-8ce4-de4df28c6b82] Running
	I1205 20:37:24.941742  310801 system_pods.go:61] "etcd-ha-689539" [f8de63bf-a7cf-431d-bd57-ec91b43c6ce3] Running
	I1205 20:37:24.941746  310801 system_pods.go:61] "etcd-ha-689539-m02" [a0336d41-b57f-414b-aa98-2540bdde7ca0] Running
	I1205 20:37:24.941752  310801 system_pods.go:61] "etcd-ha-689539-m03" [5f491cae-394b-445a-9c1a-f4c144debab9] Running
	I1205 20:37:24.941756  310801 system_pods.go:61] "kindnet-62qw6" [9f0039aa-d5e2-49b9-adb4-ad93c96d22f0] Running
	I1205 20:37:24.941759  310801 system_pods.go:61] "kindnet-8kgs2" [d268fa7f-9d0f-400e-88ff-4acc47d4b6a0] Running
	I1205 20:37:24.941763  310801 system_pods.go:61] "kindnet-b7bf2" [ea96240c-48bf-4f92-b12c-f8e623a59784] Running
	I1205 20:37:24.941766  310801 system_pods.go:61] "kube-apiserver-ha-689539" [ecbcba0b-10ce-4bd6-84f6-8b46c3d99ad6] Running
	I1205 20:37:24.941770  310801 system_pods.go:61] "kube-apiserver-ha-689539-m02" [0c0d9613-c605-4e61-b778-c5aefa5919e9] Running
	I1205 20:37:24.941815  310801 system_pods.go:61] "kube-apiserver-ha-689539-m03" [35037a19-9a1e-4ccb-aeb6-bd098910d94d] Running
	I1205 20:37:24.941833  310801 system_pods.go:61] "kube-controller-manager-ha-689539" [859c6551-f504-4093-a730-2ba8f127e3e7] Running
	I1205 20:37:24.941841  310801 system_pods.go:61] "kube-controller-manager-ha-689539-m02" [0b119866-007c-4c4e-abfa-a38405b85cc9] Running
	I1205 20:37:24.941847  310801 system_pods.go:61] "kube-controller-manager-ha-689539-m03" [cc37de8a-b988-43a4-9dbe-18dd127bc38b] Running
	I1205 20:37:24.941854  310801 system_pods.go:61] "kube-proxy-9tslx" [3d107dc4-2d8c-4e0d-aafc-5229161537df] Running
	I1205 20:37:24.941860  310801 system_pods.go:61] "kube-proxy-dktwc" [5facc855-07f1-46f3-9862-a8c6ac01897c] Running
	I1205 20:37:24.941869  310801 system_pods.go:61] "kube-proxy-x2grl" [20dd0c16-858c-4d07-8305-ffedb52a4ee1] Running
	I1205 20:37:24.941875  310801 system_pods.go:61] "kube-scheduler-ha-689539" [2ba99954-c00c-4fa6-af5d-6d4725fa051a] Running
	I1205 20:37:24.941883  310801 system_pods.go:61] "kube-scheduler-ha-689539-m02" [d1ad2b21-b52c-47dd-ab09-2368ffeb3c7e] Running
	I1205 20:37:24.941889  310801 system_pods.go:61] "kube-scheduler-ha-689539-m03" [fc913aa4-561d-4466-b7c3-acd3d23ffa1a] Running
	I1205 20:37:24.941915  310801 system_pods.go:61] "kube-vip-ha-689539" [345f79e6-90ea-47f8-9e7f-c461a1143ba0] Running
	I1205 20:37:24.941922  310801 system_pods.go:61] "kube-vip-ha-689539-m02" [265c4a3f-0e44-43fd-bcee-35513e8e2525] Running
	I1205 20:37:24.941930  310801 system_pods.go:61] "kube-vip-ha-689539-m03" [c37018e8-e3e3-4c9e-aa57-64571b08be92] Running
	I1205 20:37:24.941939  310801 system_pods.go:61] "storage-provisioner" [e2a03e66-0718-48a3-9658-f70118ce6cae] Running
	I1205 20:37:24.941947  310801 system_pods.go:74] duration metric: took 184.37937ms to wait for pod list to return data ...
	I1205 20:37:24.941962  310801 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:37:25.129425  310801 request.go:632] Waited for 187.3488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:37:25.129501  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:37:25.129507  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:25.129515  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:25.129519  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:25.133730  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:25.133919  310801 default_sa.go:45] found service account: "default"
	I1205 20:37:25.133941  310801 default_sa.go:55] duration metric: took 191.967731ms for default service account to be created ...
	I1205 20:37:25.133958  310801 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:37:25.329286  310801 request.go:632] Waited for 195.223367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:37:25.329372  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:37:25.329380  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:25.329392  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:25.329406  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:25.335635  310801 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 20:37:25.341932  310801 system_pods.go:86] 24 kube-system pods found
	I1205 20:37:25.341974  310801 system_pods.go:89] "coredns-7c65d6cfc9-4ln9l" [f86a233b-c3f8-416b-ac76-f18dac2a1a2c] Running
	I1205 20:37:25.341980  310801 system_pods.go:89] "coredns-7c65d6cfc9-6qhhf" [4ffff988-65eb-4585-8ce4-de4df28c6b82] Running
	I1205 20:37:25.341986  310801 system_pods.go:89] "etcd-ha-689539" [f8de63bf-a7cf-431d-bd57-ec91b43c6ce3] Running
	I1205 20:37:25.341990  310801 system_pods.go:89] "etcd-ha-689539-m02" [a0336d41-b57f-414b-aa98-2540bdde7ca0] Running
	I1205 20:37:25.341993  310801 system_pods.go:89] "etcd-ha-689539-m03" [5f491cae-394b-445a-9c1a-f4c144debab9] Running
	I1205 20:37:25.341996  310801 system_pods.go:89] "kindnet-62qw6" [9f0039aa-d5e2-49b9-adb4-ad93c96d22f0] Running
	I1205 20:37:25.342000  310801 system_pods.go:89] "kindnet-8kgs2" [d268fa7f-9d0f-400e-88ff-4acc47d4b6a0] Running
	I1205 20:37:25.342003  310801 system_pods.go:89] "kindnet-b7bf2" [ea96240c-48bf-4f92-b12c-f8e623a59784] Running
	I1205 20:37:25.342008  310801 system_pods.go:89] "kube-apiserver-ha-689539" [ecbcba0b-10ce-4bd6-84f6-8b46c3d99ad6] Running
	I1205 20:37:25.342011  310801 system_pods.go:89] "kube-apiserver-ha-689539-m02" [0c0d9613-c605-4e61-b778-c5aefa5919e9] Running
	I1205 20:37:25.342015  310801 system_pods.go:89] "kube-apiserver-ha-689539-m03" [35037a19-9a1e-4ccb-aeb6-bd098910d94d] Running
	I1205 20:37:25.342018  310801 system_pods.go:89] "kube-controller-manager-ha-689539" [859c6551-f504-4093-a730-2ba8f127e3e7] Running
	I1205 20:37:25.342022  310801 system_pods.go:89] "kube-controller-manager-ha-689539-m02" [0b119866-007c-4c4e-abfa-a38405b85cc9] Running
	I1205 20:37:25.342025  310801 system_pods.go:89] "kube-controller-manager-ha-689539-m03" [cc37de8a-b988-43a4-9dbe-18dd127bc38b] Running
	I1205 20:37:25.342029  310801 system_pods.go:89] "kube-proxy-9tslx" [3d107dc4-2d8c-4e0d-aafc-5229161537df] Running
	I1205 20:37:25.342035  310801 system_pods.go:89] "kube-proxy-dktwc" [5facc855-07f1-46f3-9862-a8c6ac01897c] Running
	I1205 20:37:25.342039  310801 system_pods.go:89] "kube-proxy-x2grl" [20dd0c16-858c-4d07-8305-ffedb52a4ee1] Running
	I1205 20:37:25.342043  310801 system_pods.go:89] "kube-scheduler-ha-689539" [2ba99954-c00c-4fa6-af5d-6d4725fa051a] Running
	I1205 20:37:25.342047  310801 system_pods.go:89] "kube-scheduler-ha-689539-m02" [d1ad2b21-b52c-47dd-ab09-2368ffeb3c7e] Running
	I1205 20:37:25.342053  310801 system_pods.go:89] "kube-scheduler-ha-689539-m03" [fc913aa4-561d-4466-b7c3-acd3d23ffa1a] Running
	I1205 20:37:25.342056  310801 system_pods.go:89] "kube-vip-ha-689539" [345f79e6-90ea-47f8-9e7f-c461a1143ba0] Running
	I1205 20:37:25.342059  310801 system_pods.go:89] "kube-vip-ha-689539-m02" [265c4a3f-0e44-43fd-bcee-35513e8e2525] Running
	I1205 20:37:25.342063  310801 system_pods.go:89] "kube-vip-ha-689539-m03" [c37018e8-e3e3-4c9e-aa57-64571b08be92] Running
	I1205 20:37:25.342067  310801 system_pods.go:89] "storage-provisioner" [e2a03e66-0718-48a3-9658-f70118ce6cae] Running
	I1205 20:37:25.342077  310801 system_pods.go:126] duration metric: took 208.11212ms to wait for k8s-apps to be running ...
	I1205 20:37:25.342087  310801 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:37:25.342141  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:37:25.359925  310801 system_svc.go:56] duration metric: took 17.820163ms WaitForService to wait for kubelet
	I1205 20:37:25.359969  310801 kubeadm.go:582] duration metric: took 23.126420152s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:37:25.359998  310801 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:37:25.529464  310801 request.go:632] Waited for 169.34708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes
	I1205 20:37:25.529531  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes
	I1205 20:37:25.529543  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:25.529553  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:25.529558  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:25.534297  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:25.535249  310801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:37:25.535281  310801 node_conditions.go:123] node cpu capacity is 2
	I1205 20:37:25.535294  310801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:37:25.535298  310801 node_conditions.go:123] node cpu capacity is 2
	I1205 20:37:25.535302  310801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:37:25.535306  310801 node_conditions.go:123] node cpu capacity is 2
	I1205 20:37:25.535318  310801 node_conditions.go:105] duration metric: took 175.313275ms to run NodePressure ...
	I1205 20:37:25.535339  310801 start.go:241] waiting for startup goroutines ...
	I1205 20:37:25.535367  310801 start.go:255] writing updated cluster config ...
	I1205 20:37:25.535725  310801 ssh_runner.go:195] Run: rm -f paused
	I1205 20:37:25.590118  310801 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:37:25.592310  310801 out.go:177] * Done! kubectl is now configured to use "ha-689539" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.322135199Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431272322112276,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6c3361df-e46c-437f-bfab-d730f9d7531f name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.322677467Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b64ab095-f40c-4c90-95d6-ff6e146717d7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.322746934Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b64ab095-f40c-4c90-95d6-ff6e146717d7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.322979713Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77e0f8ba49070d29bec8e5d622dd7ab13e23f105aaab0de1a5a92c01e16ed731,PodSandboxId:2a35c5864db38de4db2df9661fc907cd58533506ed2900ff55721ee9ef7e8073,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733431049357327660,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qjqvr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30f51118-fa9b-418f-a3a5-02a74107c7de,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05a6cfcd7e9ee9f891eb88ed5524553b60ff7eea407bf604b4bb647571d2d6bc,PodSandboxId:984c3b3f8fe032def0136810febfe8341f9285ab30c3ce2d6df35ec561964918,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910896086688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4ln9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f86a233b-c3f8-416b-ac76-f18dac2a1a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e8c78df0a6db76a9459104d39c6ef10926413179a5c4d311a9731258ab0f02,PodSandboxId:d7a154f9d8020a9378296ea0b16287d3fd54fb83d94bd93df469f8808d3670fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430910806734926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: e2a03e66-0718-48a3-9658-f70118ce6cae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6007ba446b77339e212a73381a8f38a179f608f925ff7a5ea1b5d82ae33932a,PodSandboxId:a344cd0e9a251c2b865c2838b5e161875e6d61340c124e5e6ddd88fdb8512dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910843663896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qhhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffff988-65
eb-4585-8ce4-de4df28c6b82,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0809642e9449ba90a816abcee2bd42a09b6b67ed76a448e2d048ccaf59218f61,PodSandboxId:faeac762b16891707c284f00eddfc16a831b7524637e5dbbc933c30cd8b2fe8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733430899010755558,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-62qw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0039aa-d5e2-49b9-adb4-ad93c96d22f0,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a16a5003f86348e862f9550da656af6282a73ec42e356d33277fd89aaf930df,PodSandboxId:6bc6d79587a62ca21788fe4de52bc6e9a4f3255de91b1f48365e7bc08408cac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430894
348055011,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tslx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d107dc4-2d8c-4e0d-aafc-5229161537df,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4431afbd69d99866cc27a419bb59862ef9f5cfdbb9bd9cd5bc4fc820eba9a01b,PodSandboxId:ae658c6069b4418ff55871310f01c6a0b5b0fe6e016403e3ff64bb02e0ac6a27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173343088582
7328958,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d8d33a00a36d98ae4f02477c2f0ef8f,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9238618cdfeaa8f5cda3dadc37d1e8157c6e781e2f421e149ea764a7138e42,PodSandboxId:110f95e5235dfc7dbce02b5aa1a8191d469ee5d3abffc5bfebf7a11f52ae34be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430883266472620,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3b0ba2fc46021faad87f06edada7a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2033f56968a9f0c2e15d272c7bf15a278a24d2f148a58cc7194c50096b822668,PodSandboxId:a6058ddd3ee58967eb32bd94a306e465b678afcb374ea3f93649506453556476,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430883263419187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9de31551106f5b54c143b52a0ba8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2211f15ae3cd3d69be3df3528c3d849f2b50615ce67935cdb97567a8a5fe19,PodSandboxId:f650305b876ca41a574dc76685713fd76500b7b3c5f17dbc66cdcd85cde99e34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430883237990702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91307b238b7c07f706a4534ff984ab88,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a056592a0f933a18207a4ce68694f30ffbd9c7502e8e1a5717c67850246dfb2,PodSandboxId:6d5d1a132984432f53f03c63a07dbd8083fa259a41160af40e8f0202f47d21ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430883178338000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf9467cd4c8887ece77367c75de1e85,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b64ab095-f40c-4c90-95d6-ff6e146717d7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.369173695Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=2e5ce21b-44d0-4dd7-ae4f-eee2b69f02bb name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.369325820Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=2e5ce21b-44d0-4dd7-ae4f-eee2b69f02bb name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.370580493Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1fc1c698-09a9-4754-92ee-9a431f6f470c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.371029077Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431272371002954,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1fc1c698-09a9-4754-92ee-9a431f6f470c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.371527892Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e12d4ff1-7a50-4a5e-9321-e991997a8b6c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.371604343Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e12d4ff1-7a50-4a5e-9321-e991997a8b6c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.371921366Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77e0f8ba49070d29bec8e5d622dd7ab13e23f105aaab0de1a5a92c01e16ed731,PodSandboxId:2a35c5864db38de4db2df9661fc907cd58533506ed2900ff55721ee9ef7e8073,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733431049357327660,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qjqvr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30f51118-fa9b-418f-a3a5-02a74107c7de,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05a6cfcd7e9ee9f891eb88ed5524553b60ff7eea407bf604b4bb647571d2d6bc,PodSandboxId:984c3b3f8fe032def0136810febfe8341f9285ab30c3ce2d6df35ec561964918,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910896086688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4ln9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f86a233b-c3f8-416b-ac76-f18dac2a1a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e8c78df0a6db76a9459104d39c6ef10926413179a5c4d311a9731258ab0f02,PodSandboxId:d7a154f9d8020a9378296ea0b16287d3fd54fb83d94bd93df469f8808d3670fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430910806734926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: e2a03e66-0718-48a3-9658-f70118ce6cae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6007ba446b77339e212a73381a8f38a179f608f925ff7a5ea1b5d82ae33932a,PodSandboxId:a344cd0e9a251c2b865c2838b5e161875e6d61340c124e5e6ddd88fdb8512dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910843663896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qhhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffff988-65
eb-4585-8ce4-de4df28c6b82,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0809642e9449ba90a816abcee2bd42a09b6b67ed76a448e2d048ccaf59218f61,PodSandboxId:faeac762b16891707c284f00eddfc16a831b7524637e5dbbc933c30cd8b2fe8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733430899010755558,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-62qw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0039aa-d5e2-49b9-adb4-ad93c96d22f0,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a16a5003f86348e862f9550da656af6282a73ec42e356d33277fd89aaf930df,PodSandboxId:6bc6d79587a62ca21788fe4de52bc6e9a4f3255de91b1f48365e7bc08408cac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430894
348055011,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tslx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d107dc4-2d8c-4e0d-aafc-5229161537df,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4431afbd69d99866cc27a419bb59862ef9f5cfdbb9bd9cd5bc4fc820eba9a01b,PodSandboxId:ae658c6069b4418ff55871310f01c6a0b5b0fe6e016403e3ff64bb02e0ac6a27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173343088582
7328958,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d8d33a00a36d98ae4f02477c2f0ef8f,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9238618cdfeaa8f5cda3dadc37d1e8157c6e781e2f421e149ea764a7138e42,PodSandboxId:110f95e5235dfc7dbce02b5aa1a8191d469ee5d3abffc5bfebf7a11f52ae34be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430883266472620,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3b0ba2fc46021faad87f06edada7a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2033f56968a9f0c2e15d272c7bf15a278a24d2f148a58cc7194c50096b822668,PodSandboxId:a6058ddd3ee58967eb32bd94a306e465b678afcb374ea3f93649506453556476,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430883263419187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9de31551106f5b54c143b52a0ba8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2211f15ae3cd3d69be3df3528c3d849f2b50615ce67935cdb97567a8a5fe19,PodSandboxId:f650305b876ca41a574dc76685713fd76500b7b3c5f17dbc66cdcd85cde99e34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430883237990702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91307b238b7c07f706a4534ff984ab88,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a056592a0f933a18207a4ce68694f30ffbd9c7502e8e1a5717c67850246dfb2,PodSandboxId:6d5d1a132984432f53f03c63a07dbd8083fa259a41160af40e8f0202f47d21ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430883178338000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf9467cd4c8887ece77367c75de1e85,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e12d4ff1-7a50-4a5e-9321-e991997a8b6c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.411507889Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8b37aea4-288f-40cf-8f8b-8460166c916f name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.411586118Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8b37aea4-288f-40cf-8f8b-8460166c916f name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.412804993Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9243d699-2352-4086-8cec-1f3db79edeea name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.413326397Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431272413300085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9243d699-2352-4086-8cec-1f3db79edeea name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.413839919Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2ae42aa-c439-4c26-a21b-fc6746b269ce name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.413898979Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2ae42aa-c439-4c26-a21b-fc6746b269ce name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.414139416Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77e0f8ba49070d29bec8e5d622dd7ab13e23f105aaab0de1a5a92c01e16ed731,PodSandboxId:2a35c5864db38de4db2df9661fc907cd58533506ed2900ff55721ee9ef7e8073,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733431049357327660,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qjqvr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30f51118-fa9b-418f-a3a5-02a74107c7de,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05a6cfcd7e9ee9f891eb88ed5524553b60ff7eea407bf604b4bb647571d2d6bc,PodSandboxId:984c3b3f8fe032def0136810febfe8341f9285ab30c3ce2d6df35ec561964918,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910896086688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4ln9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f86a233b-c3f8-416b-ac76-f18dac2a1a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e8c78df0a6db76a9459104d39c6ef10926413179a5c4d311a9731258ab0f02,PodSandboxId:d7a154f9d8020a9378296ea0b16287d3fd54fb83d94bd93df469f8808d3670fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430910806734926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: e2a03e66-0718-48a3-9658-f70118ce6cae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6007ba446b77339e212a73381a8f38a179f608f925ff7a5ea1b5d82ae33932a,PodSandboxId:a344cd0e9a251c2b865c2838b5e161875e6d61340c124e5e6ddd88fdb8512dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910843663896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qhhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffff988-65
eb-4585-8ce4-de4df28c6b82,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0809642e9449ba90a816abcee2bd42a09b6b67ed76a448e2d048ccaf59218f61,PodSandboxId:faeac762b16891707c284f00eddfc16a831b7524637e5dbbc933c30cd8b2fe8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733430899010755558,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-62qw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0039aa-d5e2-49b9-adb4-ad93c96d22f0,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a16a5003f86348e862f9550da656af6282a73ec42e356d33277fd89aaf930df,PodSandboxId:6bc6d79587a62ca21788fe4de52bc6e9a4f3255de91b1f48365e7bc08408cac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430894
348055011,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tslx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d107dc4-2d8c-4e0d-aafc-5229161537df,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4431afbd69d99866cc27a419bb59862ef9f5cfdbb9bd9cd5bc4fc820eba9a01b,PodSandboxId:ae658c6069b4418ff55871310f01c6a0b5b0fe6e016403e3ff64bb02e0ac6a27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173343088582
7328958,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d8d33a00a36d98ae4f02477c2f0ef8f,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9238618cdfeaa8f5cda3dadc37d1e8157c6e781e2f421e149ea764a7138e42,PodSandboxId:110f95e5235dfc7dbce02b5aa1a8191d469ee5d3abffc5bfebf7a11f52ae34be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430883266472620,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3b0ba2fc46021faad87f06edada7a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2033f56968a9f0c2e15d272c7bf15a278a24d2f148a58cc7194c50096b822668,PodSandboxId:a6058ddd3ee58967eb32bd94a306e465b678afcb374ea3f93649506453556476,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430883263419187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9de31551106f5b54c143b52a0ba8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2211f15ae3cd3d69be3df3528c3d849f2b50615ce67935cdb97567a8a5fe19,PodSandboxId:f650305b876ca41a574dc76685713fd76500b7b3c5f17dbc66cdcd85cde99e34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430883237990702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91307b238b7c07f706a4534ff984ab88,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a056592a0f933a18207a4ce68694f30ffbd9c7502e8e1a5717c67850246dfb2,PodSandboxId:6d5d1a132984432f53f03c63a07dbd8083fa259a41160af40e8f0202f47d21ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430883178338000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf9467cd4c8887ece77367c75de1e85,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2ae42aa-c439-4c26-a21b-fc6746b269ce name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.452963023Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4daf2f2e-1f3d-4425-b2ea-0c74ef48199b name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.453039752Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4daf2f2e-1f3d-4425-b2ea-0c74ef48199b name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.454037119Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cea7cf41-b23b-4186-91de-bfbff4a45595 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.454565968Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431272454537989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cea7cf41-b23b-4186-91de-bfbff4a45595 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.455119672Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cd67ccb6-464f-40ad-a2c5-a51b6c20cc63 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.455174572Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd67ccb6-464f-40ad-a2c5-a51b6c20cc63 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:12 ha-689539 crio[658]: time="2024-12-05 20:41:12.455444429Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77e0f8ba49070d29bec8e5d622dd7ab13e23f105aaab0de1a5a92c01e16ed731,PodSandboxId:2a35c5864db38de4db2df9661fc907cd58533506ed2900ff55721ee9ef7e8073,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733431049357327660,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qjqvr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30f51118-fa9b-418f-a3a5-02a74107c7de,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05a6cfcd7e9ee9f891eb88ed5524553b60ff7eea407bf604b4bb647571d2d6bc,PodSandboxId:984c3b3f8fe032def0136810febfe8341f9285ab30c3ce2d6df35ec561964918,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910896086688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4ln9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f86a233b-c3f8-416b-ac76-f18dac2a1a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e8c78df0a6db76a9459104d39c6ef10926413179a5c4d311a9731258ab0f02,PodSandboxId:d7a154f9d8020a9378296ea0b16287d3fd54fb83d94bd93df469f8808d3670fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430910806734926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: e2a03e66-0718-48a3-9658-f70118ce6cae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6007ba446b77339e212a73381a8f38a179f608f925ff7a5ea1b5d82ae33932a,PodSandboxId:a344cd0e9a251c2b865c2838b5e161875e6d61340c124e5e6ddd88fdb8512dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910843663896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qhhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffff988-65
eb-4585-8ce4-de4df28c6b82,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0809642e9449ba90a816abcee2bd42a09b6b67ed76a448e2d048ccaf59218f61,PodSandboxId:faeac762b16891707c284f00eddfc16a831b7524637e5dbbc933c30cd8b2fe8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733430899010755558,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-62qw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0039aa-d5e2-49b9-adb4-ad93c96d22f0,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a16a5003f86348e862f9550da656af6282a73ec42e356d33277fd89aaf930df,PodSandboxId:6bc6d79587a62ca21788fe4de52bc6e9a4f3255de91b1f48365e7bc08408cac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430894
348055011,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tslx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d107dc4-2d8c-4e0d-aafc-5229161537df,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4431afbd69d99866cc27a419bb59862ef9f5cfdbb9bd9cd5bc4fc820eba9a01b,PodSandboxId:ae658c6069b4418ff55871310f01c6a0b5b0fe6e016403e3ff64bb02e0ac6a27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173343088582
7328958,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d8d33a00a36d98ae4f02477c2f0ef8f,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9238618cdfeaa8f5cda3dadc37d1e8157c6e781e2f421e149ea764a7138e42,PodSandboxId:110f95e5235dfc7dbce02b5aa1a8191d469ee5d3abffc5bfebf7a11f52ae34be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430883266472620,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3b0ba2fc46021faad87f06edada7a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2033f56968a9f0c2e15d272c7bf15a278a24d2f148a58cc7194c50096b822668,PodSandboxId:a6058ddd3ee58967eb32bd94a306e465b678afcb374ea3f93649506453556476,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430883263419187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9de31551106f5b54c143b52a0ba8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2211f15ae3cd3d69be3df3528c3d849f2b50615ce67935cdb97567a8a5fe19,PodSandboxId:f650305b876ca41a574dc76685713fd76500b7b3c5f17dbc66cdcd85cde99e34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430883237990702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91307b238b7c07f706a4534ff984ab88,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a056592a0f933a18207a4ce68694f30ffbd9c7502e8e1a5717c67850246dfb2,PodSandboxId:6d5d1a132984432f53f03c63a07dbd8083fa259a41160af40e8f0202f47d21ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430883178338000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf9467cd4c8887ece77367c75de1e85,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cd67ccb6-464f-40ad-a2c5-a51b6c20cc63 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	77e0f8ba49070       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   2a35c5864db38       busybox-7dff88458-qjqvr
	05a6cfcd7e9ee       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   984c3b3f8fe03       coredns-7c65d6cfc9-4ln9l
	c6007ba446b77       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   a344cd0e9a251       coredns-7c65d6cfc9-6qhhf
	74e8c78df0a6d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   d7a154f9d8020       storage-provisioner
	0809642e9449b       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   faeac762b1689       kindnet-62qw6
	0a16a5003f863       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   6bc6d79587a62       kube-proxy-9tslx
	4431afbd69d99       ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517     6 minutes ago       Running             kube-vip                  0                   ae658c6069b44       kube-vip-ha-689539
	1e9238618cdfe       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   110f95e5235df       etcd-ha-689539
	2033f56968a9f       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   a6058ddd3ee58       kube-scheduler-ha-689539
	cd2211f15ae3c       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   f650305b876ca       kube-apiserver-ha-689539
	4a056592a0f93       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   6d5d1a1329844       kube-controller-manager-ha-689539
	
	
	==> coredns [05a6cfcd7e9ee9f891eb88ed5524553b60ff7eea407bf604b4bb647571d2d6bc] <==
	[INFO] 10.244.0.4:44188 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002182194s
	[INFO] 10.244.1.2:41292 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000169551s
	[INFO] 10.244.1.2:38453 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003584311s
	[INFO] 10.244.1.2:36084 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000201777s
	[INFO] 10.244.1.2:49408 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133503s
	[INFO] 10.244.2.2:51533 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117849s
	[INFO] 10.244.2.2:34176 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00018539s
	[INFO] 10.244.2.2:43670 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000178861s
	[INFO] 10.244.2.2:56974 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148401s
	[INFO] 10.244.0.4:48841 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170335s
	[INFO] 10.244.0.4:43111 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001409238s
	[INFO] 10.244.0.4:36893 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093314s
	[INFO] 10.244.0.4:50555 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104324s
	[INFO] 10.244.1.2:43568 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116735s
	[INFO] 10.244.1.2:44480 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066571s
	[INFO] 10.244.1.2:60247 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058674s
	[INFO] 10.244.2.2:49472 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121084s
	[INFO] 10.244.0.4:57046 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160079s
	[INFO] 10.244.0.4:44460 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119738s
	[INFO] 10.244.1.2:37203 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178276s
	[INFO] 10.244.1.2:59196 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000213381s
	[INFO] 10.244.1.2:41969 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000159543s
	[INFO] 10.244.1.2:60294 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120046s
	[INFO] 10.244.2.2:42519 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177647s
	[INFO] 10.244.0.4:60229 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000056377s
	
	
	==> coredns [c6007ba446b77339e212a73381a8f38a179f608f925ff7a5ea1b5d82ae33932a] <==
	[INFO] 10.244.0.4:55355 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000054352s
	[INFO] 10.244.1.2:33933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161165s
	[INFO] 10.244.1.2:37174 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003884442s
	[INFO] 10.244.1.2:41634 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152882s
	[INFO] 10.244.1.2:60548 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176047s
	[INFO] 10.244.2.2:32947 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146675s
	[INFO] 10.244.2.2:60319 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001949836s
	[INFO] 10.244.2.2:48727 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001337037s
	[INFO] 10.244.2.2:56733 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149582s
	[INFO] 10.244.0.4:58646 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001891441s
	[INFO] 10.244.0.4:55352 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164932s
	[INFO] 10.244.0.4:54745 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100872s
	[INFO] 10.244.0.4:51217 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122097s
	[INFO] 10.244.1.2:52959 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137256s
	[INFO] 10.244.2.2:52934 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147111s
	[INFO] 10.244.2.2:34173 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119001s
	[INFO] 10.244.2.2:41909 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126707s
	[INFO] 10.244.0.4:46512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120087s
	[INFO] 10.244.0.4:35647 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000218624s
	[INFO] 10.244.2.2:51797 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000211308s
	[INFO] 10.244.2.2:38193 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000207361s
	[INFO] 10.244.2.2:55117 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135379s
	[INFO] 10.244.0.4:46265 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114618s
	[INFO] 10.244.0.4:43082 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000145713s
	[INFO] 10.244.0.4:59763 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071668s
	
	
	==> describe nodes <==
	Name:               ha-689539
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-689539
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=ha-689539
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T20_34_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:34:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-689539
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:41:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:37:53 +0000   Thu, 05 Dec 2024 20:34:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:37:53 +0000   Thu, 05 Dec 2024 20:34:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:37:53 +0000   Thu, 05 Dec 2024 20:34:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:37:53 +0000   Thu, 05 Dec 2024 20:35:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-689539
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3fcfe17cf29247c89ef6261408cdec57
	  System UUID:                3fcfe17c-f292-47c8-9ef6-261408cdec57
	  Boot ID:                    0967c504-1cf1-4d64-84b3-abc762e82552
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qjqvr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 coredns-7c65d6cfc9-4ln9l             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m19s
	  kube-system                 coredns-7c65d6cfc9-6qhhf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m19s
	  kube-system                 etcd-ha-689539                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m23s
	  kube-system                 kindnet-62qw6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m19s
	  kube-system                 kube-apiserver-ha-689539             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-controller-manager-ha-689539    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-proxy-9tslx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-scheduler-ha-689539             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 kube-vip-ha-689539                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m23s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m18s  kube-proxy       
	  Normal  Starting                 6m23s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m23s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m23s  kubelet          Node ha-689539 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m23s  kubelet          Node ha-689539 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m23s  kubelet          Node ha-689539 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m19s  node-controller  Node ha-689539 event: Registered Node ha-689539 in Controller
	  Normal  NodeReady                6m2s   kubelet          Node ha-689539 status is now: NodeReady
	  Normal  RegisteredNode           5m19s  node-controller  Node ha-689539 event: Registered Node ha-689539 in Controller
	  Normal  RegisteredNode           4m6s   node-controller  Node ha-689539 event: Registered Node ha-689539 in Controller
	
	
	Name:               ha-689539-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-689539-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=ha-689539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T20_35_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:35:45 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-689539-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:38:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 05 Dec 2024 20:37:46 +0000   Thu, 05 Dec 2024 20:39:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 05 Dec 2024 20:37:46 +0000   Thu, 05 Dec 2024 20:39:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 05 Dec 2024 20:37:46 +0000   Thu, 05 Dec 2024 20:39:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 05 Dec 2024 20:37:46 +0000   Thu, 05 Dec 2024 20:39:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    ha-689539-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2527423e09b7455fb49f08b5007d8aaf
	  System UUID:                2527423e-09b7-455f-b49f-08b5007d8aaf
	  Boot ID:                    693fb661-afc0-4a4b-8d66-7434b8ba3be0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7ss94                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 etcd-ha-689539-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m26s
	  kube-system                 kindnet-b7bf2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m27s
	  kube-system                 kube-apiserver-ha-689539-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-controller-manager-ha-689539-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m18s
	  kube-system                 kube-proxy-x2grl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m27s
	  kube-system                 kube-scheduler-ha-689539-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m21s
	  kube-system                 kube-vip-ha-689539-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m23s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m28s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m27s (x8 over 5m28s)  kubelet          Node ha-689539-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m27s (x8 over 5m28s)  kubelet          Node ha-689539-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m27s (x7 over 5m28s)  kubelet          Node ha-689539-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m24s                  node-controller  Node ha-689539-m02 event: Registered Node ha-689539-m02 in Controller
	  Normal  RegisteredNode           5m19s                  node-controller  Node ha-689539-m02 event: Registered Node ha-689539-m02 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-689539-m02 event: Registered Node ha-689539-m02 in Controller
	  Normal  NodeNotReady             111s                   node-controller  Node ha-689539-m02 status is now: NodeNotReady
	
	
	Name:               ha-689539-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-689539-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=ha-689539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T20_37_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:36:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-689539-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:41:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:37:59 +0000   Thu, 05 Dec 2024 20:36:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:37:59 +0000   Thu, 05 Dec 2024 20:36:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:37:59 +0000   Thu, 05 Dec 2024 20:36:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:37:59 +0000   Thu, 05 Dec 2024 20:37:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.133
	  Hostname:    ha-689539-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 23c133dbe3f244679269ca86c6b2111d
	  System UUID:                23c133db-e3f2-4467-9269-ca86c6b2111d
	  Boot ID:                    72ade07d-4013-4096-9862-81be930c4b6f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ns455                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  kube-system                 etcd-ha-689539-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m12s
	  kube-system                 kindnet-8kgs2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m14s
	  kube-system                 kube-apiserver-ha-689539-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-controller-manager-ha-689539-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-proxy-dktwc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-scheduler-ha-689539-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-vip-ha-689539-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m14s (x8 over 4m14s)  kubelet          Node ha-689539-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s (x8 over 4m14s)  kubelet          Node ha-689539-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s (x7 over 4m14s)  kubelet          Node ha-689539-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m14s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-689539-m03 event: Registered Node ha-689539-m03 in Controller
	  Normal  RegisteredNode           4m9s                   node-controller  Node ha-689539-m03 event: Registered Node ha-689539-m03 in Controller
	  Normal  RegisteredNode           4m6s                   node-controller  Node ha-689539-m03 event: Registered Node ha-689539-m03 in Controller
	
	
	Name:               ha-689539-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-689539-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=ha-689539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T20_38_06_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:38:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-689539-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:41:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:38:36 +0000   Thu, 05 Dec 2024 20:38:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:38:36 +0000   Thu, 05 Dec 2024 20:38:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:38:36 +0000   Thu, 05 Dec 2024 20:38:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:38:36 +0000   Thu, 05 Dec 2024 20:38:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.199
	  Hostname:    ha-689539-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d82a84b2609b470c8ddc16781015ee6d
	  System UUID:                d82a84b2-609b-470c-8ddc-16781015ee6d
	  Boot ID:                    c6aff0b9-eb25-4035-add5-dcc47c5c8348
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9xbpp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m7s
	  kube-system                 kube-proxy-kpbrd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m2s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  3m7s (x2 over 3m8s)  kubelet          Node ha-689539-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m7s (x2 over 3m8s)  kubelet          Node ha-689539-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m7s (x2 over 3m8s)  kubelet          Node ha-689539-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m6s                 node-controller  Node ha-689539-m04 event: Registered Node ha-689539-m04 in Controller
	  Normal  RegisteredNode           3m4s                 node-controller  Node ha-689539-m04 event: Registered Node ha-689539-m04 in Controller
	  Normal  RegisteredNode           3m4s                 node-controller  Node ha-689539-m04 event: Registered Node ha-689539-m04 in Controller
	  Normal  NodeReady                2m47s                kubelet          Node ha-689539-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 5 20:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049641] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039465] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.885977] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.016771] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.614002] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.712547] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.063478] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058841] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.182620] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.134116] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.286058] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +3.983127] systemd-fstab-generator[741]: Ignoring "noauto" option for root device
	[  +4.083666] systemd-fstab-generator[871]: Ignoring "noauto" option for root device
	[  +0.057216] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.189676] systemd-fstab-generator[1290]: Ignoring "noauto" option for root device
	[  +0.088639] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.119203] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.279281] kauditd_printk_skb: 19 callbacks suppressed
	[Dec 5 20:35] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [1e9238618cdfeaa8f5cda3dadc37d1e8157c6e781e2f421e149ea764a7138e42] <==
	{"level":"warn","ts":"2024-12-05T20:41:12.690884Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.709876Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.718148Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.722701Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.734782Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.741817Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.749495Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.753740Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.757066Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.765006Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.771122Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.777211Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.781119Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.785364Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.791201Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.792530Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.799851Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.807307Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.811693Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.815607Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.819848Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.828403Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.836441Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.891914Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:12.896012Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:41:12 up 7 min,  0 users,  load average: 0.24, 0.24, 0.11
	Linux ha-689539 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0809642e9449ba90a816abcee2bd42a09b6b67ed76a448e2d048ccaf59218f61] <==
	I1205 20:40:39.975671       1 main.go:324] Node ha-689539-m02 has CIDR [10.244.1.0/24] 
	I1205 20:40:49.971686       1 main.go:297] Handling node with IPs: map[192.168.39.133:{}]
	I1205 20:40:49.971811       1 main.go:324] Node ha-689539-m03 has CIDR [10.244.2.0/24] 
	I1205 20:40:49.972022       1 main.go:297] Handling node with IPs: map[192.168.39.199:{}]
	I1205 20:40:49.972032       1 main.go:324] Node ha-689539-m04 has CIDR [10.244.3.0/24] 
	I1205 20:40:49.972125       1 main.go:297] Handling node with IPs: map[192.168.39.220:{}]
	I1205 20:40:49.972132       1 main.go:301] handling current node
	I1205 20:40:49.972143       1 main.go:297] Handling node with IPs: map[192.168.39.224:{}]
	I1205 20:40:49.972147       1 main.go:324] Node ha-689539-m02 has CIDR [10.244.1.0/24] 
	I1205 20:40:59.972467       1 main.go:297] Handling node with IPs: map[192.168.39.220:{}]
	I1205 20:40:59.972574       1 main.go:301] handling current node
	I1205 20:40:59.972604       1 main.go:297] Handling node with IPs: map[192.168.39.224:{}]
	I1205 20:40:59.972621       1 main.go:324] Node ha-689539-m02 has CIDR [10.244.1.0/24] 
	I1205 20:40:59.972884       1 main.go:297] Handling node with IPs: map[192.168.39.133:{}]
	I1205 20:40:59.972920       1 main.go:324] Node ha-689539-m03 has CIDR [10.244.2.0/24] 
	I1205 20:40:59.973088       1 main.go:297] Handling node with IPs: map[192.168.39.199:{}]
	I1205 20:40:59.973124       1 main.go:324] Node ha-689539-m04 has CIDR [10.244.3.0/24] 
	I1205 20:41:09.973378       1 main.go:297] Handling node with IPs: map[192.168.39.220:{}]
	I1205 20:41:09.973428       1 main.go:301] handling current node
	I1205 20:41:09.973445       1 main.go:297] Handling node with IPs: map[192.168.39.224:{}]
	I1205 20:41:09.973450       1 main.go:324] Node ha-689539-m02 has CIDR [10.244.1.0/24] 
	I1205 20:41:09.973693       1 main.go:297] Handling node with IPs: map[192.168.39.133:{}]
	I1205 20:41:09.973706       1 main.go:324] Node ha-689539-m03 has CIDR [10.244.2.0/24] 
	I1205 20:41:09.973839       1 main.go:297] Handling node with IPs: map[192.168.39.199:{}]
	I1205 20:41:09.973846       1 main.go:324] Node ha-689539-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [cd2211f15ae3cd3d69be3df3528c3d849f2b50615ce67935cdb97567a8a5fe19] <==
	W1205 20:34:48.005731       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.220]
	I1205 20:34:48.006729       1 controller.go:615] quota admission added evaluator for: endpoints
	I1205 20:34:48.014987       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 20:34:48.223693       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1205 20:34:49.561495       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1205 20:34:49.580677       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1205 20:34:49.727059       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1205 20:34:53.679365       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1205 20:34:53.876376       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1205 20:37:30.985923       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44596: use of closed network connection
	E1205 20:37:31.179622       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44600: use of closed network connection
	E1205 20:37:31.382888       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44610: use of closed network connection
	E1205 20:37:31.582068       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44622: use of closed network connection
	E1205 20:37:31.774198       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44652: use of closed network connection
	E1205 20:37:31.958030       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44666: use of closed network connection
	E1205 20:37:32.140428       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44686: use of closed network connection
	E1205 20:37:32.322775       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44704: use of closed network connection
	E1205 20:37:32.515908       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44718: use of closed network connection
	E1205 20:37:32.837161       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44756: use of closed network connection
	E1205 20:37:33.022723       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44776: use of closed network connection
	E1205 20:37:33.209590       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44790: use of closed network connection
	E1205 20:37:33.392904       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44808: use of closed network connection
	E1205 20:37:33.581589       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44830: use of closed network connection
	E1205 20:37:33.765728       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44852: use of closed network connection
	W1205 20:38:58.016885       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.133 192.168.39.220]
	
	
	==> kube-controller-manager [4a056592a0f933a18207a4ce68694f30ffbd9c7502e8e1a5717c67850246dfb2] <==
	I1205 20:38:05.497632       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-689539-m04" podCIDRs=["10.244.3.0/24"]
	I1205 20:38:05.497693       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:05.497786       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:05.524265       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:06.322551       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:06.681995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:06.924972       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:08.069639       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:08.145190       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:08.229546       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:08.230026       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-689539-m04"
	I1205 20:38:08.272217       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:15.550194       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:25.133022       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:25.133713       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-689539-m04"
	I1205 20:38:25.164347       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:26.915918       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:36.091312       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:39:21.941441       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m02"
	I1205 20:39:21.941592       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-689539-m04"
	I1205 20:39:21.962901       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m02"
	I1205 20:39:21.988464       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.390336ms"
	I1205 20:39:21.988772       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="153.307µs"
	I1205 20:39:23.353917       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m02"
	I1205 20:39:27.137479       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m02"
	
	
	==> kube-proxy [0a16a5003f86348e862f9550da656af6282a73ec42e356d33277fd89aaf930df] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 20:34:54.543864       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 20:34:54.553756       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.220"]
	E1205 20:34:54.553891       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 20:34:54.586394       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 20:34:54.586517       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:34:54.586562       1 server_linux.go:169] "Using iptables Proxier"
	I1205 20:34:54.589547       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 20:34:54.589875       1 server.go:483] "Version info" version="v1.31.2"
	I1205 20:34:54.589968       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:34:54.592476       1 config.go:199] "Starting service config controller"
	I1205 20:34:54.594797       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 20:34:54.592516       1 config.go:105] "Starting endpoint slice config controller"
	I1205 20:34:54.594853       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 20:34:54.600348       1 config.go:328] "Starting node config controller"
	I1205 20:34:54.601332       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 20:34:54.695425       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 20:34:54.695636       1 shared_informer.go:320] Caches are synced for service config
	I1205 20:34:54.701955       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2033f56968a9f0c2e15d272c7bf15a278a24d2f148a58cc7194c50096b822668] <==
	E1205 20:34:47.293214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:34:47.324868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 20:34:47.324938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:34:47.340705       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 20:34:47.340848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 20:34:47.360711       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 20:34:47.360829       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1205 20:34:47.402644       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 20:34:47.402751       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:34:47.409130       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 20:34:47.409228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:34:47.580992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 20:34:47.581091       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1205 20:34:49.941328       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1205 20:37:26.487849       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ns455\": pod busybox-7dff88458-ns455 is already assigned to node \"ha-689539-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-ns455" node="ha-689539-m03"
	E1205 20:37:26.487974       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c47c5104-83dc-428d-8ded-5175eff6643c(default/busybox-7dff88458-ns455) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-ns455"
	E1205 20:37:26.488011       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ns455\": pod busybox-7dff88458-ns455 is already assigned to node \"ha-689539-m03\"" pod="default/busybox-7dff88458-ns455"
	I1205 20:37:26.488039       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-ns455" node="ha-689539-m03"
	E1205 20:37:26.529460       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-qjqvr\": pod busybox-7dff88458-qjqvr is already assigned to node \"ha-689539\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-qjqvr" node="ha-689539"
	E1205 20:37:26.531731       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-qjqvr\": pod busybox-7dff88458-qjqvr is already assigned to node \"ha-689539\"" pod="default/busybox-7dff88458-qjqvr"
	I1205 20:37:26.532951       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-qjqvr" node="ha-689539"
	E1205 20:38:05.558984       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mqzp5\": pod kindnet-mqzp5 is already assigned to node \"ha-689539-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-mqzp5" node="ha-689539-m04"
	E1205 20:38:05.565872       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 83d09bad-5a47-45ec-b467-0231a40ad9f0(kube-system/kindnet-mqzp5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-mqzp5"
	E1205 20:38:05.566103       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mqzp5\": pod kindnet-mqzp5 is already assigned to node \"ha-689539-m04\"" pod="kube-system/kindnet-mqzp5"
	I1205 20:38:05.566218       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mqzp5" node="ha-689539-m04"
	
	
	==> kubelet <==
	Dec 05 20:39:49 ha-689539 kubelet[1297]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 20:39:49 ha-689539 kubelet[1297]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 20:39:49 ha-689539 kubelet[1297]: E1205 20:39:49.801882    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431189801654914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:39:49 ha-689539 kubelet[1297]: E1205 20:39:49.801906    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431189801654914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:39:59 ha-689539 kubelet[1297]: E1205 20:39:59.803793    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431199803419655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:39:59 ha-689539 kubelet[1297]: E1205 20:39:59.804270    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431199803419655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:09 ha-689539 kubelet[1297]: E1205 20:40:09.807394    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431209806841990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:09 ha-689539 kubelet[1297]: E1205 20:40:09.807450    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431209806841990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:19 ha-689539 kubelet[1297]: E1205 20:40:19.811009    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431219810315680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:19 ha-689539 kubelet[1297]: E1205 20:40:19.811103    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431219810315680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:29 ha-689539 kubelet[1297]: E1205 20:40:29.812356    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431229811933429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:29 ha-689539 kubelet[1297]: E1205 20:40:29.812422    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431229811933429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:39 ha-689539 kubelet[1297]: E1205 20:40:39.814301    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431239813835089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:39 ha-689539 kubelet[1297]: E1205 20:40:39.814613    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431239813835089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:49 ha-689539 kubelet[1297]: E1205 20:40:49.759293    1297 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 20:40:49 ha-689539 kubelet[1297]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 20:40:49 ha-689539 kubelet[1297]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 20:40:49 ha-689539 kubelet[1297]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 20:40:49 ha-689539 kubelet[1297]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 20:40:49 ha-689539 kubelet[1297]: E1205 20:40:49.816382    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431249816019108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:49 ha-689539 kubelet[1297]: E1205 20:40:49.816591    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431249816019108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:59 ha-689539 kubelet[1297]: E1205 20:40:59.821073    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431259819028062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:59 ha-689539 kubelet[1297]: E1205 20:40:59.821410    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431259819028062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:41:09 ha-689539 kubelet[1297]: E1205 20:41:09.823458    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431269823063482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:41:09 ha-689539 kubelet[1297]: E1205 20:41:09.823549    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431269823063482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-689539 -n ha-689539
helpers_test.go:261: (dbg) Run:  kubectl --context ha-689539 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (5.74s)

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartSecondaryNode (6.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 node start m02 -v=7 --alsologtostderr
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr: (3.920673385s)
ha_test.go:437: status says not all three control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr": 
ha_test.go:440: status says not all four hosts are running: args "out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr": 
ha_test.go:443: status says not all four kubelets are running: args "out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr": 
ha_test.go:446: status says not all three apiservers are running: args "out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr": 
ha_test.go:450: (dbg) Run:  kubectl get nodes
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-689539 -n ha-689539
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-689539 logs -n 25: (1.377558072s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m03:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539:/home/docker/cp-test_ha-689539-m03_ha-689539.txt                       |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539 sudo cat                                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m03_ha-689539.txt                                 |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m03:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m02:/home/docker/cp-test_ha-689539-m03_ha-689539-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m02 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m03_ha-689539-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m03:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04:/home/docker/cp-test_ha-689539-m03_ha-689539-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m04 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m03_ha-689539-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-689539 cp testdata/cp-test.txt                                                | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1989065978/001/cp-test_ha-689539-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539:/home/docker/cp-test_ha-689539-m04_ha-689539.txt                       |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539 sudo cat                                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m04_ha-689539.txt                                 |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m02:/home/docker/cp-test_ha-689539-m04_ha-689539-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m02 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m04_ha-689539-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03:/home/docker/cp-test_ha-689539-m04_ha-689539-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m03 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m04_ha-689539-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-689539 node stop m02 -v=7                                                     | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-689539 node start m02 -v=7                                                    | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:34:08
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:34:08.074114  310801 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:34:08.074261  310801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:34:08.074272  310801 out.go:358] Setting ErrFile to fd 2...
	I1205 20:34:08.074277  310801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:34:08.074494  310801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 20:34:08.075118  310801 out.go:352] Setting JSON to false
	I1205 20:34:08.076226  310801 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":11796,"bootTime":1733419052,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:34:08.076305  310801 start.go:139] virtualization: kvm guest
	I1205 20:34:08.078657  310801 out.go:177] * [ha-689539] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:34:08.080623  310801 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 20:34:08.080628  310801 notify.go:220] Checking for updates...
	I1205 20:34:08.083473  310801 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:34:08.084883  310801 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:34:08.086219  310801 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:34:08.087594  310801 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:34:08.088859  310801 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:34:08.090289  310801 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:34:08.128174  310801 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 20:34:08.129457  310801 start.go:297] selected driver: kvm2
	I1205 20:34:08.129474  310801 start.go:901] validating driver "kvm2" against <nil>
	I1205 20:34:08.129492  310801 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:34:08.130313  310801 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:34:08.130391  310801 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:34:08.148061  310801 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:34:08.148119  310801 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 20:34:08.148394  310801 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:34:08.148426  310801 cni.go:84] Creating CNI manager for ""
	I1205 20:34:08.148467  310801 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1205 20:34:08.148479  310801 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 20:34:08.148546  310801 start.go:340] cluster config:
	{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:34:08.148670  310801 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:34:08.150579  310801 out.go:177] * Starting "ha-689539" primary control-plane node in "ha-689539" cluster
	I1205 20:34:08.152101  310801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:34:08.152144  310801 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 20:34:08.152158  310801 cache.go:56] Caching tarball of preloaded images
	I1205 20:34:08.152281  310801 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:34:08.152296  310801 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:34:08.152605  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:34:08.152651  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json: {Name:mk27baab499187c123d1f411d3400f014a73dd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:08.152842  310801 start.go:360] acquireMachinesLock for ha-689539: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:34:08.152881  310801 start.go:364] duration metric: took 21.06µs to acquireMachinesLock for "ha-689539"
	I1205 20:34:08.152908  310801 start.go:93] Provisioning new machine with config: &{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:34:08.152972  310801 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 20:34:08.154751  310801 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 20:34:08.154908  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:08.154972  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:08.170934  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46169
	I1205 20:34:08.171495  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:08.172063  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:08.172087  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:08.172451  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:08.172674  310801 main.go:141] libmachine: (ha-689539) Calling .GetMachineName
	I1205 20:34:08.172837  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:08.172996  310801 start.go:159] libmachine.API.Create for "ha-689539" (driver="kvm2")
	I1205 20:34:08.173045  310801 client.go:168] LocalClient.Create starting
	I1205 20:34:08.173086  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem
	I1205 20:34:08.173121  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:34:08.173139  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:34:08.173198  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem
	I1205 20:34:08.173225  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:34:08.173243  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:34:08.173268  310801 main.go:141] libmachine: Running pre-create checks...
	I1205 20:34:08.173282  310801 main.go:141] libmachine: (ha-689539) Calling .PreCreateCheck
	I1205 20:34:08.173629  310801 main.go:141] libmachine: (ha-689539) Calling .GetConfigRaw
	I1205 20:34:08.174111  310801 main.go:141] libmachine: Creating machine...
	I1205 20:34:08.174129  310801 main.go:141] libmachine: (ha-689539) Calling .Create
	I1205 20:34:08.174265  310801 main.go:141] libmachine: (ha-689539) Creating KVM machine...
	I1205 20:34:08.175744  310801 main.go:141] libmachine: (ha-689539) DBG | found existing default KVM network
	I1205 20:34:08.176445  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:08.176315  310824 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000221330}
	I1205 20:34:08.176491  310801 main.go:141] libmachine: (ha-689539) DBG | created network xml: 
	I1205 20:34:08.176507  310801 main.go:141] libmachine: (ha-689539) DBG | <network>
	I1205 20:34:08.176530  310801 main.go:141] libmachine: (ha-689539) DBG |   <name>mk-ha-689539</name>
	I1205 20:34:08.176545  310801 main.go:141] libmachine: (ha-689539) DBG |   <dns enable='no'/>
	I1205 20:34:08.176564  310801 main.go:141] libmachine: (ha-689539) DBG |   
	I1205 20:34:08.176591  310801 main.go:141] libmachine: (ha-689539) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1205 20:34:08.176606  310801 main.go:141] libmachine: (ha-689539) DBG |     <dhcp>
	I1205 20:34:08.176611  310801 main.go:141] libmachine: (ha-689539) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1205 20:34:08.176616  310801 main.go:141] libmachine: (ha-689539) DBG |     </dhcp>
	I1205 20:34:08.176621  310801 main.go:141] libmachine: (ha-689539) DBG |   </ip>
	I1205 20:34:08.176666  310801 main.go:141] libmachine: (ha-689539) DBG |   
	I1205 20:34:08.176693  310801 main.go:141] libmachine: (ha-689539) DBG | </network>
	I1205 20:34:08.176707  310801 main.go:141] libmachine: (ha-689539) DBG | 
	I1205 20:34:08.181749  310801 main.go:141] libmachine: (ha-689539) DBG | trying to create private KVM network mk-ha-689539 192.168.39.0/24...
	I1205 20:34:08.259729  310801 main.go:141] libmachine: (ha-689539) Setting up store path in /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539 ...
	I1205 20:34:08.259779  310801 main.go:141] libmachine: (ha-689539) DBG | private KVM network mk-ha-689539 192.168.39.0/24 created
	I1205 20:34:08.259792  310801 main.go:141] libmachine: (ha-689539) Building disk image from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 20:34:08.259831  310801 main.go:141] libmachine: (ha-689539) Downloading /home/jenkins/minikube-integration/20053-293485/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 20:34:08.259902  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:08.259565  310824 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:34:08.570701  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:08.570509  310824 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa...
	I1205 20:34:08.656946  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:08.656740  310824 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/ha-689539.rawdisk...
	I1205 20:34:08.656979  310801 main.go:141] libmachine: (ha-689539) DBG | Writing magic tar header
	I1205 20:34:08.656999  310801 main.go:141] libmachine: (ha-689539) DBG | Writing SSH key tar header
	I1205 20:34:08.657012  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:08.656919  310824 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539 ...
	I1205 20:34:08.657032  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539
	I1205 20:34:08.657155  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539 (perms=drwx------)
	I1205 20:34:08.657196  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines
	I1205 20:34:08.657214  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines (perms=drwxr-xr-x)
	I1205 20:34:08.657237  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube (perms=drwxr-xr-x)
	I1205 20:34:08.657251  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485 (perms=drwxrwxr-x)
	I1205 20:34:08.657266  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:34:08.657283  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485
	I1205 20:34:08.657297  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 20:34:08.657313  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins
	I1205 20:34:08.657327  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home
	I1205 20:34:08.657340  310801 main.go:141] libmachine: (ha-689539) DBG | Skipping /home - not owner
	I1205 20:34:08.657354  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 20:34:08.657370  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 20:34:08.657383  310801 main.go:141] libmachine: (ha-689539) Creating domain...
	I1205 20:34:08.658677  310801 main.go:141] libmachine: (ha-689539) define libvirt domain using xml: 
	I1205 20:34:08.658706  310801 main.go:141] libmachine: (ha-689539) <domain type='kvm'>
	I1205 20:34:08.658718  310801 main.go:141] libmachine: (ha-689539)   <name>ha-689539</name>
	I1205 20:34:08.658725  310801 main.go:141] libmachine: (ha-689539)   <memory unit='MiB'>2200</memory>
	I1205 20:34:08.658735  310801 main.go:141] libmachine: (ha-689539)   <vcpu>2</vcpu>
	I1205 20:34:08.658745  310801 main.go:141] libmachine: (ha-689539)   <features>
	I1205 20:34:08.658752  310801 main.go:141] libmachine: (ha-689539)     <acpi/>
	I1205 20:34:08.658759  310801 main.go:141] libmachine: (ha-689539)     <apic/>
	I1205 20:34:08.658767  310801 main.go:141] libmachine: (ha-689539)     <pae/>
	I1205 20:34:08.658787  310801 main.go:141] libmachine: (ha-689539)     
	I1205 20:34:08.658823  310801 main.go:141] libmachine: (ha-689539)   </features>
	I1205 20:34:08.658849  310801 main.go:141] libmachine: (ha-689539)   <cpu mode='host-passthrough'>
	I1205 20:34:08.658858  310801 main.go:141] libmachine: (ha-689539)   
	I1205 20:34:08.658863  310801 main.go:141] libmachine: (ha-689539)   </cpu>
	I1205 20:34:08.658869  310801 main.go:141] libmachine: (ha-689539)   <os>
	I1205 20:34:08.658874  310801 main.go:141] libmachine: (ha-689539)     <type>hvm</type>
	I1205 20:34:08.658880  310801 main.go:141] libmachine: (ha-689539)     <boot dev='cdrom'/>
	I1205 20:34:08.658885  310801 main.go:141] libmachine: (ha-689539)     <boot dev='hd'/>
	I1205 20:34:08.658892  310801 main.go:141] libmachine: (ha-689539)     <bootmenu enable='no'/>
	I1205 20:34:08.658896  310801 main.go:141] libmachine: (ha-689539)   </os>
	I1205 20:34:08.658902  310801 main.go:141] libmachine: (ha-689539)   <devices>
	I1205 20:34:08.658909  310801 main.go:141] libmachine: (ha-689539)     <disk type='file' device='cdrom'>
	I1205 20:34:08.658920  310801 main.go:141] libmachine: (ha-689539)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/boot2docker.iso'/>
	I1205 20:34:08.658932  310801 main.go:141] libmachine: (ha-689539)       <target dev='hdc' bus='scsi'/>
	I1205 20:34:08.658940  310801 main.go:141] libmachine: (ha-689539)       <readonly/>
	I1205 20:34:08.658954  310801 main.go:141] libmachine: (ha-689539)     </disk>
	I1205 20:34:08.658974  310801 main.go:141] libmachine: (ha-689539)     <disk type='file' device='disk'>
	I1205 20:34:08.658987  310801 main.go:141] libmachine: (ha-689539)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 20:34:08.659004  310801 main.go:141] libmachine: (ha-689539)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/ha-689539.rawdisk'/>
	I1205 20:34:08.659016  310801 main.go:141] libmachine: (ha-689539)       <target dev='hda' bus='virtio'/>
	I1205 20:34:08.659054  310801 main.go:141] libmachine: (ha-689539)     </disk>
	I1205 20:34:08.659076  310801 main.go:141] libmachine: (ha-689539)     <interface type='network'>
	I1205 20:34:08.659087  310801 main.go:141] libmachine: (ha-689539)       <source network='mk-ha-689539'/>
	I1205 20:34:08.659094  310801 main.go:141] libmachine: (ha-689539)       <model type='virtio'/>
	I1205 20:34:08.659106  310801 main.go:141] libmachine: (ha-689539)     </interface>
	I1205 20:34:08.659117  310801 main.go:141] libmachine: (ha-689539)     <interface type='network'>
	I1205 20:34:08.659126  310801 main.go:141] libmachine: (ha-689539)       <source network='default'/>
	I1205 20:34:08.659140  310801 main.go:141] libmachine: (ha-689539)       <model type='virtio'/>
	I1205 20:34:08.659151  310801 main.go:141] libmachine: (ha-689539)     </interface>
	I1205 20:34:08.659160  310801 main.go:141] libmachine: (ha-689539)     <serial type='pty'>
	I1205 20:34:08.659167  310801 main.go:141] libmachine: (ha-689539)       <target port='0'/>
	I1205 20:34:08.659176  310801 main.go:141] libmachine: (ha-689539)     </serial>
	I1205 20:34:08.659185  310801 main.go:141] libmachine: (ha-689539)     <console type='pty'>
	I1205 20:34:08.659196  310801 main.go:141] libmachine: (ha-689539)       <target type='serial' port='0'/>
	I1205 20:34:08.659214  310801 main.go:141] libmachine: (ha-689539)     </console>
	I1205 20:34:08.659224  310801 main.go:141] libmachine: (ha-689539)     <rng model='virtio'>
	I1205 20:34:08.659233  310801 main.go:141] libmachine: (ha-689539)       <backend model='random'>/dev/random</backend>
	I1205 20:34:08.659242  310801 main.go:141] libmachine: (ha-689539)     </rng>
	I1205 20:34:08.659248  310801 main.go:141] libmachine: (ha-689539)     
	I1205 20:34:08.659252  310801 main.go:141] libmachine: (ha-689539)     
	I1205 20:34:08.659260  310801 main.go:141] libmachine: (ha-689539)   </devices>
	I1205 20:34:08.659270  310801 main.go:141] libmachine: (ha-689539) </domain>
	I1205 20:34:08.659282  310801 main.go:141] libmachine: (ha-689539) 
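
The lines above are the full libvirt domain definition the kvm2 driver is about to create (2200 MiB of memory, 2 vCPUs, the raw disk, and two virtio network interfaces). As a rough illustration only, not the driver's actual code, a definition like this can be rendered from a small Go text/template and then handed to libvirt (for example via `virsh define`). The struct fields and the trimmed-down template below are assumptions for the sketch; the values are taken from the log.

package main

import (
	"fmt"
	"os"
	"text/template"
)

// Trimmed-down stand-in for the domain XML logged above; only a few of the
// elements are modelled. Values match the ha-689539 machine in the log.
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <devices>
    <disk type='file' device='disk'>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainConfig struct {
	Name      string
	MemoryMiB int
	CPUs      int
	DiskPath  string
	Network   string
}

func main() {
	cfg := domainConfig{
		Name:      "ha-689539",
		MemoryMiB: 2200,
		CPUs:      2,
		DiskPath:  "/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/ha-689539.rawdisk",
		Network:   "mk-ha-689539",
	}
	t := template.Must(template.New("domain").Parse(domainTmpl))
	// Print the XML; in practice it would be passed to libvirt (e.g. virsh define).
	if err := t.Execute(os.Stdout, cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
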
	I1205 20:34:08.664073  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:a3:09:de in network default
	I1205 20:34:08.664657  310801 main.go:141] libmachine: (ha-689539) Ensuring networks are active...
	I1205 20:34:08.664680  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:08.665393  310801 main.go:141] libmachine: (ha-689539) Ensuring network default is active
	I1205 20:34:08.665790  310801 main.go:141] libmachine: (ha-689539) Ensuring network mk-ha-689539 is active
	I1205 20:34:08.666343  310801 main.go:141] libmachine: (ha-689539) Getting domain xml...
	I1205 20:34:08.667190  310801 main.go:141] libmachine: (ha-689539) Creating domain...
	I1205 20:34:09.889755  310801 main.go:141] libmachine: (ha-689539) Waiting to get IP...
	I1205 20:34:09.890610  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:09.890981  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:09.891034  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:09.890969  310824 retry.go:31] will retry after 284.885869ms: waiting for machine to come up
	I1205 20:34:10.177621  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:10.178156  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:10.178184  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:10.178109  310824 retry.go:31] will retry after 378.211833ms: waiting for machine to come up
	I1205 20:34:10.557655  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:10.558178  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:10.558212  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:10.558123  310824 retry.go:31] will retry after 473.788163ms: waiting for machine to come up
	I1205 20:34:11.033830  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:11.034246  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:11.034277  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:11.034195  310824 retry.go:31] will retry after 418.138315ms: waiting for machine to come up
	I1205 20:34:11.453849  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:11.454287  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:11.454318  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:11.454229  310824 retry.go:31] will retry after 720.041954ms: waiting for machine to come up
	I1205 20:34:12.176162  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:12.176610  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:12.176635  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:12.176551  310824 retry.go:31] will retry after 769.230458ms: waiting for machine to come up
	I1205 20:34:12.947323  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:12.947645  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:12.947682  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:12.947615  310824 retry.go:31] will retry after 799.111179ms: waiting for machine to come up
	I1205 20:34:13.748171  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:13.748640  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:13.748669  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:13.748592  310824 retry.go:31] will retry after 1.052951937s: waiting for machine to come up
	I1205 20:34:14.802913  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:14.803309  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:14.803340  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:14.803262  310824 retry.go:31] will retry after 1.685899285s: waiting for machine to come up
	I1205 20:34:16.491286  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:16.491828  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:16.491858  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:16.491779  310824 retry.go:31] will retry after 1.722453601s: waiting for machine to come up
	I1205 20:34:18.215846  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:18.216281  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:18.216316  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:18.216229  310824 retry.go:31] will retry after 1.847118783s: waiting for machine to come up
	I1205 20:34:20.066408  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:20.066971  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:20.067002  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:20.066922  310824 retry.go:31] will retry after 2.216585531s: waiting for machine to come up
	I1205 20:34:22.284845  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:22.285380  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:22.285409  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:22.285296  310824 retry.go:31] will retry after 4.35742756s: waiting for machine to come up
	I1205 20:34:26.646498  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:26.646898  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:26.646925  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:26.646863  310824 retry.go:31] will retry after 4.830110521s: waiting for machine to come up
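
The repeated "will retry after ..." lines above are a poll loop with a growing delay while libvirt hands the new domain a DHCP lease. A minimal sketch of that pattern is below; the function shape, timeout, and backoff growth are assumptions for illustration, not minikube's retry.go implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookup until it returns an address or the timeout expires,
// sleeping a little longer between attempts, roughly like the intervals in the
// log above. lookup is a placeholder for "ask libvirt for the current lease".
func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(backoff)
		if backoff < 5*time.Second {
			backoff += backoff / 2 // grow the wait between attempts
		}
	}
	return "", errors.New("timed out waiting for the machine to get an IP")
}

func main() {
	// Hypothetical lookup that never succeeds, just to show the call shape.
	_, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 2*time.Second)
	fmt.Println(err)
}
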
	I1205 20:34:31.481950  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.482551  310801 main.go:141] libmachine: (ha-689539) Found IP for machine: 192.168.39.220
	I1205 20:34:31.482584  310801 main.go:141] libmachine: (ha-689539) Reserving static IP address...
	I1205 20:34:31.482599  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has current primary IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.483029  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find host DHCP lease matching {name: "ha-689539", mac: "52:54:00:92:19:fb", ip: "192.168.39.220"} in network mk-ha-689539
	I1205 20:34:31.565523  310801 main.go:141] libmachine: (ha-689539) Reserved static IP address: 192.168.39.220
	I1205 20:34:31.565552  310801 main.go:141] libmachine: (ha-689539) Waiting for SSH to be available...
	I1205 20:34:31.565561  310801 main.go:141] libmachine: (ha-689539) DBG | Getting to WaitForSSH function...
	I1205 20:34:31.568330  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.568827  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:31.568862  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.568958  310801 main.go:141] libmachine: (ha-689539) DBG | Using SSH client type: external
	I1205 20:34:31.568991  310801 main.go:141] libmachine: (ha-689539) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa (-rw-------)
	I1205 20:34:31.569027  310801 main.go:141] libmachine: (ha-689539) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:34:31.569037  310801 main.go:141] libmachine: (ha-689539) DBG | About to run SSH command:
	I1205 20:34:31.569050  310801 main.go:141] libmachine: (ha-689539) DBG | exit 0
	I1205 20:34:31.694133  310801 main.go:141] libmachine: (ha-689539) DBG | SSH cmd err, output: <nil>: 
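
The "external" SSH client above shells out to the system ssh binary with the options listed in the log and runs `exit 0` on the guest as a liveness probe; success means the machine is reachable. A hedged sketch of issuing that probe with os/exec follows (paths, user, and address copied from the log; this is not minikube's sshutil code).

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	key := "/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa"
	// Same option set as the logged command line; `exit 0` only checks reachability.
	cmd := exec.Command("ssh",
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"-p", "22",
		"docker@192.168.39.220",
		"exit 0")
	if err := cmd.Run(); err != nil {
		fmt.Println("guest not accepting SSH yet:", err)
		return
	}
	fmt.Println("SSH is available")
}
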
	I1205 20:34:31.694455  310801 main.go:141] libmachine: (ha-689539) KVM machine creation complete!
	I1205 20:34:31.694719  310801 main.go:141] libmachine: (ha-689539) Calling .GetConfigRaw
	I1205 20:34:31.695354  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:31.695562  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:31.695749  310801 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 20:34:31.695765  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:34:31.697139  310801 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 20:34:31.697166  310801 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 20:34:31.697171  310801 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 20:34:31.697176  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:31.699900  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.700272  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:31.700328  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.700454  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:31.700642  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.700807  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.700983  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:31.701155  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:31.701416  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:31.701430  310801 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 20:34:31.797327  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:34:31.797354  310801 main.go:141] libmachine: Detecting the provisioner...
	I1205 20:34:31.797363  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:31.800489  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.800822  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:31.800853  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.801025  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:31.801240  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.801464  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.801591  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:31.801777  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:31.801991  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:31.802002  310801 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 20:34:31.902674  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 20:34:31.902768  310801 main.go:141] libmachine: found compatible host: buildroot
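
The provisioner is chosen from the guest's /etc/os-release output shown above, matching against known distributions (here buildroot). A simplified sketch of that parse is below; keying only on the ID field is an assumption for illustration.

package main

import (
	"fmt"
	"strings"
)

// detectProvisioner pulls the ID= field out of /etc/os-release content.
// Real detection inspects more fields; this just mirrors the buildroot case above.
func detectProvisioner(osRelease string) string {
	for _, line := range strings.Split(osRelease, "\n") {
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return ""
}

func main() {
	sample := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	fmt.Println(detectProvisioner(sample)) // prints: buildroot
}
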
	I1205 20:34:31.902779  310801 main.go:141] libmachine: Provisioning with buildroot...
	I1205 20:34:31.902787  310801 main.go:141] libmachine: (ha-689539) Calling .GetMachineName
	I1205 20:34:31.903088  310801 buildroot.go:166] provisioning hostname "ha-689539"
	I1205 20:34:31.903116  310801 main.go:141] libmachine: (ha-689539) Calling .GetMachineName
	I1205 20:34:31.903428  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:31.906237  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.906571  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:31.906599  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.906752  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:31.906940  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.907099  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.907232  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:31.907446  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:31.907634  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:31.907655  310801 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-689539 && echo "ha-689539" | sudo tee /etc/hostname
	I1205 20:34:32.020236  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-689539
	
	I1205 20:34:32.020265  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.023604  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.023912  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.023942  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.024133  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.024345  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.024501  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.024686  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.024863  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:32.025085  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:32.025111  310801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-689539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-689539/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-689539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:34:32.131661  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:34:32.131696  310801 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 20:34:32.131742  310801 buildroot.go:174] setting up certificates
	I1205 20:34:32.131755  310801 provision.go:84] configureAuth start
	I1205 20:34:32.131768  310801 main.go:141] libmachine: (ha-689539) Calling .GetMachineName
	I1205 20:34:32.132088  310801 main.go:141] libmachine: (ha-689539) Calling .GetIP
	I1205 20:34:32.135389  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.135787  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.135825  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.136069  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.138588  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.138916  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.138949  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.139086  310801 provision.go:143] copyHostCerts
	I1205 20:34:32.139123  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:34:32.139178  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 20:34:32.139206  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:34:32.139295  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 20:34:32.139433  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:34:32.139460  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 20:34:32.139468  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:34:32.139515  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 20:34:32.139597  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:34:32.139626  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 20:34:32.139634  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:34:32.139671  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
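
Each copyHostCerts step above follows the same shape: if the target file already exists it is removed, then the source is copied into place. A small sketch of that idempotent copy (example paths from the log; not minikube's exec_runner):

package main

import (
	"fmt"
	"os"
)

// copyHostCert removes any existing destination and then writes a fresh copy,
// mirroring the found/rm/cp sequence in the log above.
func copyHostCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return err
		}
	}
	data, err := os.ReadFile(src)
	if err != nil {
		return err
	}
	return os.WriteFile(dst, data, 0o600)
}

func main() {
	err := copyHostCert(
		"/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem",
		"/home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem")
	fmt.Println(err)
}
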
	I1205 20:34:32.139758  310801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.ha-689539 san=[127.0.0.1 192.168.39.220 ha-689539 localhost minikube]
	I1205 20:34:32.367430  310801 provision.go:177] copyRemoteCerts
	I1205 20:34:32.367531  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:34:32.367565  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.370702  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.371025  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.371063  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.371206  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.371413  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.371586  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.371717  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:32.452327  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 20:34:32.452426  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 20:34:32.476869  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 20:34:32.476958  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1205 20:34:32.501389  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 20:34:32.501501  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:34:32.525226  310801 provision.go:87] duration metric: took 393.452946ms to configureAuth
	I1205 20:34:32.525267  310801 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:34:32.525488  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:34:32.525609  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.528470  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.528833  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.528864  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.529057  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.529285  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.529497  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.529678  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.529839  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:32.530046  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:32.530066  310801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:34:32.733723  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:34:32.733755  310801 main.go:141] libmachine: Checking connection to Docker...
	I1205 20:34:32.733816  310801 main.go:141] libmachine: (ha-689539) Calling .GetURL
	I1205 20:34:32.735231  310801 main.go:141] libmachine: (ha-689539) DBG | Using libvirt version 6000000
	I1205 20:34:32.737329  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.737769  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.737804  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.737993  310801 main.go:141] libmachine: Docker is up and running!
	I1205 20:34:32.738008  310801 main.go:141] libmachine: Reticulating splines...
	I1205 20:34:32.738015  310801 client.go:171] duration metric: took 24.564959064s to LocalClient.Create
	I1205 20:34:32.738046  310801 start.go:167] duration metric: took 24.565052554s to libmachine.API.Create "ha-689539"
	I1205 20:34:32.738061  310801 start.go:293] postStartSetup for "ha-689539" (driver="kvm2")
	I1205 20:34:32.738073  310801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:34:32.738096  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:32.738400  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:34:32.738433  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.740621  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.740891  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.740921  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.741034  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.741256  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.741431  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.741595  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:32.820810  310801 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:34:32.825193  310801 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:34:32.825227  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 20:34:32.825326  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 20:34:32.825428  310801 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 20:34:32.825442  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /etc/ssl/certs/3007652.pem
	I1205 20:34:32.825556  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:34:32.835549  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:34:32.859405  310801 start.go:296] duration metric: took 121.327589ms for postStartSetup
	I1205 20:34:32.859464  310801 main.go:141] libmachine: (ha-689539) Calling .GetConfigRaw
	I1205 20:34:32.860144  310801 main.go:141] libmachine: (ha-689539) Calling .GetIP
	I1205 20:34:32.862916  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.863271  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.863303  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.863582  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:34:32.863831  310801 start.go:128] duration metric: took 24.710845565s to createHost
	I1205 20:34:32.863871  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.866291  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.866627  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.866656  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.866902  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.867141  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.867419  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.867570  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.867744  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:32.867965  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:32.867993  310801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:34:32.966710  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430872.933221119
	
	I1205 20:34:32.966748  310801 fix.go:216] guest clock: 1733430872.933221119
	I1205 20:34:32.966760  310801 fix.go:229] Guest: 2024-12-05 20:34:32.933221119 +0000 UTC Remote: 2024-12-05 20:34:32.863851557 +0000 UTC m=+24.831728555 (delta=69.369562ms)
	I1205 20:34:32.966789  310801 fix.go:200] guest clock delta is within tolerance: 69.369562ms
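
The guest clock check parses the output of `date +%s.%N` on the guest and compares it with the host's clock; the run above shows a delta of about 69ms, which is accepted. A small sketch of that comparison follows; the 2s tolerance is an assumption for illustration, not minikube's constant.

package main

import (
	"fmt"
	"strconv"
	"time"
)

// parseGuestClock turns `date +%s.%N` output (seconds.nanoseconds) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	f, err := strconv.ParseFloat(out, 64)
	if err != nil {
		return time.Time{}, err
	}
	sec := int64(f)
	nsec := int64((f - float64(sec)) * 1e9)
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1733430872.933221119")
	if err != nil {
		panic(err)
	}
	host := guest.Add(-69 * time.Millisecond) // stand-in for the host clock
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed tolerance for the sketch
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta <= tolerance)
}
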
	I1205 20:34:32.966794  310801 start.go:83] releasing machines lock for "ha-689539", held for 24.813901478s
	I1205 20:34:32.966815  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:32.967103  310801 main.go:141] libmachine: (ha-689539) Calling .GetIP
	I1205 20:34:32.970285  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.970747  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.970797  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.970954  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:32.971526  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:32.971766  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:32.971872  310801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:34:32.971926  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.972023  310801 ssh_runner.go:195] Run: cat /version.json
	I1205 20:34:32.972052  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.975300  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.975606  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.975666  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.975696  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.975901  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.976142  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.976160  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.976211  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.976432  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.976440  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.976647  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:32.976668  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.976855  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.977003  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:33.059386  310801 ssh_runner.go:195] Run: systemctl --version
	I1205 20:34:33.082247  310801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:34:33.243513  310801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:34:33.249633  310801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:34:33.249718  310801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:34:33.266578  310801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:34:33.266607  310801 start.go:495] detecting cgroup driver to use...
	I1205 20:34:33.266691  310801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:34:33.282457  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:34:33.296831  310801 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:34:33.296976  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:34:33.310872  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:34:33.324245  310801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:34:33.436767  310801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:34:33.589248  310801 docker.go:233] disabling docker service ...
	I1205 20:34:33.589369  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:34:33.604397  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:34:33.617678  310801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:34:33.755936  310801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:34:33.876879  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:34:33.890218  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:34:33.907910  310801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:34:33.907992  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.918057  310801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:34:33.918138  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.928622  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.938873  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.949059  310801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:34:33.959639  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.970025  310801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.986937  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.997151  310801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:34:34.006323  310801 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:34:34.006391  310801 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:34:34.019434  310801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:34:34.029027  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:34:34.156535  310801 ssh_runner.go:195] Run: sudo systemctl restart crio
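
Note the fallback in the netfilter step above: the first `sysctl net.bridge.bridge-nf-call-iptables` exits with status 255 because br_netfilter is not loaded yet, so the module is loaded and IPv4 forwarding enabled before CRI-O is restarted. A self-contained sketch of that fallback over SSH is below; the host address, key path, and run helper are illustrative, not minikube's ssh_runner.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a shell command on the guest; address and key are placeholders.
func run(command string) error {
	return exec.Command("ssh", "-i", "id_rsa", "docker@192.168.39.220", command).Run()
}

// ensureBridgeNetfilter mirrors the fallback in the log: if the sysctl is not
// available yet, load br_netfilter, then make sure IPv4 forwarding is on.
func ensureBridgeNetfilter() error {
	if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
		if err := run("sudo modprobe br_netfilter"); err != nil {
			return fmt.Errorf("loading br_netfilter: %w", err)
		}
	}
	return run("sudo sh -c \"echo 1 > /proc/sys/net/ipv4/ip_forward\"")
}

func main() {
	if err := ensureBridgeNetfilter(); err != nil {
		fmt.Println(err)
	}
}
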
	I1205 20:34:34.246656  310801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:34:34.246735  310801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:34:34.251273  310801 start.go:563] Will wait 60s for crictl version
	I1205 20:34:34.251340  310801 ssh_runner.go:195] Run: which crictl
	I1205 20:34:34.254861  310801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:34:34.290093  310801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:34:34.290181  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:34:34.319140  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:34:34.349724  310801 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:34:34.351134  310801 main.go:141] libmachine: (ha-689539) Calling .GetIP
	I1205 20:34:34.354155  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:34.354477  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:34.354499  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:34.354753  310801 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:34:34.358724  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:34:34.371098  310801 kubeadm.go:883] updating cluster {Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:34:34.371240  310801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:34:34.371296  310801 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:34:34.405312  310801 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:34:34.405419  310801 ssh_runner.go:195] Run: which lz4
	I1205 20:34:34.409438  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1205 20:34:34.409558  310801 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:34:34.413636  310801 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:34:34.413680  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 20:34:35.688964  310801 crio.go:462] duration metric: took 1.279440398s to copy over tarball
	I1205 20:34:35.689045  310801 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:34:37.772729  310801 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.083628711s)
	I1205 20:34:37.772773  310801 crio.go:469] duration metric: took 2.083775707s to extract the tarball
	I1205 20:34:37.772784  310801 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:34:37.810322  310801 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:34:37.853195  310801 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:34:37.853229  310801 cache_images.go:84] Images are preloaded, skipping loading
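
The preload handling above boils down to: ask crictl whether the expected image is already present; if not, scp the lz4 preload tarball, unpack it under /var, remove the tarball, and re-check. A sketch of that flow is below; runSSH and scpFile are hypothetical helpers standing in for minikube's ssh_runner, while the shell commands match the log.

package main

import (
	"fmt"
	"strings"
)

// ensurePreload ships and extracts the preload tarball when the expected image
// is missing from the container runtime.
func ensurePreload(runSSH func(string) (string, error), scpFile func(src, dst string) error) error {
	out, err := runSSH("sudo crictl images --output json")
	if err == nil && strings.Contains(out, "registry.k8s.io/kube-apiserver:v1.31.2") {
		return nil // already preloaded, skip loading
	}
	if err := scpFile("preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4", "/preloaded.tar.lz4"); err != nil {
		return err
	}
	if _, err := runSSH("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
		return err
	}
	_, err = runSSH("rm -f /preloaded.tar.lz4")
	return err
}

func main() {
	// Stubs only, so the sketch compiles; real helpers would run over SSH/scp.
	runSSH := func(cmd string) (string, error) { return "", nil }
	scpFile := func(src, dst string) error { return nil }
	fmt.Println(ensurePreload(runSSH, scpFile))
}
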
	I1205 20:34:37.853239  310801 kubeadm.go:934] updating node { 192.168.39.220 8443 v1.31.2 crio true true} ...
	I1205 20:34:37.853389  310801 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-689539 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:34:37.853483  310801 ssh_runner.go:195] Run: crio config
	I1205 20:34:37.904941  310801 cni.go:84] Creating CNI manager for ""
	I1205 20:34:37.904967  310801 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 20:34:37.904981  310801 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:34:37.905015  310801 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-689539 NodeName:ha-689539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:34:37.905154  310801 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-689539"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.220"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
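Aside (not part of the captured log): the generated kubeadm config above is what gets copied to /var/tmp/minikube/kubeadm.yaml further down and fed to kubeadm init. When a run like this fails at the init step, a config of this shape can be sanity-checked offline; a minimal sketch, assuming a kubeadm v1.31.x binary on the PATH and the YAML saved locally as kubeadm.yaml (both are assumptions, not taken from this report):
    # Validate the config against the kubeadm v1beta4 schema without touching the host.
    kubeadm config validate --config kubeadm.yaml
    # Or walk the full init flow as a dry run.
    sudo kubeadm init --dry-run --config kubeadm.yaml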
	
	I1205 20:34:37.905183  310801 kube-vip.go:115] generating kube-vip config ...
	I1205 20:34:37.905229  310801 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 20:34:37.920877  310801 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 20:34:37.921012  310801 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
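Aside (not part of the captured log): the static Pod manifest above is how kube-vip v0.8.7 advertises the control-plane VIP 192.168.39.254 on eth0 and load-balances port 8443 across control-plane nodes. A hedged sketch of how one might confirm the VIP actually came up after a run like this; the profile name ha-689539 comes from this log, the rest assumes a working minikube CLI and a kubeconfig context of the same name:
    # The VIP should appear as a secondary address on eth0 inside the VM.
    minikube -p ha-689539 ssh -- ip -4 addr show eth0
    # The kube-vip static pod should be Running in kube-system.
    kubectl --context ha-689539 -n kube-system get pods | grep kube-vip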
	I1205 20:34:37.921087  310801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:34:37.930861  310801 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:34:37.930952  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1205 20:34:37.940283  310801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1205 20:34:37.956877  310801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:34:37.973504  310801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1205 20:34:37.990145  310801 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1205 20:34:38.006265  310801 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 20:34:38.010189  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:34:38.022257  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:34:38.140067  310801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:34:38.157890  310801 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539 for IP: 192.168.39.220
	I1205 20:34:38.157932  310801 certs.go:194] generating shared ca certs ...
	I1205 20:34:38.157956  310801 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.158149  310801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 20:34:38.158208  310801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 20:34:38.158222  310801 certs.go:256] generating profile certs ...
	I1205 20:34:38.158295  310801 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key
	I1205 20:34:38.158314  310801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt with IP's: []
	I1205 20:34:38.310974  310801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt ...
	I1205 20:34:38.311018  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt: {Name:mkf3aecb8b9ad227608c6977c2ad30cfc55949b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.311241  310801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key ...
	I1205 20:34:38.311266  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key: {Name:mkfab3a0d79e1baa864757b84edfb7968d976df8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.311382  310801 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.4e36e772
	I1205 20:34:38.311402  310801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.4e36e772 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220 192.168.39.254]
	I1205 20:34:38.414671  310801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.4e36e772 ...
	I1205 20:34:38.414714  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.4e36e772: {Name:mkc29737ec8270e2af482fa3e0afb3df1551e296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.414925  310801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.4e36e772 ...
	I1205 20:34:38.414944  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.4e36e772: {Name:mk5a1762b7078753229c19ae4d408dd983181bad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.415108  310801 certs.go:381] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.4e36e772 -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt
	I1205 20:34:38.415228  310801 certs.go:385] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.4e36e772 -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key
	I1205 20:34:38.415320  310801 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key
	I1205 20:34:38.415337  310801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt with IP's: []
	I1205 20:34:38.595265  310801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt ...
	I1205 20:34:38.595307  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt: {Name:mke4b60d010e9a42985a4147d8ca20fd58cfe926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.595513  310801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key ...
	I1205 20:34:38.595526  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key: {Name:mkc40847c87fbb64accdbdfed18b0a1220dd4fb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.595607  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:34:38.595627  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:34:38.595641  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:34:38.595656  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:34:38.595671  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 20:34:38.595687  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 20:34:38.595702  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 20:34:38.595721  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 20:34:38.595781  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 20:34:38.595820  310801 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 20:34:38.595832  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:34:38.595867  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 20:34:38.595927  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:34:38.595965  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 20:34:38.596013  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:34:38.596047  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:34:38.596065  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem -> /usr/share/ca-certificates/300765.pem
	I1205 20:34:38.596080  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /usr/share/ca-certificates/3007652.pem
	I1205 20:34:38.596679  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:34:38.621836  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:34:38.645971  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:34:38.669572  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:34:38.692394  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 20:34:38.714950  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:34:38.737673  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:34:38.760143  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:34:38.782837  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:34:38.804959  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 20:34:38.827699  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 20:34:38.850292  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:34:38.866443  310801 ssh_runner.go:195] Run: openssl version
	I1205 20:34:38.872267  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:34:38.883530  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:34:38.887895  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:34:38.887977  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:34:38.893617  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:34:38.906999  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 20:34:38.918595  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 20:34:38.924117  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 20:34:38.924185  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 20:34:38.932047  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 20:34:38.945495  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 20:34:38.961962  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 20:34:38.966385  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 20:34:38.966443  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 20:34:38.971854  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
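Aside (not part of the captured log): the symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are not arbitrary. Each openssl x509 -hash run prints the certificate's subject hash, and OpenSSL looks up trusted CAs in /etc/ssl/certs by that hash plus a ".0" suffix, which is why every ln -fs is preceded by a hash call. The same check can be reproduced locally; a sketch, assuming a copy of the CA file saved as minikubeCA.pem:
    # Prints the subject hash, e.g. b5213941, matching the /etc/ssl/certs/b5213941.0 symlink above.
    openssl x509 -hash -noout -in minikubeCA.pem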
	I1205 20:34:38.983000  310801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:34:38.987127  310801 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 20:34:38.987198  310801 kubeadm.go:392] StartCluster: {Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:34:38.987278  310801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:34:38.987360  310801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:34:39.023266  310801 cri.go:89] found id: ""
	I1205 20:34:39.023363  310801 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:34:39.033877  310801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:34:39.044224  310801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:34:39.054571  310801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:34:39.054597  310801 kubeadm.go:157] found existing configuration files:
	
	I1205 20:34:39.054653  310801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:34:39.064431  310801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:34:39.064513  310801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:34:39.074366  310801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:34:39.083912  310801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:34:39.083984  310801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:34:39.093938  310801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:34:39.103398  310801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:34:39.103465  310801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:34:39.113094  310801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:34:39.122507  310801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:34:39.122597  310801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:34:39.132005  310801 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:34:39.228908  310801 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 20:34:39.229049  310801 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:34:39.329735  310801 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:34:39.329925  310801 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:34:39.330069  310801 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 20:34:39.340103  310801 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:34:39.373910  310801 out.go:235]   - Generating certificates and keys ...
	I1205 20:34:39.374072  310801 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:34:39.374147  310801 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:34:39.462096  310801 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 20:34:39.625431  310801 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 20:34:39.899737  310801 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 20:34:40.026923  310801 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 20:34:40.326605  310801 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 20:34:40.326736  310801 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-689539 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I1205 20:34:40.487273  310801 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 20:34:40.487463  310801 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-689539 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I1205 20:34:41.025029  310801 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 20:34:41.081102  310801 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 20:34:41.372777  310801 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 20:34:41.372851  310801 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:34:41.470469  310801 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:34:41.550016  310801 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:34:41.829563  310801 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:34:41.903888  310801 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:34:42.075688  310801 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:34:42.076191  310801 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:34:42.079642  310801 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:34:42.116791  310801 out.go:235]   - Booting up control plane ...
	I1205 20:34:42.116956  310801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:34:42.117092  310801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:34:42.117208  310801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:34:42.117347  310801 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:34:42.117444  310801 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:34:42.117492  310801 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:34:42.242074  310801 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 20:34:42.242211  310801 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 20:34:42.743099  310801 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.406858ms
	I1205 20:34:42.743201  310801 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 20:34:48.715396  310801 kubeadm.go:310] [api-check] The API server is healthy after 5.976028105s
	I1205 20:34:48.727254  310801 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:34:48.744015  310801 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:34:49.271812  310801 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:34:49.272046  310801 kubeadm.go:310] [mark-control-plane] Marking the node ha-689539 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:34:49.283178  310801 kubeadm.go:310] [bootstrap-token] Using token: ynd0vv.39hctrjjdwln7xrk
	I1205 20:34:49.284635  310801 out.go:235]   - Configuring RBAC rules ...
	I1205 20:34:49.284805  310801 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:34:49.298869  310801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:34:49.307342  310801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:34:49.311034  310801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:34:49.314220  310801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:34:49.318275  310801 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:34:49.336336  310801 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:34:49.603608  310801 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 20:34:50.123229  310801 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 20:34:50.123255  310801 kubeadm.go:310] 
	I1205 20:34:50.123360  310801 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 20:34:50.123388  310801 kubeadm.go:310] 
	I1205 20:34:50.123496  310801 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 20:34:50.123533  310801 kubeadm.go:310] 
	I1205 20:34:50.123584  310801 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 20:34:50.123672  310801 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:34:50.123755  310801 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:34:50.123771  310801 kubeadm.go:310] 
	I1205 20:34:50.123856  310801 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 20:34:50.123868  310801 kubeadm.go:310] 
	I1205 20:34:50.123942  310801 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:34:50.123957  310801 kubeadm.go:310] 
	I1205 20:34:50.124045  310801 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 20:34:50.124156  310801 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:34:50.124256  310801 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:34:50.124269  310801 kubeadm.go:310] 
	I1205 20:34:50.124397  310801 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:34:50.124510  310801 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 20:34:50.124522  310801 kubeadm.go:310] 
	I1205 20:34:50.124645  310801 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ynd0vv.39hctrjjdwln7xrk \
	I1205 20:34:50.124778  310801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 \
	I1205 20:34:50.124879  310801 kubeadm.go:310] 	--control-plane 
	I1205 20:34:50.124896  310801 kubeadm.go:310] 
	I1205 20:34:50.125023  310801 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:34:50.125040  310801 kubeadm.go:310] 
	I1205 20:34:50.125138  310801 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ynd0vv.39hctrjjdwln7xrk \
	I1205 20:34:50.125303  310801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 
	I1205 20:34:50.125442  310801 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
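Aside (not part of the captured log): the bootstrap token in the join commands above (ynd0vv.39hctrjjdwln7xrk) carries the 24h TTL declared in the bootstrapTokens stanza of the generated config, so it is not reusable when retrying a failed join long after this run. For reference, a fresh join command can be minted on the control-plane node; a minimal sketch, assuming shell access to it:
    # Creates a new bootstrap token and prints the matching "kubeadm join ..." command.
    sudo kubeadm token create --print-join-command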
	I1205 20:34:50.125462  310801 cni.go:84] Creating CNI manager for ""
	I1205 20:34:50.125470  310801 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 20:34:50.127293  310801 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1205 20:34:50.128597  310801 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 20:34:50.133712  310801 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1205 20:34:50.133735  310801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1205 20:34:50.151910  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
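Aside (not part of the captured log): the manifest applied here is the kindnet CNI that cni.go:136 selected above because more than one node is expected. Whether it actually rolled out is worth checking when node networking later misbehaves; a sketch, assuming the usual app=kindnet label on minikube's kindnet DaemonSet and the kubeconfig context from this run (both assumptions):
    # One kindnet pod per node, each 1/1 Running, indicates the CNI is healthy.
    kubectl --context ha-689539 -n kube-system get pods -l app=kindnet -o wide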
	I1205 20:34:50.498891  310801 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:34:50.498983  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:50.498995  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-689539 minikube.k8s.io/updated_at=2024_12_05T20_34_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=ha-689539 minikube.k8s.io/primary=true
	I1205 20:34:50.513638  310801 ops.go:34] apiserver oom_adj: -16
	I1205 20:34:50.590747  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:51.091486  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:51.591491  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:52.091553  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:52.591289  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:53.091686  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:53.194917  310801 kubeadm.go:1113] duration metric: took 2.696013148s to wait for elevateKubeSystemPrivileges
	I1205 20:34:53.194977  310801 kubeadm.go:394] duration metric: took 14.207781964s to StartCluster
	I1205 20:34:53.195006  310801 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:53.195117  310801 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:34:53.198426  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:53.198793  310801 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:34:53.198831  310801 start.go:241] waiting for startup goroutines ...
	I1205 20:34:53.198863  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:34:53.198850  310801 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:34:53.198946  310801 addons.go:69] Setting storage-provisioner=true in profile "ha-689539"
	I1205 20:34:53.198964  310801 addons.go:69] Setting default-storageclass=true in profile "ha-689539"
	I1205 20:34:53.198979  310801 addons.go:234] Setting addon storage-provisioner=true in "ha-689539"
	I1205 20:34:53.198988  310801 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-689539"
	I1205 20:34:53.199021  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:34:53.199090  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:34:53.199551  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.199570  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.199599  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:53.199609  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:53.215764  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42471
	I1205 20:34:53.216062  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38369
	I1205 20:34:53.216436  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:53.216527  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:53.217017  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:53.217050  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:53.217168  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:53.217198  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:53.217403  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:53.217563  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:53.217568  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:34:53.218173  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.218228  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:53.219954  310801 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:34:53.220226  310801 kapi.go:59] client config for ha-689539: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt", KeyFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key", CAFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:34:53.220737  310801 cert_rotation.go:140] Starting client certificate rotation controller
	I1205 20:34:53.220963  310801 addons.go:234] Setting addon default-storageclass=true in "ha-689539"
	I1205 20:34:53.221000  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:34:53.221268  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.221303  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:53.235358  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I1205 20:34:53.235938  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:53.236563  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:53.236595  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:53.236975  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:53.237206  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:34:53.237645  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41495
	I1205 20:34:53.238195  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:53.238727  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:53.238753  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:53.239124  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:53.239183  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:53.239643  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.239697  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:53.241617  310801 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:34:53.243036  310801 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:34:53.243058  310801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:34:53.243080  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:53.247044  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:53.247514  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:53.247542  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:53.247718  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:53.248011  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:53.248218  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:53.248413  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:53.257997  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I1205 20:34:53.258521  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:53.259183  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:53.259218  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:53.259691  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:53.259961  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:34:53.262068  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:53.262345  310801 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:34:53.262363  310801 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:34:53.262386  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:53.265363  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:53.265818  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:53.265848  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:53.266018  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:53.266213  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:53.266327  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:53.266435  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:53.311906  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 20:34:53.428778  310801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:34:53.457287  310801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:34:53.655441  310801 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
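Aside (not part of the captured log): the sed pipeline above rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side address 192.168.39.1 from inside the cluster. The injected block is easy to confirm after the fact; a sketch, assuming the kubeconfig context created by this run:
    # The Corefile should now contain a hosts { 192.168.39.1 host.minikube.internal ... } stanza.
    kubectl --context ha-689539 -n kube-system get configmap coredns -o yaml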
	I1205 20:34:53.958432  310801 main.go:141] libmachine: Making call to close driver server
	I1205 20:34:53.958460  310801 main.go:141] libmachine: (ha-689539) Calling .Close
	I1205 20:34:53.958502  310801 main.go:141] libmachine: Making call to close driver server
	I1205 20:34:53.958541  310801 main.go:141] libmachine: (ha-689539) Calling .Close
	I1205 20:34:53.958824  310801 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:34:53.958842  310801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:34:53.958852  310801 main.go:141] libmachine: Making call to close driver server
	I1205 20:34:53.958860  310801 main.go:141] libmachine: (ha-689539) Calling .Close
	I1205 20:34:53.958920  310801 main.go:141] libmachine: (ha-689539) DBG | Closing plugin on server side
	I1205 20:34:53.958929  310801 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:34:53.958944  310801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:34:53.958951  310801 main.go:141] libmachine: Making call to close driver server
	I1205 20:34:53.958957  310801 main.go:141] libmachine: (ha-689539) Calling .Close
	I1205 20:34:53.959133  310801 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:34:53.959149  310801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:34:53.959214  310801 main.go:141] libmachine: (ha-689539) DBG | Closing plugin on server side
	I1205 20:34:53.959271  310801 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:34:53.959300  310801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:34:53.959388  310801 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 20:34:53.959421  310801 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 20:34:53.959540  310801 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1205 20:34:53.959549  310801 round_trippers.go:469] Request Headers:
	I1205 20:34:53.959559  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:34:53.959569  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:34:53.981877  310801 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I1205 20:34:53.982523  310801 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1205 20:34:53.982543  310801 round_trippers.go:469] Request Headers:
	I1205 20:34:53.982553  310801 round_trippers.go:473]     Content-Type: application/json
	I1205 20:34:53.982558  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:34:53.982562  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:34:53.985387  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:34:53.985542  310801 main.go:141] libmachine: Making call to close driver server
	I1205 20:34:53.985554  310801 main.go:141] libmachine: (ha-689539) Calling .Close
	I1205 20:34:53.985883  310801 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:34:53.985918  310801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:34:53.985939  310801 main.go:141] libmachine: (ha-689539) DBG | Closing plugin on server side
	I1205 20:34:53.987986  310801 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1205 20:34:53.989183  310801 addons.go:510] duration metric: took 790.33722ms for enable addons: enabled=[storage-provisioner default-storageclass]
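Aside (not part of the captured log): the GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses above appears to be minikube marking the standard StorageClass as the cluster default after enabling the storage-provisioner and default-storageclass addons. A quick post-hoc check could look like this sketch, assuming the same kubeconfig context:
    # "standard" should carry the default-class annotation, and the provisioner pod should be Running.
    kubectl --context ha-689539 get storageclass
    kubectl --context ha-689539 -n kube-system get pod storage-provisioner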
	I1205 20:34:53.989228  310801 start.go:246] waiting for cluster config update ...
	I1205 20:34:53.989258  310801 start.go:255] writing updated cluster config ...
	I1205 20:34:53.991007  310801 out.go:201] 
	I1205 20:34:53.992546  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:34:53.992653  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:34:53.994377  310801 out.go:177] * Starting "ha-689539-m02" control-plane node in "ha-689539" cluster
	I1205 20:34:53.995700  310801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:34:53.995727  310801 cache.go:56] Caching tarball of preloaded images
	I1205 20:34:53.995849  310801 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:34:53.995862  310801 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:34:53.995934  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:34:53.996107  310801 start.go:360] acquireMachinesLock for ha-689539-m02: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:34:53.996153  310801 start.go:364] duration metric: took 23.521µs to acquireMachinesLock for "ha-689539-m02"
	I1205 20:34:53.996172  310801 start.go:93] Provisioning new machine with config: &{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:34:53.996237  310801 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1205 20:34:53.998557  310801 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 20:34:53.998670  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.998722  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:54.015008  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43505
	I1205 20:34:54.015521  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:54.016066  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:54.016091  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:54.016507  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:54.016709  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetMachineName
	I1205 20:34:54.016933  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:34:54.017199  310801 start.go:159] libmachine.API.Create for "ha-689539" (driver="kvm2")
	I1205 20:34:54.017236  310801 client.go:168] LocalClient.Create starting
	I1205 20:34:54.017303  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem
	I1205 20:34:54.017352  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:34:54.017375  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:34:54.017449  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem
	I1205 20:34:54.017479  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:34:54.017495  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:34:54.017521  310801 main.go:141] libmachine: Running pre-create checks...
	I1205 20:34:54.017533  310801 main.go:141] libmachine: (ha-689539-m02) Calling .PreCreateCheck
	I1205 20:34:54.017789  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetConfigRaw
	I1205 20:34:54.018296  310801 main.go:141] libmachine: Creating machine...
	I1205 20:34:54.018313  310801 main.go:141] libmachine: (ha-689539-m02) Calling .Create
	I1205 20:34:54.018519  310801 main.go:141] libmachine: (ha-689539-m02) Creating KVM machine...
	I1205 20:34:54.019903  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found existing default KVM network
	I1205 20:34:54.020058  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found existing private KVM network mk-ha-689539
	I1205 20:34:54.020167  310801 main.go:141] libmachine: (ha-689539-m02) Setting up store path in /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02 ...
	I1205 20:34:54.020190  310801 main.go:141] libmachine: (ha-689539-m02) Building disk image from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 20:34:54.020273  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:54.020159  311180 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:34:54.020403  310801 main.go:141] libmachine: (ha-689539-m02) Downloading /home/jenkins/minikube-integration/20053-293485/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 20:34:54.317847  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:54.317662  311180 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa...
	I1205 20:34:54.529086  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:54.528946  311180 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/ha-689539-m02.rawdisk...
	I1205 20:34:54.529124  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Writing magic tar header
	I1205 20:34:54.529140  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Writing SSH key tar header
	I1205 20:34:54.529158  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:54.529070  311180 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02 ...
	I1205 20:34:54.529265  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02
	I1205 20:34:54.529295  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02 (perms=drwx------)
	I1205 20:34:54.529308  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines
	I1205 20:34:54.529327  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:34:54.529337  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485
	I1205 20:34:54.529349  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 20:34:54.529360  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins
	I1205 20:34:54.529372  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines (perms=drwxr-xr-x)
	I1205 20:34:54.529383  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home
	I1205 20:34:54.529398  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube (perms=drwxr-xr-x)
	I1205 20:34:54.529416  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485 (perms=drwxrwxr-x)
	I1205 20:34:54.529429  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 20:34:54.529443  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 20:34:54.529454  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Skipping /home - not owner
	I1205 20:34:54.529461  310801 main.go:141] libmachine: (ha-689539-m02) Creating domain...
	I1205 20:34:54.530562  310801 main.go:141] libmachine: (ha-689539-m02) define libvirt domain using xml: 
	I1205 20:34:54.530603  310801 main.go:141] libmachine: (ha-689539-m02) <domain type='kvm'>
	I1205 20:34:54.530622  310801 main.go:141] libmachine: (ha-689539-m02)   <name>ha-689539-m02</name>
	I1205 20:34:54.530636  310801 main.go:141] libmachine: (ha-689539-m02)   <memory unit='MiB'>2200</memory>
	I1205 20:34:54.530645  310801 main.go:141] libmachine: (ha-689539-m02)   <vcpu>2</vcpu>
	I1205 20:34:54.530652  310801 main.go:141] libmachine: (ha-689539-m02)   <features>
	I1205 20:34:54.530662  310801 main.go:141] libmachine: (ha-689539-m02)     <acpi/>
	I1205 20:34:54.530667  310801 main.go:141] libmachine: (ha-689539-m02)     <apic/>
	I1205 20:34:54.530672  310801 main.go:141] libmachine: (ha-689539-m02)     <pae/>
	I1205 20:34:54.530676  310801 main.go:141] libmachine: (ha-689539-m02)     
	I1205 20:34:54.530682  310801 main.go:141] libmachine: (ha-689539-m02)   </features>
	I1205 20:34:54.530687  310801 main.go:141] libmachine: (ha-689539-m02)   <cpu mode='host-passthrough'>
	I1205 20:34:54.530691  310801 main.go:141] libmachine: (ha-689539-m02)   
	I1205 20:34:54.530700  310801 main.go:141] libmachine: (ha-689539-m02)   </cpu>
	I1205 20:34:54.530705  310801 main.go:141] libmachine: (ha-689539-m02)   <os>
	I1205 20:34:54.530714  310801 main.go:141] libmachine: (ha-689539-m02)     <type>hvm</type>
	I1205 20:34:54.530720  310801 main.go:141] libmachine: (ha-689539-m02)     <boot dev='cdrom'/>
	I1205 20:34:54.530727  310801 main.go:141] libmachine: (ha-689539-m02)     <boot dev='hd'/>
	I1205 20:34:54.530733  310801 main.go:141] libmachine: (ha-689539-m02)     <bootmenu enable='no'/>
	I1205 20:34:54.530737  310801 main.go:141] libmachine: (ha-689539-m02)   </os>
	I1205 20:34:54.530742  310801 main.go:141] libmachine: (ha-689539-m02)   <devices>
	I1205 20:34:54.530747  310801 main.go:141] libmachine: (ha-689539-m02)     <disk type='file' device='cdrom'>
	I1205 20:34:54.530762  310801 main.go:141] libmachine: (ha-689539-m02)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/boot2docker.iso'/>
	I1205 20:34:54.530777  310801 main.go:141] libmachine: (ha-689539-m02)       <target dev='hdc' bus='scsi'/>
	I1205 20:34:54.530792  310801 main.go:141] libmachine: (ha-689539-m02)       <readonly/>
	I1205 20:34:54.530801  310801 main.go:141] libmachine: (ha-689539-m02)     </disk>
	I1205 20:34:54.530835  310801 main.go:141] libmachine: (ha-689539-m02)     <disk type='file' device='disk'>
	I1205 20:34:54.530866  310801 main.go:141] libmachine: (ha-689539-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 20:34:54.530888  310801 main.go:141] libmachine: (ha-689539-m02)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/ha-689539-m02.rawdisk'/>
	I1205 20:34:54.530900  310801 main.go:141] libmachine: (ha-689539-m02)       <target dev='hda' bus='virtio'/>
	I1205 20:34:54.530910  310801 main.go:141] libmachine: (ha-689539-m02)     </disk>
	I1205 20:34:54.530920  310801 main.go:141] libmachine: (ha-689539-m02)     <interface type='network'>
	I1205 20:34:54.530930  310801 main.go:141] libmachine: (ha-689539-m02)       <source network='mk-ha-689539'/>
	I1205 20:34:54.530940  310801 main.go:141] libmachine: (ha-689539-m02)       <model type='virtio'/>
	I1205 20:34:54.530948  310801 main.go:141] libmachine: (ha-689539-m02)     </interface>
	I1205 20:34:54.530963  310801 main.go:141] libmachine: (ha-689539-m02)     <interface type='network'>
	I1205 20:34:54.531000  310801 main.go:141] libmachine: (ha-689539-m02)       <source network='default'/>
	I1205 20:34:54.531021  310801 main.go:141] libmachine: (ha-689539-m02)       <model type='virtio'/>
	I1205 20:34:54.531046  310801 main.go:141] libmachine: (ha-689539-m02)     </interface>
	I1205 20:34:54.531060  310801 main.go:141] libmachine: (ha-689539-m02)     <serial type='pty'>
	I1205 20:34:54.531070  310801 main.go:141] libmachine: (ha-689539-m02)       <target port='0'/>
	I1205 20:34:54.531080  310801 main.go:141] libmachine: (ha-689539-m02)     </serial>
	I1205 20:34:54.531092  310801 main.go:141] libmachine: (ha-689539-m02)     <console type='pty'>
	I1205 20:34:54.531101  310801 main.go:141] libmachine: (ha-689539-m02)       <target type='serial' port='0'/>
	I1205 20:34:54.531113  310801 main.go:141] libmachine: (ha-689539-m02)     </console>
	I1205 20:34:54.531124  310801 main.go:141] libmachine: (ha-689539-m02)     <rng model='virtio'>
	I1205 20:34:54.531149  310801 main.go:141] libmachine: (ha-689539-m02)       <backend model='random'>/dev/random</backend>
	I1205 20:34:54.531171  310801 main.go:141] libmachine: (ha-689539-m02)     </rng>
	I1205 20:34:54.531193  310801 main.go:141] libmachine: (ha-689539-m02)     
	I1205 20:34:54.531210  310801 main.go:141] libmachine: (ha-689539-m02)     
	I1205 20:34:54.531219  310801 main.go:141] libmachine: (ha-689539-m02)   </devices>
	I1205 20:34:54.531228  310801 main.go:141] libmachine: (ha-689539-m02) </domain>
	I1205 20:34:54.531253  310801 main.go:141] libmachine: (ha-689539-m02) 
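For readability, the libvirt domain definition that the driver logs line by line above is repeated below with the log prefixes stripped. The content is copied from the log itself; a few payload lines in the log are empty and are simply omitted here.

	<domain type='kvm'>
	  <name>ha-689539-m02</name>
	  <memory unit='MiB'>2200</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/ha-689539-m02.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-ha-689539'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>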
	I1205 20:34:54.538318  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:db:6c:41 in network default
	I1205 20:34:54.538874  310801 main.go:141] libmachine: (ha-689539-m02) Ensuring networks are active...
	I1205 20:34:54.538905  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:54.539900  310801 main.go:141] libmachine: (ha-689539-m02) Ensuring network default is active
	I1205 20:34:54.540256  310801 main.go:141] libmachine: (ha-689539-m02) Ensuring network mk-ha-689539 is active
	I1205 20:34:54.540685  310801 main.go:141] libmachine: (ha-689539-m02) Getting domain xml...
	I1205 20:34:54.541702  310801 main.go:141] libmachine: (ha-689539-m02) Creating domain...
	I1205 20:34:55.795769  310801 main.go:141] libmachine: (ha-689539-m02) Waiting to get IP...
	I1205 20:34:55.796704  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:55.797107  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:55.797137  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:55.797080  311180 retry.go:31] will retry after 248.666925ms: waiting for machine to come up
	I1205 20:34:56.047775  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:56.048308  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:56.048345  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:56.048228  311180 retry.go:31] will retry after 275.164049ms: waiting for machine to come up
	I1205 20:34:56.324858  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:56.325265  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:56.325293  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:56.325230  311180 retry.go:31] will retry after 471.642082ms: waiting for machine to come up
	I1205 20:34:56.798901  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:56.799411  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:56.799445  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:56.799337  311180 retry.go:31] will retry after 372.986986ms: waiting for machine to come up
	I1205 20:34:57.173842  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:57.174284  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:57.174315  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:57.174243  311180 retry.go:31] will retry after 491.328215ms: waiting for machine to come up
	I1205 20:34:57.666917  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:57.667363  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:57.667388  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:57.667340  311180 retry.go:31] will retry after 701.698041ms: waiting for machine to come up
	I1205 20:34:58.370293  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:58.370782  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:58.370813  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:58.370725  311180 retry.go:31] will retry after 750.048133ms: waiting for machine to come up
	I1205 20:34:59.121998  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:59.122452  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:59.122482  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:59.122416  311180 retry.go:31] will retry after 1.373917427s: waiting for machine to come up
	I1205 20:35:00.498001  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:00.498527  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:00.498564  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:00.498461  311180 retry.go:31] will retry after 1.273603268s: waiting for machine to come up
	I1205 20:35:01.773536  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:01.774024  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:01.774055  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:01.773976  311180 retry.go:31] will retry after 1.863052543s: waiting for machine to come up
	I1205 20:35:03.640228  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:03.640744  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:03.640780  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:03.640681  311180 retry.go:31] will retry after 2.126872214s: waiting for machine to come up
	I1205 20:35:05.768939  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:05.769465  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:05.769495  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:05.769419  311180 retry.go:31] will retry after 2.492593838s: waiting for machine to come up
	I1205 20:35:08.265013  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:08.265518  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:08.265557  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:08.265445  311180 retry.go:31] will retry after 4.136586499s: waiting for machine to come up
	I1205 20:35:12.405674  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:12.406165  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:12.406195  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:12.406099  311180 retry.go:31] will retry after 4.175170751s: waiting for machine to come up
	I1205 20:35:16.583008  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:16.583448  310801 main.go:141] libmachine: (ha-689539-m02) Found IP for machine: 192.168.39.224
	I1205 20:35:16.583483  310801 main.go:141] libmachine: (ha-689539-m02) Reserving static IP address...
	I1205 20:35:16.583508  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has current primary IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:16.583773  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find host DHCP lease matching {name: "ha-689539-m02", mac: "52:54:00:01:ca:45", ip: "192.168.39.224"} in network mk-ha-689539
	I1205 20:35:16.666774  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Getting to WaitForSSH function...
	I1205 20:35:16.666819  310801 main.go:141] libmachine: (ha-689539-m02) Reserved static IP address: 192.168.39.224
	I1205 20:35:16.666833  310801 main.go:141] libmachine: (ha-689539-m02) Waiting for SSH to be available...
	I1205 20:35:16.669680  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:16.670217  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539
	I1205 20:35:16.670248  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find defined IP address of network mk-ha-689539 interface with MAC address 52:54:00:01:ca:45
	I1205 20:35:16.670412  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Using SSH client type: external
	I1205 20:35:16.670440  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa (-rw-------)
	I1205 20:35:16.670473  310801 main.go:141] libmachine: (ha-689539-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:35:16.670490  310801 main.go:141] libmachine: (ha-689539-m02) DBG | About to run SSH command:
	I1205 20:35:16.670506  310801 main.go:141] libmachine: (ha-689539-m02) DBG | exit 0
	I1205 20:35:16.675197  310801 main.go:141] libmachine: (ha-689539-m02) DBG | SSH cmd err, output: exit status 255: 
	I1205 20:35:16.675236  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1205 20:35:16.675246  310801 main.go:141] libmachine: (ha-689539-m02) DBG | command : exit 0
	I1205 20:35:16.675253  310801 main.go:141] libmachine: (ha-689539-m02) DBG | err     : exit status 255
	I1205 20:35:16.675269  310801 main.go:141] libmachine: (ha-689539-m02) DBG | output  : 
	I1205 20:35:19.675465  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Getting to WaitForSSH function...
	I1205 20:35:19.678124  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.678615  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:19.678646  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.678752  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Using SSH client type: external
	I1205 20:35:19.678781  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa (-rw-------)
	I1205 20:35:19.678817  310801 main.go:141] libmachine: (ha-689539-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.224 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:35:19.678840  310801 main.go:141] libmachine: (ha-689539-m02) DBG | About to run SSH command:
	I1205 20:35:19.678857  310801 main.go:141] libmachine: (ha-689539-m02) DBG | exit 0
	I1205 20:35:19.805836  310801 main.go:141] libmachine: (ha-689539-m02) DBG | SSH cmd err, output: <nil>: 
	I1205 20:35:19.806152  310801 main.go:141] libmachine: (ha-689539-m02) KVM machine creation complete!
	I1205 20:35:19.806464  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetConfigRaw
	I1205 20:35:19.807084  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:19.807313  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:19.807474  310801 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 20:35:19.807492  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetState
	I1205 20:35:19.808787  310801 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 20:35:19.808804  310801 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 20:35:19.808811  310801 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 20:35:19.808818  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:19.811344  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.811714  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:19.811743  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.811928  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:19.812132  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:19.812273  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:19.812422  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:19.812622  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:19.812860  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:19.812871  310801 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 20:35:19.921262  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:35:19.921299  310801 main.go:141] libmachine: Detecting the provisioner...
	I1205 20:35:19.921312  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:19.924600  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.925051  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:19.925075  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.925275  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:19.925497  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:19.925651  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:19.925794  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:19.925996  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:19.926221  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:19.926235  310801 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 20:35:20.039067  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 20:35:20.039180  310801 main.go:141] libmachine: found compatible host: buildroot
	I1205 20:35:20.039192  310801 main.go:141] libmachine: Provisioning with buildroot...
	I1205 20:35:20.039205  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetMachineName
	I1205 20:35:20.039552  310801 buildroot.go:166] provisioning hostname "ha-689539-m02"
	I1205 20:35:20.039589  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetMachineName
	I1205 20:35:20.039855  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.043233  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.043789  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.043820  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.044027  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:20.044236  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.044433  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.044659  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:20.044843  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:20.045030  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:20.045042  310801 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-689539-m02 && echo "ha-689539-m02" | sudo tee /etc/hostname
	I1205 20:35:20.173519  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-689539-m02
	
	I1205 20:35:20.173562  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.176643  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.176967  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.176994  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.177264  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:20.177464  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.177721  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.177868  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:20.178085  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:20.178312  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:20.178329  310801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-689539-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-689539-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-689539-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:35:20.299145  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
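The hostname snippet above either rewrites an existing 127.0.1.1 entry or appends one, so the node's own name resolves locally. Reconstructed from the command (not read back from the guest), /etc/hosts should afterwards contain:

	127.0.1.1 ha-689539-m02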
	I1205 20:35:20.299194  310801 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 20:35:20.299221  310801 buildroot.go:174] setting up certificates
	I1205 20:35:20.299251  310801 provision.go:84] configureAuth start
	I1205 20:35:20.299278  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetMachineName
	I1205 20:35:20.299618  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetIP
	I1205 20:35:20.302873  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.303197  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.303234  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.303352  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.305836  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.306274  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.306298  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.306450  310801 provision.go:143] copyHostCerts
	I1205 20:35:20.306489  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:35:20.306536  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 20:35:20.306547  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:35:20.306613  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 20:35:20.306694  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:35:20.306712  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 20:35:20.306719  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:35:20.306743  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 20:35:20.306790  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:35:20.306807  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 20:35:20.306813  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:35:20.306832  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 20:35:20.306880  310801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.ha-689539-m02 san=[127.0.0.1 192.168.39.224 ha-689539-m02 localhost minikube]
	I1205 20:35:20.462180  310801 provision.go:177] copyRemoteCerts
	I1205 20:35:20.462244  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:35:20.462273  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.465164  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.465498  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.465526  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.465765  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:20.465979  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.466125  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:20.466256  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa Username:docker}
	I1205 20:35:20.552142  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 20:35:20.552248  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:35:20.577611  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 20:35:20.577693  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 20:35:20.602829  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 20:35:20.602927  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 20:35:20.629296  310801 provision.go:87] duration metric: took 330.013316ms to configureAuth
	I1205 20:35:20.629334  310801 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:35:20.629554  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:35:20.629672  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.632608  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.633010  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.633046  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.633219  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:20.633418  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.633617  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.633785  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:20.634021  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:20.634203  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:20.634221  310801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:35:20.861660  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
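As the echoed output confirms, the command above writes a single variable into /etc/sysconfig/crio.minikube and restarts cri-o so the flag is picked up; the 10.96.0.0/12 value is the ServiceCIDR from the cluster config logged earlier. Reassembled from the printf payload, the file should read:

	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '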
	I1205 20:35:20.861695  310801 main.go:141] libmachine: Checking connection to Docker...
	I1205 20:35:20.861706  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetURL
	I1205 20:35:20.863182  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Using libvirt version 6000000
	I1205 20:35:20.865580  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.866002  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.866022  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.866305  310801 main.go:141] libmachine: Docker is up and running!
	I1205 20:35:20.866329  310801 main.go:141] libmachine: Reticulating splines...
	I1205 20:35:20.866337  310801 client.go:171] duration metric: took 26.849092016s to LocalClient.Create
	I1205 20:35:20.866366  310801 start.go:167] duration metric: took 26.849169654s to libmachine.API.Create "ha-689539"
	I1205 20:35:20.866385  310801 start.go:293] postStartSetup for "ha-689539-m02" (driver="kvm2")
	I1205 20:35:20.866396  310801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:35:20.866415  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:20.866737  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:35:20.866782  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.869117  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.869511  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.869539  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.869712  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:20.869922  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.870094  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:20.870213  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa Username:docker}
	I1205 20:35:20.956165  310801 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:35:20.960554  310801 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:35:20.960593  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 20:35:20.960663  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 20:35:20.960745  310801 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 20:35:20.960756  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /etc/ssl/certs/3007652.pem
	I1205 20:35:20.960845  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:35:20.970171  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:35:20.993469  310801 start.go:296] duration metric: took 127.065366ms for postStartSetup
	I1205 20:35:20.993548  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetConfigRaw
	I1205 20:35:20.994261  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetIP
	I1205 20:35:20.996956  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.997403  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.997431  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.997694  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:35:20.997894  310801 start.go:128] duration metric: took 27.001645944s to createHost
	I1205 20:35:20.997947  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:21.000356  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.000768  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:21.000793  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.000932  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:21.001164  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:21.001372  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:21.001567  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:21.001800  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:21.002023  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:21.002035  310801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:35:21.114783  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430921.091468988
	
	I1205 20:35:21.114813  310801 fix.go:216] guest clock: 1733430921.091468988
	I1205 20:35:21.114823  310801 fix.go:229] Guest: 2024-12-05 20:35:21.091468988 +0000 UTC Remote: 2024-12-05 20:35:20.997930274 +0000 UTC m=+72.965807310 (delta=93.538714ms)
	I1205 20:35:21.114853  310801 fix.go:200] guest clock delta is within tolerance: 93.538714ms
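Checking the fix.go arithmetic above: the guest clock reads 1733430921.091468988 and the host recorded 1733430920.997930274, and 1733430921.091468988 - 1733430920.997930274 = 0.093538714 s, i.e. exactly the 93.538714ms delta the log reports, which is inside the skew tolerance.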
	I1205 20:35:21.114861  310801 start.go:83] releasing machines lock for "ha-689539-m02", held for 27.118697006s
	I1205 20:35:21.114886  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:21.115206  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetIP
	I1205 20:35:21.118066  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.118466  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:21.118504  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.121045  310801 out.go:177] * Found network options:
	I1205 20:35:21.122608  310801 out.go:177]   - NO_PROXY=192.168.39.220
	W1205 20:35:21.124023  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:35:21.124097  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:21.124832  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:21.125105  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:21.125251  310801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:35:21.125326  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	W1205 20:35:21.125332  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:35:21.125435  310801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:35:21.125468  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:21.128474  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.128563  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.128871  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:21.128901  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.129000  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:21.129022  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:21.129073  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.129233  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:21.129232  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:21.129435  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:21.129437  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:21.129634  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:21.129634  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa Username:docker}
	I1205 20:35:21.129803  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa Username:docker}
	I1205 20:35:21.365680  310801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:35:21.371668  310801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:35:21.371782  310801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:35:21.388230  310801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:35:21.388261  310801 start.go:495] detecting cgroup driver to use...
	I1205 20:35:21.388348  310801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:35:21.404768  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:35:21.419149  310801 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:35:21.419231  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:35:21.433110  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:35:21.447375  310801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:35:21.563926  310801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:35:21.729278  310801 docker.go:233] disabling docker service ...
	I1205 20:35:21.729378  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:35:21.744065  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:35:21.757106  310801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:35:21.878877  310801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:35:21.983688  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:35:21.997947  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:35:22.016485  310801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:35:22.016555  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.027185  310801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:35:22.027270  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.037892  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.048316  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.059131  310801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:35:22.075255  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.086233  310801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.103682  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.114441  310801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:35:22.124360  310801 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:35:22.124442  310801 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:35:22.138043  310801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:35:22.147996  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:35:22.253398  310801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:35:22.348717  310801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:35:22.348790  310801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:35:22.353405  310801 start.go:563] Will wait 60s for crictl version
	I1205 20:35:22.353468  310801 ssh_runner.go:195] Run: which crictl
	I1205 20:35:22.357215  310801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:35:22.393844  310801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:35:22.393959  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:35:22.422018  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:35:22.452780  310801 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:35:22.454193  310801 out.go:177]   - env NO_PROXY=192.168.39.220
	I1205 20:35:22.455398  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetIP
	I1205 20:35:22.458243  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:22.458611  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:22.458649  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:22.458851  310801 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:35:22.463124  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:35:22.475841  310801 mustload.go:65] Loading cluster: ha-689539
	I1205 20:35:22.476087  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:35:22.476420  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:22.476470  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:22.492198  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I1205 20:35:22.492793  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:22.493388  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:35:22.493418  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:22.493835  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:22.494104  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:35:22.495827  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:35:22.496123  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:22.496160  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:22.512684  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35311
	I1205 20:35:22.513289  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:22.513852  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:35:22.513877  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:22.514257  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:22.514474  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:35:22.514658  310801 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539 for IP: 192.168.39.224
	I1205 20:35:22.514672  310801 certs.go:194] generating shared ca certs ...
	I1205 20:35:22.514692  310801 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:22.514826  310801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 20:35:22.514868  310801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 20:35:22.514875  310801 certs.go:256] generating profile certs ...
	I1205 20:35:22.514942  310801 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key
	I1205 20:35:22.514966  310801 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.0bcaa736
	I1205 20:35:22.514982  310801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.0bcaa736 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220 192.168.39.224 192.168.39.254]
	I1205 20:35:22.799808  310801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.0bcaa736 ...
	I1205 20:35:22.799844  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.0bcaa736: {Name:mk805c9f0c218cfc1a14cc95ce5560d63a919c31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:22.800063  310801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.0bcaa736 ...
	I1205 20:35:22.800084  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.0bcaa736: {Name:mk878dc23fa761ab4aecc158abe1405fbc550219 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:22.800189  310801 certs.go:381] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.0bcaa736 -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt
	I1205 20:35:22.800337  310801 certs.go:385] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.0bcaa736 -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key
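
	(Editor's note, not part of the test log: the step above mints the apiserver serving certificate whose IP SANs cover the service IP, localhost, both node IPs and the HA VIP listed in the "Generating cert ... with IP's" line. A minimal, self-contained Go sketch of producing a certificate with those IP SANs follows; it is self-signed for brevity, whereas minikube signs with its cluster CA, and all file names are illustrative.)

	// Sketch only: self-signed serving certificate whose IP SANs mirror the log above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration logged for this profile
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{ // the SANs from the log line above
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
				net.ParseIP("192.168.39.220"), net.ParseIP("192.168.39.224"), net.ParseIP("192.168.39.254"),
			},
		}
		// Self-signed for brevity: the template is its own parent.
		der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		certOut, _ := os.Create("apiserver.crt")
		defer certOut.Close()
		pem.Encode(certOut, &pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyOut, _ := os.Create("apiserver.key")
		defer keyOut.Close()
		pem.Encode(keyOut, &pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	}
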
	I1205 20:35:22.800471  310801 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key
	I1205 20:35:22.800490  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:35:22.800508  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:35:22.800524  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:35:22.800539  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:35:22.800554  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 20:35:22.800569  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 20:35:22.800578  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 20:35:22.800588  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 20:35:22.800649  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 20:35:22.800680  310801 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 20:35:22.800690  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:35:22.800714  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 20:35:22.800740  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:35:22.800782  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 20:35:22.800829  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:35:22.800856  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:35:22.800870  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem -> /usr/share/ca-certificates/300765.pem
	I1205 20:35:22.800883  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /usr/share/ca-certificates/3007652.pem
	I1205 20:35:22.800924  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:35:22.803915  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:35:22.804323  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:35:22.804357  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:35:22.804510  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:35:22.804779  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:35:22.804968  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:35:22.805127  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:35:22.874336  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1205 20:35:22.878799  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1205 20:35:22.889481  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1205 20:35:22.893603  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1205 20:35:22.907201  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1205 20:35:22.911129  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1205 20:35:22.921562  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1205 20:35:22.925468  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1205 20:35:22.935462  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1205 20:35:22.939312  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1205 20:35:22.949250  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1205 20:35:22.953120  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1205 20:35:22.964047  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:35:22.988860  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:35:23.013850  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:35:23.037874  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:35:23.062975  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1205 20:35:23.087802  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:35:23.112226  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:35:23.139642  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:35:23.168141  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:35:23.193470  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 20:35:23.218935  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 20:35:23.243452  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1205 20:35:23.261775  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1205 20:35:23.279011  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1205 20:35:23.296521  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1205 20:35:23.313399  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1205 20:35:23.330608  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1205 20:35:23.349181  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1205 20:35:23.366287  310801 ssh_runner.go:195] Run: openssl version
	I1205 20:35:23.372023  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:35:23.383498  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:35:23.387933  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:35:23.388026  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:35:23.393863  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:35:23.405145  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 20:35:23.416665  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 20:35:23.421806  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 20:35:23.421882  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 20:35:23.427892  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 20:35:23.439291  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 20:35:23.450645  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 20:35:23.455301  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 20:35:23.455397  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 20:35:23.461088  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:35:23.473062  310801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:35:23.477238  310801 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 20:35:23.477315  310801 kubeadm.go:934] updating node {m02 192.168.39.224 8443 v1.31.2 crio true true} ...
	I1205 20:35:23.477412  310801 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-689539-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:35:23.477446  310801 kube-vip.go:115] generating kube-vip config ...
	I1205 20:35:23.477488  310801 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 20:35:23.494130  310801 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 20:35:23.494206  310801 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
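
	(Editor's note, not part of the test log: the static-pod manifest above is what minikube writes to /etc/kubernetes/manifests/kube-vip.yaml so that kube-vip advertises the virtual IP 192.168.39.254 and load-balances the apiservers on port 8443. As a rough illustration only, a plain TCP dial can confirm the VIP answers; the address and port below are taken from that config.)

	// Sketch only: check that the kube-vip virtual IP accepts TCP connections
	// on the apiserver port shown in the manifest above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const vip = "192.168.39.254:8443" // address + lb_port from the kube-vip config

		conn, err := net.DialTimeout("tcp", vip, 3*time.Second)
		if err != nil {
			fmt.Println("VIP not reachable:", err)
			return
		}
		defer conn.Close()
		fmt.Println("VIP reachable:", conn.RemoteAddr())
	}
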
	I1205 20:35:23.494265  310801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:35:23.504559  310801 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1205 20:35:23.504639  310801 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1205 20:35:23.515268  310801 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1205 20:35:23.515267  310801 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1205 20:35:23.515267  310801 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1205 20:35:23.515420  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 20:35:23.515485  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 20:35:23.520360  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1205 20:35:23.520397  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1205 20:35:24.329721  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 20:35:24.329837  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 20:35:24.335194  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1205 20:35:24.335241  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1205 20:35:24.693728  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:24.707996  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 20:35:24.708127  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 20:35:24.712643  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1205 20:35:24.712685  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
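
	(Editor's note, not part of the test log: the kubectl, kubeadm and kubelet binaries above are fetched from dl.k8s.io, with the companion .sha256 file acting as the checksum source, as the checksum=file:... part of the download URLs indicates. A self-contained sketch of that download-and-verify step, assuming the .sha256 file holds just the hex digest, could look like this.)

	// Sketch only: download a release binary and verify it against the published .sha256 file.
	package main

	import (
		"crypto/sha256"
		"encoding/hex"
		"fmt"
		"io"
		"net/http"
		"os"
		"strings"
	)

	func fetch(url string) ([]byte, error) {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
		}
		return io.ReadAll(resp.Body)
	}

	func main() {
		const base = "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet"

		bin, err := fetch(base)
		if err != nil {
			panic(err)
		}
		sumFile, err := fetch(base + ".sha256")
		if err != nil {
			panic(err)
		}

		// Assumption: the .sha256 file contains only the hex digest (plus whitespace).
		want := strings.Fields(string(sumFile))[0]
		sum := sha256.Sum256(bin)
		got := hex.EncodeToString(sum[:])

		if got != want {
			panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
		}
		if err := os.WriteFile("kubelet", bin, 0o755); err != nil {
			panic(err)
		}
		fmt.Println("kubelet verified:", got)
	}
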
	I1205 20:35:25.030158  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1205 20:35:25.039864  310801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1205 20:35:25.056953  310801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:35:25.074038  310801 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 20:35:25.090341  310801 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 20:35:25.094291  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:35:25.106549  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:35:25.251421  310801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:35:25.281544  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:35:25.281958  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:25.282025  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:25.298815  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43001
	I1205 20:35:25.299446  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:25.299916  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:35:25.299940  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:25.300264  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:25.300471  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:35:25.300647  310801 start.go:317] joinCluster: &{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:35:25.300755  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1205 20:35:25.300777  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:35:25.303962  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:35:25.304378  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:35:25.304416  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:35:25.304612  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:35:25.304845  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:35:25.305034  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:35:25.305189  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:35:25.467206  310801 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:35:25.467286  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u7curd.swqoqc05eru6gfpp --discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-689539-m02 --control-plane --apiserver-advertise-address=192.168.39.224 --apiserver-bind-port=8443"
	I1205 20:35:47.115820  310801 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u7curd.swqoqc05eru6gfpp --discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-689539-m02 --control-plane --apiserver-advertise-address=192.168.39.224 --apiserver-bind-port=8443": (21.648499033s)
	I1205 20:35:47.115867  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1205 20:35:47.674102  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-689539-m02 minikube.k8s.io/updated_at=2024_12_05T20_35_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=ha-689539 minikube.k8s.io/primary=false
	I1205 20:35:47.783659  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-689539-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1205 20:35:47.899441  310801 start.go:319] duration metric: took 22.598789448s to joinCluster
	I1205 20:35:47.899529  310801 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:35:47.899871  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:35:47.901544  310801 out.go:177] * Verifying Kubernetes components...
	I1205 20:35:47.903164  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:35:48.171147  310801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:35:48.196654  310801 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:35:48.197028  310801 kapi.go:59] client config for ha-689539: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt", KeyFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key", CAFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1205 20:35:48.197120  310801 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.220:8443
	I1205 20:35:48.197520  310801 node_ready.go:35] waiting up to 6m0s for node "ha-689539-m02" to be "Ready" ...
	I1205 20:35:48.197656  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:48.197669  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:48.197681  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:48.197693  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:48.214799  310801 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1205 20:35:48.697777  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:48.697812  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:48.697824  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:48.697833  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:48.703691  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:35:49.198191  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:49.198217  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:49.198225  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:49.198229  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:49.204218  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:35:49.698048  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:49.698079  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:49.698090  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:49.698096  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:49.705663  310801 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1205 20:35:50.198629  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:50.198656  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:50.198669  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:50.198675  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:50.202111  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:50.202581  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:35:50.698434  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:50.698457  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:50.698465  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:50.698469  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:50.702335  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:51.197943  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:51.197971  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:51.197981  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:51.197985  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:51.201567  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:51.698634  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:51.698668  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:51.698680  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:51.698687  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:51.702470  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:52.198285  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:52.198318  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:52.198331  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:52.198338  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:52.202116  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:52.202820  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:35:52.697909  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:52.697940  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:52.697953  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:52.697959  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:52.700998  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:53.198023  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:53.198047  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:53.198056  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:53.198059  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:53.201259  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:53.698438  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:53.698462  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:53.698478  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:53.698482  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:53.701883  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:54.198346  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:54.198373  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:54.198381  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:54.198386  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:54.202207  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:54.203013  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:35:54.698384  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:54.698407  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:54.698415  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:54.698422  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:54.703135  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:35:55.198075  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:55.198102  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:55.198111  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:55.198116  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:55.275835  310801 round_trippers.go:574] Response Status: 200 OK in 77 milliseconds
	I1205 20:35:55.698292  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:55.698327  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:55.698343  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:55.698347  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:55.701831  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:56.197819  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:56.197847  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:56.197856  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:56.197861  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:56.201202  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:56.698240  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:56.698288  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:56.698299  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:56.698304  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:56.701586  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:56.702160  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:35:57.198590  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:57.198622  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:57.198633  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:57.198638  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:57.201959  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:57.698128  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:57.698159  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:57.698170  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:57.698175  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:57.703388  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:35:58.198316  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:58.198343  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:58.198352  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:58.198357  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:58.201617  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:58.698669  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:58.698694  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:58.698706  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:58.698710  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:58.702292  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:58.702971  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:35:59.198697  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:59.198726  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:59.198739  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:59.198747  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:59.205545  310801 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 20:35:59.698504  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:59.698536  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:59.698553  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:59.698560  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:59.702266  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:00.198245  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:00.198270  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:00.198279  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:00.198283  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:00.201787  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:00.698510  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:00.698544  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:00.698553  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:00.698563  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:00.701802  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:01.197953  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:01.197983  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:01.197994  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:01.197999  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:01.201035  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:01.201711  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:36:01.698167  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:01.698198  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:01.698210  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:01.698215  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:01.701264  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:02.198110  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:02.198141  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:02.198152  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:02.198157  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:02.201468  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:02.698626  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:02.698659  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:02.698669  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:02.698675  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:02.701881  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:03.198737  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:03.198763  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:03.198774  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:03.198779  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:03.202428  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:03.202953  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:36:03.698736  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:03.698768  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:03.698780  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:03.698788  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:03.702162  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:04.197743  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:04.197773  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:04.197784  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:04.197791  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:04.201284  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:04.698126  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:04.698155  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:04.698164  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:04.698168  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:04.701888  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:05.198088  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:05.198121  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:05.198131  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:05.198138  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:05.201797  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:05.698476  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:05.698506  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:05.698515  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:05.698520  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:05.701875  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:05.702580  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:36:06.198021  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:06.198061  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.198069  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.198074  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.201540  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:06.202101  310801 node_ready.go:49] node "ha-689539-m02" has status "Ready":"True"
	I1205 20:36:06.202126  310801 node_ready.go:38] duration metric: took 18.004581739s for node "ha-689539-m02" to be "Ready" ...
	I1205 20:36:06.202140  310801 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:36:06.202253  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:36:06.202268  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.202278  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.202285  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.206754  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:06.212677  310801 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.212799  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4ln9l
	I1205 20:36:06.212813  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.212822  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.212827  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.215643  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.216276  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:06.216293  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.216301  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.216304  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.218813  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.219400  310801 pod_ready.go:93] pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:06.219422  310801 pod_ready.go:82] duration metric: took 6.710961ms for pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.219433  310801 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.219519  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6qhhf
	I1205 20:36:06.219530  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.219537  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.219544  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.221986  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.222730  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:06.222744  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.222752  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.222757  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.225041  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.225536  310801 pod_ready.go:93] pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:06.225559  310801 pod_ready.go:82] duration metric: took 6.118464ms for pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.225582  310801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.225656  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539
	I1205 20:36:06.225668  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.225684  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.225696  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.228280  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.228948  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:06.228962  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.228970  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.228974  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.231708  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.232206  310801 pod_ready.go:93] pod "etcd-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:06.232225  310801 pod_ready.go:82] duration metric: took 6.631337ms for pod "etcd-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.232234  310801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.232328  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539-m02
	I1205 20:36:06.232338  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.232347  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.232357  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.234717  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.235313  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:06.235328  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.235336  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.235340  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.237446  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.237958  310801 pod_ready.go:93] pod "etcd-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:06.237979  310801 pod_ready.go:82] duration metric: took 5.738833ms for pod "etcd-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.237997  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.398468  310801 request.go:632] Waited for 160.38501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539
	I1205 20:36:06.398582  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539
	I1205 20:36:06.398592  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.398601  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.398605  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.402334  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:06.598805  310801 request.go:632] Waited for 195.477134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:06.598897  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:06.598903  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.598911  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.598914  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.602945  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:06.603481  310801 pod_ready.go:93] pod "kube-apiserver-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:06.603505  310801 pod_ready.go:82] duration metric: took 365.497043ms for pod "kube-apiserver-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.603516  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.798685  310801 request.go:632] Waited for 195.084248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m02
	I1205 20:36:06.798771  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m02
	I1205 20:36:06.798776  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.798786  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.798792  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.802375  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:06.998825  310801 request.go:632] Waited for 195.407022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:06.998895  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:06.998900  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.998908  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.998913  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:07.003073  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:07.003620  310801 pod_ready.go:93] pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:07.003641  310801 pod_ready.go:82] duration metric: took 400.118288ms for pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.003652  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.198723  310801 request.go:632] Waited for 194.973944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539
	I1205 20:36:07.198815  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539
	I1205 20:36:07.198822  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:07.198834  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:07.198844  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:07.202792  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:07.398908  310801 request.go:632] Waited for 195.413458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:07.398993  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:07.399006  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:07.399019  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:07.399029  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:07.403088  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:07.403800  310801 pod_ready.go:93] pod "kube-controller-manager-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:07.403838  310801 pod_ready.go:82] duration metric: took 400.178189ms for pod "kube-controller-manager-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.403856  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.598771  310801 request.go:632] Waited for 194.816012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m02
	I1205 20:36:07.598840  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m02
	I1205 20:36:07.598845  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:07.598862  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:07.598869  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:07.602566  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:07.798831  310801 request.go:632] Waited for 195.438007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:07.798985  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:07.798998  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:07.799015  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:07.799023  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:07.803171  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:07.803823  310801 pod_ready.go:93] pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:07.803849  310801 pod_ready.go:82] duration metric: took 399.978899ms for pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.803864  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9tslx" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.998893  310801 request.go:632] Waited for 194.90975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tslx
	I1205 20:36:07.998995  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tslx
	I1205 20:36:07.999006  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:07.999033  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:07.999050  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:08.003019  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:08.198483  310801 request.go:632] Waited for 194.725493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:08.198570  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:08.198580  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:08.198588  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:08.198592  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:08.202279  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:08.202805  310801 pod_ready.go:93] pod "kube-proxy-9tslx" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:08.202824  310801 pod_ready.go:82] duration metric: took 398.949898ms for pod "kube-proxy-9tslx" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:08.202837  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x2grl" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:08.399003  310801 request.go:632] Waited for 196.061371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2grl
	I1205 20:36:08.399102  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2grl
	I1205 20:36:08.399110  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:08.399126  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:08.399137  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:08.404511  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:36:08.598657  310801 request.go:632] Waited for 193.397123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:08.598817  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:08.598829  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:08.598837  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:08.598850  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:08.602654  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:08.603461  310801 pod_ready.go:93] pod "kube-proxy-x2grl" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:08.603483  310801 pod_ready.go:82] duration metric: took 400.640164ms for pod "kube-proxy-x2grl" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:08.603494  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:08.798579  310801 request.go:632] Waited for 194.963606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539
	I1205 20:36:08.798669  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539
	I1205 20:36:08.798680  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:08.798692  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:08.798704  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:08.802678  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:08.998854  310801 request.go:632] Waited for 195.447294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:08.998947  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:08.998954  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:08.998964  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:08.998970  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.003138  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:09.003792  310801 pod_ready.go:93] pod "kube-scheduler-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:09.003821  310801 pod_ready.go:82] duration metric: took 400.319353ms for pod "kube-scheduler-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:09.003837  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:09.198016  310801 request.go:632] Waited for 194.088845ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m02
	I1205 20:36:09.198132  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m02
	I1205 20:36:09.198145  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.198158  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.198165  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.201958  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:09.398942  310801 request.go:632] Waited for 196.371567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:09.399024  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:09.399033  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.399044  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.399050  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.402750  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:09.403404  310801 pod_ready.go:93] pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:09.403436  310801 pod_ready.go:82] duration metric: took 399.590034ms for pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:09.403451  310801 pod_ready.go:39] duration metric: took 3.201294497s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:36:09.403471  310801 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:36:09.403551  310801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:36:09.418357  310801 api_server.go:72] duration metric: took 21.51878718s to wait for apiserver process to appear ...
	I1205 20:36:09.418390  310801 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:36:09.418420  310801 api_server.go:253] Checking apiserver healthz at https://192.168.39.220:8443/healthz ...
	I1205 20:36:09.425381  310801 api_server.go:279] https://192.168.39.220:8443/healthz returned 200:
	ok
	I1205 20:36:09.425471  310801 round_trippers.go:463] GET https://192.168.39.220:8443/version
	I1205 20:36:09.425479  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.425488  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.425494  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.426343  310801 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1205 20:36:09.426447  310801 api_server.go:141] control plane version: v1.31.2
	I1205 20:36:09.426464  310801 api_server.go:131] duration metric: took 8.067774ms to wait for apiserver health ...
	I1205 20:36:09.426481  310801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:36:09.598951  310801 request.go:632] Waited for 172.364571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:36:09.599024  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:36:09.599030  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.599038  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.599042  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.603442  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:09.609057  310801 system_pods.go:59] 17 kube-system pods found
	I1205 20:36:09.609099  310801 system_pods.go:61] "coredns-7c65d6cfc9-4ln9l" [f86a233b-c3f8-416b-ac76-f18dac2a1a2c] Running
	I1205 20:36:09.609107  310801 system_pods.go:61] "coredns-7c65d6cfc9-6qhhf" [4ffff988-65eb-4585-8ce4-de4df28c6b82] Running
	I1205 20:36:09.609113  310801 system_pods.go:61] "etcd-ha-689539" [f8de63bf-a7cf-431d-bd57-ec91b43c6ce3] Running
	I1205 20:36:09.609121  310801 system_pods.go:61] "etcd-ha-689539-m02" [a0336d41-b57f-414b-aa98-2540bdde7ca0] Running
	I1205 20:36:09.609126  310801 system_pods.go:61] "kindnet-62qw6" [9f0039aa-d5e2-49b9-adb4-ad93c96d22f0] Running
	I1205 20:36:09.609130  310801 system_pods.go:61] "kindnet-b7bf2" [ea96240c-48bf-4f92-b12c-f8e623a59784] Running
	I1205 20:36:09.609136  310801 system_pods.go:61] "kube-apiserver-ha-689539" [ecbcba0b-10ce-4bd6-84f6-8b46c3d99ad6] Running
	I1205 20:36:09.609142  310801 system_pods.go:61] "kube-apiserver-ha-689539-m02" [0c0d9613-c605-4e61-b778-c5aefa5919e9] Running
	I1205 20:36:09.609149  310801 system_pods.go:61] "kube-controller-manager-ha-689539" [859c6551-f504-4093-a730-2ba8f127e3e7] Running
	I1205 20:36:09.609159  310801 system_pods.go:61] "kube-controller-manager-ha-689539-m02" [0b119866-007c-4c4e-abfa-a38405b85cc9] Running
	I1205 20:36:09.609165  310801 system_pods.go:61] "kube-proxy-9tslx" [3d107dc4-2d8c-4e0d-aafc-5229161537df] Running
	I1205 20:36:09.609174  310801 system_pods.go:61] "kube-proxy-x2grl" [20dd0c16-858c-4d07-8305-ffedb52a4ee1] Running
	I1205 20:36:09.609180  310801 system_pods.go:61] "kube-scheduler-ha-689539" [2ba99954-c00c-4fa6-af5d-6d4725fa051a] Running
	I1205 20:36:09.609186  310801 system_pods.go:61] "kube-scheduler-ha-689539-m02" [d1ad2b21-b52c-47dd-ab09-2368ffeb3c7e] Running
	I1205 20:36:09.609192  310801 system_pods.go:61] "kube-vip-ha-689539" [345f79e6-90ea-47f8-9e7f-c461a1143ba0] Running
	I1205 20:36:09.609200  310801 system_pods.go:61] "kube-vip-ha-689539-m02" [265c4a3f-0e44-43fd-bcee-35513e8e2525] Running
	I1205 20:36:09.609207  310801 system_pods.go:61] "storage-provisioner" [e2a03e66-0718-48a3-9658-f70118ce6cae] Running
	I1205 20:36:09.609218  310801 system_pods.go:74] duration metric: took 182.726007ms to wait for pod list to return data ...
	I1205 20:36:09.609232  310801 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:36:09.798716  310801 request.go:632] Waited for 189.385773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:36:09.798784  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:36:09.798789  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.798797  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.798800  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.803434  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:09.803720  310801 default_sa.go:45] found service account: "default"
	I1205 20:36:09.803742  310801 default_sa.go:55] duration metric: took 194.50158ms for default service account to be created ...
	I1205 20:36:09.803755  310801 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:36:09.998902  310801 request.go:632] Waited for 195.036574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:36:09.998984  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:36:09.998992  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.999004  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.999012  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:10.005341  310801 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 20:36:10.009685  310801 system_pods.go:86] 17 kube-system pods found
	I1205 20:36:10.009721  310801 system_pods.go:89] "coredns-7c65d6cfc9-4ln9l" [f86a233b-c3f8-416b-ac76-f18dac2a1a2c] Running
	I1205 20:36:10.009733  310801 system_pods.go:89] "coredns-7c65d6cfc9-6qhhf" [4ffff988-65eb-4585-8ce4-de4df28c6b82] Running
	I1205 20:36:10.009739  310801 system_pods.go:89] "etcd-ha-689539" [f8de63bf-a7cf-431d-bd57-ec91b43c6ce3] Running
	I1205 20:36:10.009745  310801 system_pods.go:89] "etcd-ha-689539-m02" [a0336d41-b57f-414b-aa98-2540bdde7ca0] Running
	I1205 20:36:10.009751  310801 system_pods.go:89] "kindnet-62qw6" [9f0039aa-d5e2-49b9-adb4-ad93c96d22f0] Running
	I1205 20:36:10.009756  310801 system_pods.go:89] "kindnet-b7bf2" [ea96240c-48bf-4f92-b12c-f8e623a59784] Running
	I1205 20:36:10.009760  310801 system_pods.go:89] "kube-apiserver-ha-689539" [ecbcba0b-10ce-4bd6-84f6-8b46c3d99ad6] Running
	I1205 20:36:10.009770  310801 system_pods.go:89] "kube-apiserver-ha-689539-m02" [0c0d9613-c605-4e61-b778-c5aefa5919e9] Running
	I1205 20:36:10.009774  310801 system_pods.go:89] "kube-controller-manager-ha-689539" [859c6551-f504-4093-a730-2ba8f127e3e7] Running
	I1205 20:36:10.009778  310801 system_pods.go:89] "kube-controller-manager-ha-689539-m02" [0b119866-007c-4c4e-abfa-a38405b85cc9] Running
	I1205 20:36:10.009782  310801 system_pods.go:89] "kube-proxy-9tslx" [3d107dc4-2d8c-4e0d-aafc-5229161537df] Running
	I1205 20:36:10.009786  310801 system_pods.go:89] "kube-proxy-x2grl" [20dd0c16-858c-4d07-8305-ffedb52a4ee1] Running
	I1205 20:36:10.009789  310801 system_pods.go:89] "kube-scheduler-ha-689539" [2ba99954-c00c-4fa6-af5d-6d4725fa051a] Running
	I1205 20:36:10.009794  310801 system_pods.go:89] "kube-scheduler-ha-689539-m02" [d1ad2b21-b52c-47dd-ab09-2368ffeb3c7e] Running
	I1205 20:36:10.009797  310801 system_pods.go:89] "kube-vip-ha-689539" [345f79e6-90ea-47f8-9e7f-c461a1143ba0] Running
	I1205 20:36:10.009802  310801 system_pods.go:89] "kube-vip-ha-689539-m02" [265c4a3f-0e44-43fd-bcee-35513e8e2525] Running
	I1205 20:36:10.009805  310801 system_pods.go:89] "storage-provisioner" [e2a03e66-0718-48a3-9658-f70118ce6cae] Running
	I1205 20:36:10.009814  310801 system_pods.go:126] duration metric: took 206.05156ms to wait for k8s-apps to be running ...
	I1205 20:36:10.009825  310801 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:36:10.009874  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:36:10.025329  310801 system_svc.go:56] duration metric: took 15.491147ms WaitForService to wait for kubelet
	I1205 20:36:10.025382  310801 kubeadm.go:582] duration metric: took 22.125819174s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:36:10.025410  310801 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:36:10.199031  310801 request.go:632] Waited for 173.477614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes
	I1205 20:36:10.199134  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes
	I1205 20:36:10.199143  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:10.199154  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:10.199159  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:10.202963  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:10.203807  310801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:36:10.203836  310801 node_conditions.go:123] node cpu capacity is 2
	I1205 20:36:10.203848  310801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:36:10.203851  310801 node_conditions.go:123] node cpu capacity is 2
	I1205 20:36:10.203855  310801 node_conditions.go:105] duration metric: took 178.44033ms to run NodePressure ...
	I1205 20:36:10.203870  310801 start.go:241] waiting for startup goroutines ...
	I1205 20:36:10.203895  310801 start.go:255] writing updated cluster config ...
	I1205 20:36:10.205987  310801 out.go:201] 
	I1205 20:36:10.207492  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:36:10.207614  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:36:10.209270  310801 out.go:177] * Starting "ha-689539-m03" control-plane node in "ha-689539" cluster
	I1205 20:36:10.210621  310801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:36:10.210654  310801 cache.go:56] Caching tarball of preloaded images
	I1205 20:36:10.210766  310801 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:36:10.210778  310801 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:36:10.210880  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:36:10.211060  310801 start.go:360] acquireMachinesLock for ha-689539-m03: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:36:10.211107  310801 start.go:364] duration metric: took 26.599µs to acquireMachinesLock for "ha-689539-m03"
	I1205 20:36:10.211127  310801 start.go:93] Provisioning new machine with config: &{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:36:10.211224  310801 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1205 20:36:10.213644  310801 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 20:36:10.213846  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:36:10.213895  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:36:10.230607  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41335
	I1205 20:36:10.231136  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:36:10.231708  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:36:10.231730  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:36:10.232163  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:36:10.232486  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetMachineName
	I1205 20:36:10.232681  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:10.232898  310801 start.go:159] libmachine.API.Create for "ha-689539" (driver="kvm2")
	I1205 20:36:10.232939  310801 client.go:168] LocalClient.Create starting
	I1205 20:36:10.232979  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem
	I1205 20:36:10.233029  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:36:10.233052  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:36:10.233142  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem
	I1205 20:36:10.233176  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:36:10.233191  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:36:10.233315  310801 main.go:141] libmachine: Running pre-create checks...
	I1205 20:36:10.233332  310801 main.go:141] libmachine: (ha-689539-m03) Calling .PreCreateCheck
	I1205 20:36:10.233549  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetConfigRaw
	I1205 20:36:10.234493  310801 main.go:141] libmachine: Creating machine...
	I1205 20:36:10.234513  310801 main.go:141] libmachine: (ha-689539-m03) Calling .Create
	I1205 20:36:10.234674  310801 main.go:141] libmachine: (ha-689539-m03) Creating KVM machine...
	I1205 20:36:10.236332  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found existing default KVM network
	I1205 20:36:10.236451  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found existing private KVM network mk-ha-689539
	I1205 20:36:10.236656  310801 main.go:141] libmachine: (ha-689539-m03) Setting up store path in /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03 ...
	I1205 20:36:10.236685  310801 main.go:141] libmachine: (ha-689539-m03) Building disk image from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 20:36:10.236729  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:10.236616  311584 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:36:10.236870  310801 main.go:141] libmachine: (ha-689539-m03) Downloading /home/jenkins/minikube-integration/20053-293485/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 20:36:10.551771  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:10.551634  311584 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa...
	I1205 20:36:10.671521  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:10.671352  311584 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/ha-689539-m03.rawdisk...
	I1205 20:36:10.671562  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Writing magic tar header
	I1205 20:36:10.671575  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Writing SSH key tar header
	I1205 20:36:10.671584  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:10.671500  311584 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03 ...
	I1205 20:36:10.671596  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03
	I1205 20:36:10.671680  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03 (perms=drwx------)
	I1205 20:36:10.671707  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines (perms=drwxr-xr-x)
	I1205 20:36:10.671718  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines
	I1205 20:36:10.671731  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:36:10.671740  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485
	I1205 20:36:10.671749  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 20:36:10.671759  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins
	I1205 20:36:10.671770  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home
	I1205 20:36:10.671781  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Skipping /home - not owner
	I1205 20:36:10.671795  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube (perms=drwxr-xr-x)
	I1205 20:36:10.671811  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485 (perms=drwxrwxr-x)
	I1205 20:36:10.671827  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 20:36:10.671837  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 20:36:10.671843  310801 main.go:141] libmachine: (ha-689539-m03) Creating domain...
	I1205 20:36:10.672929  310801 main.go:141] libmachine: (ha-689539-m03) define libvirt domain using xml: 
	I1205 20:36:10.672953  310801 main.go:141] libmachine: (ha-689539-m03) <domain type='kvm'>
	I1205 20:36:10.672970  310801 main.go:141] libmachine: (ha-689539-m03)   <name>ha-689539-m03</name>
	I1205 20:36:10.673070  310801 main.go:141] libmachine: (ha-689539-m03)   <memory unit='MiB'>2200</memory>
	I1205 20:36:10.673100  310801 main.go:141] libmachine: (ha-689539-m03)   <vcpu>2</vcpu>
	I1205 20:36:10.673109  310801 main.go:141] libmachine: (ha-689539-m03)   <features>
	I1205 20:36:10.673135  310801 main.go:141] libmachine: (ha-689539-m03)     <acpi/>
	I1205 20:36:10.673151  310801 main.go:141] libmachine: (ha-689539-m03)     <apic/>
	I1205 20:36:10.673157  310801 main.go:141] libmachine: (ha-689539-m03)     <pae/>
	I1205 20:36:10.673164  310801 main.go:141] libmachine: (ha-689539-m03)     
	I1205 20:36:10.673174  310801 main.go:141] libmachine: (ha-689539-m03)   </features>
	I1205 20:36:10.673181  310801 main.go:141] libmachine: (ha-689539-m03)   <cpu mode='host-passthrough'>
	I1205 20:36:10.673187  310801 main.go:141] libmachine: (ha-689539-m03)   
	I1205 20:36:10.673192  310801 main.go:141] libmachine: (ha-689539-m03)   </cpu>
	I1205 20:36:10.673197  310801 main.go:141] libmachine: (ha-689539-m03)   <os>
	I1205 20:36:10.673201  310801 main.go:141] libmachine: (ha-689539-m03)     <type>hvm</type>
	I1205 20:36:10.673243  310801 main.go:141] libmachine: (ha-689539-m03)     <boot dev='cdrom'/>
	I1205 20:36:10.673298  310801 main.go:141] libmachine: (ha-689539-m03)     <boot dev='hd'/>
	I1205 20:36:10.673335  310801 main.go:141] libmachine: (ha-689539-m03)     <bootmenu enable='no'/>
	I1205 20:36:10.673362  310801 main.go:141] libmachine: (ha-689539-m03)   </os>
	I1205 20:36:10.673384  310801 main.go:141] libmachine: (ha-689539-m03)   <devices>
	I1205 20:36:10.673401  310801 main.go:141] libmachine: (ha-689539-m03)     <disk type='file' device='cdrom'>
	I1205 20:36:10.673424  310801 main.go:141] libmachine: (ha-689539-m03)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/boot2docker.iso'/>
	I1205 20:36:10.673445  310801 main.go:141] libmachine: (ha-689539-m03)       <target dev='hdc' bus='scsi'/>
	I1205 20:36:10.673458  310801 main.go:141] libmachine: (ha-689539-m03)       <readonly/>
	I1205 20:36:10.673469  310801 main.go:141] libmachine: (ha-689539-m03)     </disk>
	I1205 20:36:10.673485  310801 main.go:141] libmachine: (ha-689539-m03)     <disk type='file' device='disk'>
	I1205 20:36:10.673499  310801 main.go:141] libmachine: (ha-689539-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 20:36:10.673516  310801 main.go:141] libmachine: (ha-689539-m03)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/ha-689539-m03.rawdisk'/>
	I1205 20:36:10.673532  310801 main.go:141] libmachine: (ha-689539-m03)       <target dev='hda' bus='virtio'/>
	I1205 20:36:10.673544  310801 main.go:141] libmachine: (ha-689539-m03)     </disk>
	I1205 20:36:10.673556  310801 main.go:141] libmachine: (ha-689539-m03)     <interface type='network'>
	I1205 20:36:10.673569  310801 main.go:141] libmachine: (ha-689539-m03)       <source network='mk-ha-689539'/>
	I1205 20:36:10.673579  310801 main.go:141] libmachine: (ha-689539-m03)       <model type='virtio'/>
	I1205 20:36:10.673592  310801 main.go:141] libmachine: (ha-689539-m03)     </interface>
	I1205 20:36:10.673600  310801 main.go:141] libmachine: (ha-689539-m03)     <interface type='network'>
	I1205 20:36:10.673612  310801 main.go:141] libmachine: (ha-689539-m03)       <source network='default'/>
	I1205 20:36:10.673625  310801 main.go:141] libmachine: (ha-689539-m03)       <model type='virtio'/>
	I1205 20:36:10.673635  310801 main.go:141] libmachine: (ha-689539-m03)     </interface>
	I1205 20:36:10.673648  310801 main.go:141] libmachine: (ha-689539-m03)     <serial type='pty'>
	I1205 20:36:10.673660  310801 main.go:141] libmachine: (ha-689539-m03)       <target port='0'/>
	I1205 20:36:10.673672  310801 main.go:141] libmachine: (ha-689539-m03)     </serial>
	I1205 20:36:10.673682  310801 main.go:141] libmachine: (ha-689539-m03)     <console type='pty'>
	I1205 20:36:10.673695  310801 main.go:141] libmachine: (ha-689539-m03)       <target type='serial' port='0'/>
	I1205 20:36:10.673711  310801 main.go:141] libmachine: (ha-689539-m03)     </console>
	I1205 20:36:10.673724  310801 main.go:141] libmachine: (ha-689539-m03)     <rng model='virtio'>
	I1205 20:36:10.673736  310801 main.go:141] libmachine: (ha-689539-m03)       <backend model='random'>/dev/random</backend>
	I1205 20:36:10.673747  310801 main.go:141] libmachine: (ha-689539-m03)     </rng>
	I1205 20:36:10.673756  310801 main.go:141] libmachine: (ha-689539-m03)     
	I1205 20:36:10.673766  310801 main.go:141] libmachine: (ha-689539-m03)     
	I1205 20:36:10.673776  310801 main.go:141] libmachine: (ha-689539-m03)   </devices>
	I1205 20:36:10.673790  310801 main.go:141] libmachine: (ha-689539-m03) </domain>
	I1205 20:36:10.673800  310801 main.go:141] libmachine: (ha-689539-m03) 
	I1205 20:36:10.681042  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:ee:34:51 in network default
	I1205 20:36:10.681639  310801 main.go:141] libmachine: (ha-689539-m03) Ensuring networks are active...
	I1205 20:36:10.681669  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:10.682561  310801 main.go:141] libmachine: (ha-689539-m03) Ensuring network default is active
	I1205 20:36:10.682898  310801 main.go:141] libmachine: (ha-689539-m03) Ensuring network mk-ha-689539 is active
	I1205 20:36:10.683183  310801 main.go:141] libmachine: (ha-689539-m03) Getting domain xml...
	I1205 20:36:10.684006  310801 main.go:141] libmachine: (ha-689539-m03) Creating domain...
	I1205 20:36:11.968725  310801 main.go:141] libmachine: (ha-689539-m03) Waiting to get IP...
	I1205 20:36:11.969610  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:11.970152  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:11.970185  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:11.970125  311584 retry.go:31] will retry after 234.218675ms: waiting for machine to come up
	I1205 20:36:12.205669  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:12.206261  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:12.206294  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:12.206205  311584 retry.go:31] will retry after 248.695417ms: waiting for machine to come up
	I1205 20:36:12.456801  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:12.457402  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:12.457438  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:12.457352  311584 retry.go:31] will retry after 446.513744ms: waiting for machine to come up
	I1205 20:36:12.906122  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:12.906634  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:12.906661  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:12.906574  311584 retry.go:31] will retry after 535.02916ms: waiting for machine to come up
	I1205 20:36:13.443469  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:13.443918  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:13.443943  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:13.443872  311584 retry.go:31] will retry after 557.418366ms: waiting for machine to come up
	I1205 20:36:14.002733  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:14.003294  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:14.003322  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:14.003249  311584 retry.go:31] will retry after 653.304587ms: waiting for machine to come up
	I1205 20:36:14.658664  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:14.659072  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:14.659104  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:14.659017  311584 retry.go:31] will retry after 755.842871ms: waiting for machine to come up
	I1205 20:36:15.416424  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:15.416833  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:15.416859  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:15.416766  311584 retry.go:31] will retry after 1.249096202s: waiting for machine to come up
	I1205 20:36:16.666996  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:16.667456  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:16.667487  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:16.667406  311584 retry.go:31] will retry after 1.829752255s: waiting for machine to come up
	I1205 20:36:18.499154  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:18.499722  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:18.499754  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:18.499656  311584 retry.go:31] will retry after 2.088301292s: waiting for machine to come up
	I1205 20:36:20.590033  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:20.590599  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:20.590952  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:20.590835  311584 retry.go:31] will retry after 2.856395806s: waiting for machine to come up
	I1205 20:36:23.448567  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:23.449151  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:23.449196  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:23.449071  311584 retry.go:31] will retry after 2.566118647s: waiting for machine to come up
	I1205 20:36:26.016596  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:26.017066  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:26.017103  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:26.017002  311584 retry.go:31] will retry after 3.311993098s: waiting for machine to come up
	I1205 20:36:29.332519  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:29.333028  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:29.333062  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:29.332969  311584 retry.go:31] will retry after 5.069674559s: waiting for machine to come up
	I1205 20:36:34.404036  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.404592  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has current primary IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.404615  310801 main.go:141] libmachine: (ha-689539-m03) Found IP for machine: 192.168.39.133
	I1205 20:36:34.404628  310801 main.go:141] libmachine: (ha-689539-m03) Reserving static IP address...
	I1205 20:36:34.405246  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find host DHCP lease matching {name: "ha-689539-m03", mac: "52:54:00:39:1e:d2", ip: "192.168.39.133"} in network mk-ha-689539
	I1205 20:36:34.488202  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Getting to WaitForSSH function...
	I1205 20:36:34.488243  310801 main.go:141] libmachine: (ha-689539-m03) Reserved static IP address: 192.168.39.133
	I1205 20:36:34.488263  310801 main.go:141] libmachine: (ha-689539-m03) Waiting for SSH to be available...
	I1205 20:36:34.491165  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.491686  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:minikube Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:34.491716  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.491906  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Using SSH client type: external
	I1205 20:36:34.491935  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa (-rw-------)
	I1205 20:36:34.491973  310801 main.go:141] libmachine: (ha-689539-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.133 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:36:34.491988  310801 main.go:141] libmachine: (ha-689539-m03) DBG | About to run SSH command:
	I1205 20:36:34.492018  310801 main.go:141] libmachine: (ha-689539-m03) DBG | exit 0
	I1205 20:36:34.613832  310801 main.go:141] libmachine: (ha-689539-m03) DBG | SSH cmd err, output: <nil>: 
	I1205 20:36:34.614085  310801 main.go:141] libmachine: (ha-689539-m03) KVM machine creation complete!
	I1205 20:36:34.614391  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetConfigRaw
	I1205 20:36:34.614932  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:34.615098  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:34.615251  310801 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 20:36:34.615261  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetState
	I1205 20:36:34.616613  310801 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 20:36:34.616630  310801 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 20:36:34.616635  310801 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 20:36:34.616641  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:34.618898  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.619343  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:34.619376  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.619553  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:34.619760  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.619916  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.620049  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:34.620212  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:34.620459  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:34.620479  310801 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 20:36:34.717073  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:36:34.717099  310801 main.go:141] libmachine: Detecting the provisioner...
	I1205 20:36:34.717108  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:34.720008  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.720375  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:34.720408  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.720627  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:34.720862  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.721027  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.721142  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:34.721315  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:34.721505  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:34.721517  310801 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 20:36:34.822906  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 20:36:34.822984  310801 main.go:141] libmachine: found compatible host: buildroot
	I1205 20:36:34.822991  310801 main.go:141] libmachine: Provisioning with buildroot...
	I1205 20:36:34.823000  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetMachineName
	I1205 20:36:34.823269  310801 buildroot.go:166] provisioning hostname "ha-689539-m03"
	I1205 20:36:34.823307  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetMachineName
	I1205 20:36:34.823547  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:34.826120  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.826479  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:34.826516  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.826688  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:34.826881  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.827029  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.827117  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:34.827324  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:34.827499  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:34.827512  310801 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-689539-m03 && echo "ha-689539-m03" | sudo tee /etc/hostname
	I1205 20:36:34.941581  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-689539-m03
	
	I1205 20:36:34.941620  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:34.944840  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.945236  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:34.945268  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.945576  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:34.945808  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.946090  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.946279  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:34.946488  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:34.946701  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:34.946720  310801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-689539-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-689539-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-689539-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:36:35.058548  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:36:35.058600  310801 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 20:36:35.058628  310801 buildroot.go:174] setting up certificates
	I1205 20:36:35.058647  310801 provision.go:84] configureAuth start
	I1205 20:36:35.058666  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetMachineName
	I1205 20:36:35.059012  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetIP
	I1205 20:36:35.062020  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.062410  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.062436  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.062601  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.064649  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.065013  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.065056  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.065157  310801 provision.go:143] copyHostCerts
	I1205 20:36:35.065216  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:36:35.065250  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 20:36:35.065260  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:36:35.065330  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 20:36:35.065453  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:36:35.065483  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 20:36:35.065487  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:36:35.065514  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 20:36:35.065573  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:36:35.065599  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 20:36:35.065606  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:36:35.065628  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 20:36:35.065689  310801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.ha-689539-m03 san=[127.0.0.1 192.168.39.133 ha-689539-m03 localhost minikube]
	I1205 20:36:35.249027  310801 provision.go:177] copyRemoteCerts
	I1205 20:36:35.249088  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:36:35.249117  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.252102  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.252464  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.252504  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.252651  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.252859  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.253052  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.253206  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa Username:docker}
	I1205 20:36:35.336527  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 20:36:35.336648  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 20:36:35.364926  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 20:36:35.365010  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 20:36:35.389088  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 20:36:35.389182  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:36:35.413330  310801 provision.go:87] duration metric: took 354.660436ms to configureAuth
	I1205 20:36:35.413369  310801 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:36:35.413628  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:36:35.413732  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.416617  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.417048  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.417083  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.417297  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.417511  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.417670  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.417805  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.417979  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:35.418155  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:35.418171  310801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:36:35.630886  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:36:35.630926  310801 main.go:141] libmachine: Checking connection to Docker...
	I1205 20:36:35.630937  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetURL
	I1205 20:36:35.632212  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Using libvirt version 6000000
	I1205 20:36:35.634750  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.635203  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.635240  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.635427  310801 main.go:141] libmachine: Docker is up and running!
	I1205 20:36:35.635448  310801 main.go:141] libmachine: Reticulating splines...
	I1205 20:36:35.635459  310801 client.go:171] duration metric: took 25.402508958s to LocalClient.Create
	I1205 20:36:35.635491  310801 start.go:167] duration metric: took 25.402598488s to libmachine.API.Create "ha-689539"
	I1205 20:36:35.635506  310801 start.go:293] postStartSetup for "ha-689539-m03" (driver="kvm2")
	I1205 20:36:35.635522  310801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:36:35.635550  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:35.635824  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:36:35.635854  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.638327  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.638682  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.638711  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.638841  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.639048  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.639222  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.639398  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa Username:docker}
	I1205 20:36:35.716587  310801 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:36:35.720718  310801 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:36:35.720755  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 20:36:35.720843  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 20:36:35.720950  310801 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 20:36:35.720963  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /etc/ssl/certs/3007652.pem
	I1205 20:36:35.721055  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:36:35.730580  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:36:35.754106  310801 start.go:296] duration metric: took 118.58052ms for postStartSetup
	I1205 20:36:35.754171  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetConfigRaw
	I1205 20:36:35.754838  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetIP
	I1205 20:36:35.757466  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.757836  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.757867  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.758185  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:36:35.758409  310801 start.go:128] duration metric: took 25.547174356s to createHost
	I1205 20:36:35.758437  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.760535  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.760919  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.760950  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.761090  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.761312  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.761499  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.761662  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.761847  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:35.762082  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:35.762095  310801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:36:35.859212  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430995.835523026
	
	I1205 20:36:35.859238  310801 fix.go:216] guest clock: 1733430995.835523026
	I1205 20:36:35.859249  310801 fix.go:229] Guest: 2024-12-05 20:36:35.835523026 +0000 UTC Remote: 2024-12-05 20:36:35.758424054 +0000 UTC m=+147.726301003 (delta=77.098972ms)
	I1205 20:36:35.859274  310801 fix.go:200] guest clock delta is within tolerance: 77.098972ms
	I1205 20:36:35.859282  310801 start.go:83] releasing machines lock for "ha-689539-m03", held for 25.648163663s
	I1205 20:36:35.859307  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:35.859602  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetIP
	I1205 20:36:35.862387  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.862741  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.862765  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.864694  310801 out.go:177] * Found network options:
	I1205 20:36:35.865935  310801 out.go:177]   - NO_PROXY=192.168.39.220,192.168.39.224
	W1205 20:36:35.866955  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 20:36:35.866981  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:36:35.867029  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:35.867701  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:35.867901  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:35.868027  310801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:36:35.868079  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	W1205 20:36:35.868103  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 20:36:35.868132  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:36:35.868211  310801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:36:35.868237  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.870846  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.870889  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.871236  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.871267  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.871290  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.871306  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.871412  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.871420  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.871631  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.871634  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.871849  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.871887  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.872025  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa Username:docker}
	I1205 20:36:35.872048  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa Username:docker}
	I1205 20:36:36.107172  310801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:36:36.113768  310801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:36:36.113852  310801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:36:36.130072  310801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:36:36.130105  310801 start.go:495] detecting cgroup driver to use...
	I1205 20:36:36.130199  310801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:36:36.146210  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:36:36.161285  310801 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:36:36.161367  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:36:36.177064  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:36:36.191545  310801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:36:36.311400  310801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:36:36.466588  310801 docker.go:233] disabling docker service ...
	I1205 20:36:36.466685  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:36:36.482756  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:36:36.496706  310801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:36:36.652172  310801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:36:36.763760  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:36:36.778126  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:36:36.798464  310801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:36:36.798550  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.809701  310801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:36:36.809789  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.821480  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.833057  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.844011  310801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:36:36.855643  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.866916  310801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.884661  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.895900  310801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:36:36.907780  310801 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:36:36.907872  310801 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:36:36.923847  310801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:36:36.935618  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:36:37.050068  310801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:36:37.145134  310801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:36:37.145210  310801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:36:37.149942  310801 start.go:563] Will wait 60s for crictl version
	I1205 20:36:37.150018  310801 ssh_runner.go:195] Run: which crictl
	I1205 20:36:37.153774  310801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:36:37.191365  310801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:36:37.191476  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:36:37.218944  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:36:37.247248  310801 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:36:37.248847  310801 out.go:177]   - env NO_PROXY=192.168.39.220
	I1205 20:36:37.250408  310801 out.go:177]   - env NO_PROXY=192.168.39.220,192.168.39.224
	I1205 20:36:37.251670  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetIP
	I1205 20:36:37.254710  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:37.255219  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:37.255255  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:37.255473  310801 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:36:37.259811  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:36:37.272313  310801 mustload.go:65] Loading cluster: ha-689539
	I1205 20:36:37.272621  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:36:37.272965  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:36:37.273029  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:36:37.288738  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35053
	I1205 20:36:37.289258  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:36:37.289795  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:36:37.289824  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:36:37.290243  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:36:37.290461  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:36:37.292309  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:36:37.292619  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:36:37.292658  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:36:37.308415  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34715
	I1205 20:36:37.308950  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:36:37.309550  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:36:37.309579  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:36:37.309955  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:36:37.310189  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:36:37.310389  310801 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539 for IP: 192.168.39.133
	I1205 20:36:37.310408  310801 certs.go:194] generating shared ca certs ...
	I1205 20:36:37.310434  310801 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:36:37.310698  310801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 20:36:37.310756  310801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 20:36:37.310770  310801 certs.go:256] generating profile certs ...
	I1205 20:36:37.310865  310801 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key
	I1205 20:36:37.310896  310801 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.5ed8c3bf
	I1205 20:36:37.310913  310801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.5ed8c3bf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220 192.168.39.224 192.168.39.133 192.168.39.254]
	I1205 20:36:37.437144  310801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.5ed8c3bf ...
	I1205 20:36:37.437188  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.5ed8c3bf: {Name:mk0c5897cd83a4093b7a3399e7e587e00b7a5bae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:36:37.437391  310801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.5ed8c3bf ...
	I1205 20:36:37.437408  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.5ed8c3bf: {Name:mk1d8d484e615bf29a9b64d40295dea265ce443e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:36:37.437485  310801 certs.go:381] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.5ed8c3bf -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt
	I1205 20:36:37.437626  310801 certs.go:385] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.5ed8c3bf -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key
	I1205 20:36:37.437756  310801 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key
	I1205 20:36:37.437772  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:36:37.437788  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:36:37.437801  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:36:37.437813  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:36:37.437826  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 20:36:37.437841  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 20:36:37.437853  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 20:36:37.437864  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 20:36:37.437944  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 20:36:37.437979  310801 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 20:36:37.437990  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:36:37.438014  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 20:36:37.438035  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:36:37.438056  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 20:36:37.438094  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:36:37.438120  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /usr/share/ca-certificates/3007652.pem
	I1205 20:36:37.438137  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:36:37.438154  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem -> /usr/share/ca-certificates/300765.pem
	I1205 20:36:37.438200  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:36:37.441695  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:36:37.442183  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:36:37.442215  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:36:37.442405  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:36:37.442622  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:36:37.442798  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:36:37.443004  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:36:37.518292  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1205 20:36:37.523367  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1205 20:36:37.534644  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1205 20:36:37.538903  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1205 20:36:37.550288  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1205 20:36:37.554639  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1205 20:36:37.564857  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1205 20:36:37.569390  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1205 20:36:37.579805  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1205 20:36:37.583826  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1205 20:36:37.594623  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1205 20:36:37.598518  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1205 20:36:37.609622  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:36:37.635232  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:36:37.659198  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:36:37.684613  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:36:37.709156  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1205 20:36:37.734432  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:36:37.759134  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:36:37.782683  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:36:37.806069  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 20:36:37.829365  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:36:37.854671  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 20:36:37.877683  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1205 20:36:37.895648  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1205 20:36:37.911843  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1205 20:36:37.928819  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1205 20:36:37.945608  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1205 20:36:37.961295  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1205 20:36:37.977148  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1205 20:36:37.993888  310801 ssh_runner.go:195] Run: openssl version
	I1205 20:36:37.999493  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 20:36:38.010566  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 20:36:38.014911  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 20:36:38.014995  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 20:36:38.021306  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:36:38.033265  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:36:38.045021  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:36:38.049577  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:36:38.049655  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:36:38.055689  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:36:38.066840  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 20:36:38.077747  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 20:36:38.082720  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 20:36:38.082788  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 20:36:38.088581  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 20:36:38.099228  310801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:36:38.103604  310801 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 20:36:38.103672  310801 kubeadm.go:934] updating node {m03 192.168.39.133 8443 v1.31.2 crio true true} ...
	I1205 20:36:38.103798  310801 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-689539-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.133
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
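The drop-in rendered above pins the kubelet on the new machine to the node name ha-689539-m03 and node IP 192.168.39.133; the unit file and the 10-kubeadm.conf drop-in are written to the node a few steps later. On the node, the effective flags can be checked with standard systemd tooling (a sketch):

	systemctl cat kubelet
	grep hostname-override /etc/systemd/system/kubelet.service.d/10-kubeadm.conf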
	I1205 20:36:38.103838  310801 kube-vip.go:115] generating kube-vip config ...
	I1205 20:36:38.103889  310801 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 20:36:38.119642  310801 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 20:36:38.119740  310801 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
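The manifest above is the kube-vip static pod for the third control plane: it runs with NET_ADMIN/NET_RAW, takes part in leader election on the plndr-cp-lock lease in kube-system, and advertises the HA virtual IP 192.168.39.254 on eth0 with control-plane load-balancing on port 8443. Once the node is up, a quick way to confirm where the VIP landed (a sketch; run on a control-plane node with the admin kubeconfig):

	# the current leader should carry the VIP on eth0
	ip addr show dev eth0 | grep 192.168.39.254
	# the lease object records which node holds leadership
	kubectl -n kube-system get lease plndr-cp-lock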
	I1205 20:36:38.119812  310801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:36:38.130177  310801 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1205 20:36:38.130245  310801 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1205 20:36:38.140746  310801 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1205 20:36:38.140746  310801 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1205 20:36:38.140783  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 20:36:38.140794  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 20:36:38.140777  310801 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1205 20:36:38.140857  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 20:36:38.140859  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 20:36:38.140888  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:36:38.158074  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 20:36:38.158135  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1205 20:36:38.158086  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1205 20:36:38.158177  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1205 20:36:38.158206  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1205 20:36:38.158247  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 20:36:38.186188  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1205 20:36:38.186252  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
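Because /var/lib/minikube/binaries/v1.31.2 was empty, kubeadm, kubectl and kubelet are copied in from the local cache rather than downloaded on the node. When the cache itself is cold, minikube fetches them from dl.k8s.io with the published SHA-256 checksum, as the URLs above indicate; a hedged, stand-alone equivalent for one binary looks like this:

	# download kubeadm and its checksum, verify, then install (sketch)
	curl -fLo kubeadm https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm
	curl -fLo kubeadm.sha256 https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	echo "$(cat kubeadm.sha256)  kubeadm" | sha256sum --check
	sudo install -m 0755 kubeadm /var/lib/minikube/binaries/v1.31.2/kubeadm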
	I1205 20:36:39.060124  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1205 20:36:39.071107  310801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1205 20:36:39.088307  310801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:36:39.105414  310801 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 20:36:39.123515  310801 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 20:36:39.128382  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
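The grep/rewrite pair above pins control-plane.minikube.internal to the HA virtual IP 192.168.39.254 in /etc/hosts, so the join below can target the VIP rather than a single API server. The result can be verified on the node with:

	getent hosts control-plane.minikube.internal    # expect: 192.168.39.254 control-plane.minikube.internal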
	I1205 20:36:39.141817  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:36:39.272056  310801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:36:39.288864  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:36:39.289220  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:36:39.289280  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:36:39.306323  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I1205 20:36:39.306810  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:36:39.307385  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:36:39.307405  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:36:39.307730  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:36:39.308000  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:36:39.308176  310801 start.go:317] joinCluster: &{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:36:39.308320  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1205 20:36:39.308347  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:36:39.311767  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:36:39.312246  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:36:39.312274  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:36:39.312449  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:36:39.312636  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:36:39.312767  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:36:39.312941  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:36:39.465515  310801 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:36:39.465587  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1ecy7b.k9yq24j2shqxopt1 --discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-689539-m03 --control-plane --apiserver-advertise-address=192.168.39.133 --apiserver-bind-port=8443"
	I1205 20:37:01.441014  310801 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1ecy7b.k9yq24j2shqxopt1 --discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-689539-m03 --control-plane --apiserver-advertise-address=192.168.39.133 --apiserver-bind-port=8443": (21.975379722s)
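The join authenticates in both directions: the bootstrap token proves the new node to the cluster, while --discovery-token-ca-cert-hash lets the node verify it is talking to the right cluster CA before trusting it. That hash is the SHA-256 of the CA's public key (SPKI); assuming the standard kubeadm CA location on an existing control-plane node, it can be recomputed with the usual recipe:

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'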
	I1205 20:37:01.441134  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1205 20:37:02.017063  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-689539-m03 minikube.k8s.io/updated_at=2024_12_05T20_37_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=ha-689539 minikube.k8s.io/primary=false
	I1205 20:37:02.122818  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-689539-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1205 20:37:02.233408  310801 start.go:319] duration metric: took 22.92521337s to joinCluster
	I1205 20:37:02.233514  310801 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:37:02.233929  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:37:02.235271  310801 out.go:177] * Verifying Kubernetes components...
	I1205 20:37:02.236630  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:37:02.508423  310801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:37:02.527064  310801 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:37:02.527473  310801 kapi.go:59] client config for ha-689539: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt", KeyFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key", CAFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1205 20:37:02.527594  310801 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.220:8443
	I1205 20:37:02.527913  310801 node_ready.go:35] waiting up to 6m0s for node "ha-689539-m03" to be "Ready" ...
	I1205 20:37:02.528026  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:02.528040  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:02.528051  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:02.528056  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:02.557537  310801 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I1205 20:37:03.028186  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:03.028214  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:03.028223  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:03.028228  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:03.031783  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:03.528844  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:03.528876  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:03.528889  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:03.528897  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:03.532449  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:04.028344  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:04.028374  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:04.028385  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:04.028391  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:04.031602  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:04.528319  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:04.528352  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:04.528375  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:04.528382  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:04.532891  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:04.534060  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:05.028293  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:05.028328  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:05.028339  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:05.028344  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:05.032338  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:05.529271  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:05.529311  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:05.529323  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:05.529330  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:05.533411  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:06.028510  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:06.028536  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:06.028545  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:06.028550  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:06.032362  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:06.529188  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:06.529215  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:06.529224  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:06.529229  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:06.533150  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:07.029082  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:07.029108  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:07.029117  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:07.029120  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:07.033089  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:07.033768  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:07.528440  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:07.528471  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:07.528481  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:07.528485  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:07.531953  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:08.028337  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:08.028382  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:08.028395  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:08.028399  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:08.031906  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:08.528836  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:08.528864  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:08.528876  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:08.528881  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:08.532443  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:09.028243  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:09.028270  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:09.028278  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:09.028286  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:09.031717  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:09.528911  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:09.528939  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:09.528948  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:09.528953  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:09.532309  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:09.532990  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:10.028349  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:10.028377  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:10.028386  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:10.028390  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:10.031930  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:10.528611  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:10.528635  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:10.528645  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:10.528650  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:10.532023  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:11.028888  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:11.028914  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:11.028923  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:11.028928  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:11.032482  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:11.528496  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:11.528521  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:11.528530  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:11.528534  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:11.532719  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:11.533217  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:12.028518  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:12.028550  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:12.028559  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:12.028562  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:12.031616  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:12.528837  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:12.528864  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:12.528873  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:12.528876  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:12.532925  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:13.028348  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:13.028374  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:13.028382  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:13.028385  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:13.031413  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:13.528247  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:13.528272  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:13.528282  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:13.528289  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:13.531837  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:14.028958  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:14.028983  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:14.028991  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:14.028994  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:14.032387  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:14.032980  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:14.528243  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:14.528268  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:14.528276  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:14.528281  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:14.533135  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:15.029156  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:15.029181  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:15.029190  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:15.029194  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:15.032772  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:15.528703  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:15.528727  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:15.528736  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:15.528740  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:15.532084  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:16.029136  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:16.029163  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:16.029172  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:16.029177  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:16.032419  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:16.033160  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:16.528509  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:16.528535  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:16.528546  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:16.528553  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:16.532163  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:17.028228  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:17.028256  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:17.028265  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:17.028270  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:17.031611  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:17.528262  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:17.528285  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:17.528294  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:17.528298  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:17.532186  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:18.028484  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:18.028590  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:18.028610  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:18.028619  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:18.032661  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:18.033298  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:18.528576  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:18.528603  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:18.528612  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:18.528622  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:18.531605  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.028544  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:19.028570  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.028579  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.028583  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.031945  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:19.528716  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:19.528741  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.528752  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.528758  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.532114  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:19.532722  310801 node_ready.go:49] node "ha-689539-m03" has status "Ready":"True"
	I1205 20:37:19.532746  310801 node_ready.go:38] duration metric: took 17.004806597s for node "ha-689539-m03" to be "Ready" ...
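The polling loop above is minikube's built-in readiness wait: it re-requests /api/v1/nodes/ha-689539-m03 roughly every half second until the Ready condition flips to True, which took about 17 seconds here. An equivalent one-liner with kubectl, for reference (a sketch, assuming the same kubeconfig that minikube loaded above):

	kubectl --kubeconfig /home/jenkins/minikube-integration/20053-293485/kubeconfig wait --for=condition=Ready node/ha-689539-m03 --timeout=6m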
	I1205 20:37:19.532759  310801 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:37:19.532848  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:37:19.532862  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.532873  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.532877  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.538433  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:37:19.545193  310801 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.545310  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4ln9l
	I1205 20:37:19.545322  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.545335  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.545343  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.548548  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:19.549181  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:19.549197  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.549208  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.549214  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.551745  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.552315  310801 pod_ready.go:93] pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:19.552336  310801 pod_ready.go:82] duration metric: took 7.114081ms for pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.552347  310801 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.552426  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6qhhf
	I1205 20:37:19.552436  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.552443  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.552449  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.555044  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.555688  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:19.555703  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.555714  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.555719  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.558507  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.558964  310801 pod_ready.go:93] pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:19.558984  310801 pod_ready.go:82] duration metric: took 6.630508ms for pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.558996  310801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.559064  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539
	I1205 20:37:19.559075  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.559086  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.559093  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.561702  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.562346  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:19.562362  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.562373  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.562379  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.564859  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.565270  310801 pod_ready.go:93] pod "etcd-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:19.565289  310801 pod_ready.go:82] duration metric: took 6.285995ms for pod "etcd-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.565301  310801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.565364  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539-m02
	I1205 20:37:19.565376  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.565386  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.565394  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.567843  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.568351  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:19.568369  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.568381  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.568386  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.570730  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.571216  310801 pod_ready.go:93] pod "etcd-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:19.571233  310801 pod_ready.go:82] duration metric: took 5.925226ms for pod "etcd-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.571242  310801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.729689  310801 request.go:632] Waited for 158.375356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539-m03
	I1205 20:37:19.729775  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539-m03
	I1205 20:37:19.729781  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.729791  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.729798  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.733549  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:19.929796  310801 request.go:632] Waited for 195.378991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:19.929883  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:19.929889  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.929915  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.929920  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.933398  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:19.934088  310801 pod_ready.go:93] pod "etcd-ha-689539-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:19.934113  310801 pod_ready.go:82] duration metric: took 362.864968ms for pod "etcd-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.934133  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:20.129093  310801 request.go:632] Waited for 194.866664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539
	I1205 20:37:20.129174  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539
	I1205 20:37:20.129180  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:20.129188  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:20.129192  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:20.132632  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:20.329356  310801 request.go:632] Waited for 195.935231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:20.329441  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:20.329451  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:20.329463  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:20.329476  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:20.333292  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:20.333939  310801 pod_ready.go:93] pod "kube-apiserver-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:20.333972  310801 pod_ready.go:82] duration metric: took 399.826342ms for pod "kube-apiserver-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:20.333988  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:20.529058  310801 request.go:632] Waited for 194.978446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m02
	I1205 20:37:20.529147  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m02
	I1205 20:37:20.529166  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:20.529197  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:20.529204  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:20.532832  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:20.729074  310801 request.go:632] Waited for 195.37241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:20.729139  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:20.729144  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:20.729153  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:20.729156  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:20.733037  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:20.733831  310801 pod_ready.go:93] pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:20.733861  310801 pod_ready.go:82] duration metric: took 399.862982ms for pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:20.733880  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:20.928790  310801 request.go:632] Waited for 194.758856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m03
	I1205 20:37:20.928868  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m03
	I1205 20:37:20.928876  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:20.928884  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:20.928894  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:20.931768  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:21.128920  310801 request.go:632] Waited for 196.30741ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:21.129013  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:21.129018  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:21.129026  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:21.129030  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:21.132989  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:21.133733  310801 pod_ready.go:93] pod "kube-apiserver-ha-689539-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:21.133764  310801 pod_ready.go:82] duration metric: took 399.87672ms for pod "kube-apiserver-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:21.133777  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:21.329719  310801 request.go:632] Waited for 195.840899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539
	I1205 20:37:21.329822  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539
	I1205 20:37:21.329829  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:21.329840  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:21.329848  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:21.335472  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:37:21.529593  310801 request.go:632] Waited for 193.3652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:21.529688  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:21.529700  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:21.529710  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:21.529721  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:21.533118  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:21.533743  310801 pod_ready.go:93] pod "kube-controller-manager-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:21.533773  310801 pod_ready.go:82] duration metric: took 399.989891ms for pod "kube-controller-manager-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:21.533788  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:21.729770  310801 request.go:632] Waited for 195.887392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m02
	I1205 20:37:21.729855  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m02
	I1205 20:37:21.729863  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:21.729871  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:21.729877  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:21.733541  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:21.929705  310801 request.go:632] Waited for 195.397002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:21.929774  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:21.929779  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:21.929787  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:21.929792  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:21.933945  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:21.935117  310801 pod_ready.go:93] pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:21.935147  310801 pod_ready.go:82] duration metric: took 401.346008ms for pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:21.935163  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:22.129158  310801 request.go:632] Waited for 193.90126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m03
	I1205 20:37:22.129263  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m03
	I1205 20:37:22.129281  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:22.129291  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:22.129295  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:22.132774  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:22.329309  310801 request.go:632] Waited for 195.820597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:22.329371  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:22.329397  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:22.329412  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:22.329417  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:22.332841  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:22.336218  310801 pod_ready.go:93] pod "kube-controller-manager-ha-689539-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:22.336243  310801 pod_ready.go:82] duration metric: took 401.071031ms for pod "kube-controller-manager-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:22.336259  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9tslx" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:22.528770  310801 request.go:632] Waited for 192.411741ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tslx
	I1205 20:37:22.528833  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tslx
	I1205 20:37:22.528838  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:22.528846  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:22.528850  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:22.531900  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:22.729073  310801 request.go:632] Waited for 196.313572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:22.729186  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:22.729196  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:22.729206  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:22.729212  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:22.732421  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:22.733074  310801 pod_ready.go:93] pod "kube-proxy-9tslx" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:22.733099  310801 pod_ready.go:82] duration metric: took 396.833211ms for pod "kube-proxy-9tslx" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:22.733111  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dktwc" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:22.929342  310801 request.go:632] Waited for 196.122694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dktwc
	I1205 20:37:22.929410  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dktwc
	I1205 20:37:22.929416  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:22.929425  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:22.929430  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:22.932878  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:23.129758  310801 request.go:632] Waited for 196.113609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:23.129841  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:23.129849  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:23.129861  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:23.129874  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:23.133246  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:23.133786  310801 pod_ready.go:93] pod "kube-proxy-dktwc" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:23.133805  310801 pod_ready.go:82] duration metric: took 400.688784ms for pod "kube-proxy-dktwc" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:23.133815  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x2grl" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:23.329685  310801 request.go:632] Waited for 195.763713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2grl
	I1205 20:37:23.329769  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2grl
	I1205 20:37:23.329779  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:23.329788  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:23.329795  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:23.333599  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:23.528890  310801 request.go:632] Waited for 194.302329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:23.528951  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:23.528955  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:23.528966  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:23.528973  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:23.533840  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:23.534667  310801 pod_ready.go:93] pod "kube-proxy-x2grl" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:23.534691  310801 pod_ready.go:82] duration metric: took 400.868432ms for pod "kube-proxy-x2grl" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:23.534705  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:23.728815  310801 request.go:632] Waited for 194.018306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539
	I1205 20:37:23.728883  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539
	I1205 20:37:23.728888  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:23.728896  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:23.728900  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:23.732452  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:23.929580  310801 request.go:632] Waited for 196.394135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:23.929653  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:23.929659  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:23.929667  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:23.929672  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:23.933364  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:23.934147  310801 pod_ready.go:93] pod "kube-scheduler-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:23.934174  310801 pod_ready.go:82] duration metric: took 399.459723ms for pod "kube-scheduler-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:23.934191  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:24.129685  310801 request.go:632] Waited for 195.380858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m02
	I1205 20:37:24.129776  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m02
	I1205 20:37:24.129789  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.129800  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.129811  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.133305  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:24.329438  310801 request.go:632] Waited for 195.320628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:24.329517  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:24.329525  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.329544  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.329550  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.333177  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:24.333763  310801 pod_ready.go:93] pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:24.333790  310801 pod_ready.go:82] duration metric: took 399.589908ms for pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:24.333806  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:24.528866  310801 request.go:632] Waited for 194.951078ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m03
	I1205 20:37:24.528969  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m03
	I1205 20:37:24.528982  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.528997  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.529004  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.532632  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:24.729734  310801 request.go:632] Waited for 196.398947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:24.729824  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:24.729835  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.729847  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.729855  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.733450  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:24.734057  310801 pod_ready.go:93] pod "kube-scheduler-ha-689539-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:24.734085  310801 pod_ready.go:82] duration metric: took 400.271075ms for pod "kube-scheduler-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:24.734104  310801 pod_ready.go:39] duration metric: took 5.201330389s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
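	(The pod_ready.go entries above poll each control-plane pod until its Ready condition reports True. A minimal client-go sketch of that kind of check - not minikube's own pod_ready.go; the kubeconfig path and the pod-name argument are assumptions for illustration only:

	// podready_sketch.go: fetch one pod from kube-system and report whether its
	// Ready condition is True, roughly what the pod_ready.go log lines record.
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: kubeconfig at the default path; pod name as the first argument.
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), os.Args[1], metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("pod %q Ready=%v\n", pod.Name, ready)
	}
	)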
	I1205 20:37:24.734128  310801 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:37:24.734202  310801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:37:24.752010  310801 api_server.go:72] duration metric: took 22.518451158s to wait for apiserver process to appear ...
	I1205 20:37:24.752054  310801 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:37:24.752086  310801 api_server.go:253] Checking apiserver healthz at https://192.168.39.220:8443/healthz ...
	I1205 20:37:24.756435  310801 api_server.go:279] https://192.168.39.220:8443/healthz returned 200:
	ok
	I1205 20:37:24.756538  310801 round_trippers.go:463] GET https://192.168.39.220:8443/version
	I1205 20:37:24.756551  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.756561  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.756569  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.757464  310801 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1205 20:37:24.757533  310801 api_server.go:141] control plane version: v1.31.2
	I1205 20:37:24.757548  310801 api_server.go:131] duration metric: took 5.486922ms to wait for apiserver health ...
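	(The api_server.go entries above first wait for the kube-apiserver process and then probe https://192.168.39.220:8443/healthz until it answers 200 "ok". A minimal sketch of such a health probe in Go - assumption: TLS verification is skipped here for brevity, whereas the real client authenticates with the cluster's certificates:

	// healthz_sketch.go: poll the apiserver /healthz endpoint until it returns 200.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // illustration only
		}
		url := "https://192.168.39.220:8443/healthz"
		for i := 0; i < 30; i++ {
			resp, err := client.Get(url)
			if err == nil && resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				fmt.Println("apiserver healthy")
				return
			}
			if resp != nil {
				resp.Body.Close()
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("apiserver did not become healthy in time")
	}
	)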
	I1205 20:37:24.757559  310801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:37:24.928965  310801 request.go:632] Waited for 171.296323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:37:24.929035  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:37:24.929040  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.929049  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.929054  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.935151  310801 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 20:37:24.941691  310801 system_pods.go:59] 24 kube-system pods found
	I1205 20:37:24.941733  310801 system_pods.go:61] "coredns-7c65d6cfc9-4ln9l" [f86a233b-c3f8-416b-ac76-f18dac2a1a2c] Running
	I1205 20:37:24.941739  310801 system_pods.go:61] "coredns-7c65d6cfc9-6qhhf" [4ffff988-65eb-4585-8ce4-de4df28c6b82] Running
	I1205 20:37:24.941742  310801 system_pods.go:61] "etcd-ha-689539" [f8de63bf-a7cf-431d-bd57-ec91b43c6ce3] Running
	I1205 20:37:24.941746  310801 system_pods.go:61] "etcd-ha-689539-m02" [a0336d41-b57f-414b-aa98-2540bdde7ca0] Running
	I1205 20:37:24.941752  310801 system_pods.go:61] "etcd-ha-689539-m03" [5f491cae-394b-445a-9c1a-f4c144debab9] Running
	I1205 20:37:24.941756  310801 system_pods.go:61] "kindnet-62qw6" [9f0039aa-d5e2-49b9-adb4-ad93c96d22f0] Running
	I1205 20:37:24.941759  310801 system_pods.go:61] "kindnet-8kgs2" [d268fa7f-9d0f-400e-88ff-4acc47d4b6a0] Running
	I1205 20:37:24.941763  310801 system_pods.go:61] "kindnet-b7bf2" [ea96240c-48bf-4f92-b12c-f8e623a59784] Running
	I1205 20:37:24.941766  310801 system_pods.go:61] "kube-apiserver-ha-689539" [ecbcba0b-10ce-4bd6-84f6-8b46c3d99ad6] Running
	I1205 20:37:24.941770  310801 system_pods.go:61] "kube-apiserver-ha-689539-m02" [0c0d9613-c605-4e61-b778-c5aefa5919e9] Running
	I1205 20:37:24.941815  310801 system_pods.go:61] "kube-apiserver-ha-689539-m03" [35037a19-9a1e-4ccb-aeb6-bd098910d94d] Running
	I1205 20:37:24.941833  310801 system_pods.go:61] "kube-controller-manager-ha-689539" [859c6551-f504-4093-a730-2ba8f127e3e7] Running
	I1205 20:37:24.941841  310801 system_pods.go:61] "kube-controller-manager-ha-689539-m02" [0b119866-007c-4c4e-abfa-a38405b85cc9] Running
	I1205 20:37:24.941847  310801 system_pods.go:61] "kube-controller-manager-ha-689539-m03" [cc37de8a-b988-43a4-9dbe-18dd127bc38b] Running
	I1205 20:37:24.941854  310801 system_pods.go:61] "kube-proxy-9tslx" [3d107dc4-2d8c-4e0d-aafc-5229161537df] Running
	I1205 20:37:24.941860  310801 system_pods.go:61] "kube-proxy-dktwc" [5facc855-07f1-46f3-9862-a8c6ac01897c] Running
	I1205 20:37:24.941869  310801 system_pods.go:61] "kube-proxy-x2grl" [20dd0c16-858c-4d07-8305-ffedb52a4ee1] Running
	I1205 20:37:24.941875  310801 system_pods.go:61] "kube-scheduler-ha-689539" [2ba99954-c00c-4fa6-af5d-6d4725fa051a] Running
	I1205 20:37:24.941883  310801 system_pods.go:61] "kube-scheduler-ha-689539-m02" [d1ad2b21-b52c-47dd-ab09-2368ffeb3c7e] Running
	I1205 20:37:24.941889  310801 system_pods.go:61] "kube-scheduler-ha-689539-m03" [fc913aa4-561d-4466-b7c3-acd3d23ffa1a] Running
	I1205 20:37:24.941915  310801 system_pods.go:61] "kube-vip-ha-689539" [345f79e6-90ea-47f8-9e7f-c461a1143ba0] Running
	I1205 20:37:24.941922  310801 system_pods.go:61] "kube-vip-ha-689539-m02" [265c4a3f-0e44-43fd-bcee-35513e8e2525] Running
	I1205 20:37:24.941930  310801 system_pods.go:61] "kube-vip-ha-689539-m03" [c37018e8-e3e3-4c9e-aa57-64571b08be92] Running
	I1205 20:37:24.941939  310801 system_pods.go:61] "storage-provisioner" [e2a03e66-0718-48a3-9658-f70118ce6cae] Running
	I1205 20:37:24.941947  310801 system_pods.go:74] duration metric: took 184.37937ms to wait for pod list to return data ...
	I1205 20:37:24.941962  310801 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:37:25.129425  310801 request.go:632] Waited for 187.3488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:37:25.129501  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:37:25.129507  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:25.129515  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:25.129519  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:25.133730  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:25.133919  310801 default_sa.go:45] found service account: "default"
	I1205 20:37:25.133941  310801 default_sa.go:55] duration metric: took 191.967731ms for default service account to be created ...
	I1205 20:37:25.133958  310801 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:37:25.329286  310801 request.go:632] Waited for 195.223367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:37:25.329372  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:37:25.329380  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:25.329392  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:25.329406  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:25.335635  310801 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 20:37:25.341932  310801 system_pods.go:86] 24 kube-system pods found
	I1205 20:37:25.341974  310801 system_pods.go:89] "coredns-7c65d6cfc9-4ln9l" [f86a233b-c3f8-416b-ac76-f18dac2a1a2c] Running
	I1205 20:37:25.341980  310801 system_pods.go:89] "coredns-7c65d6cfc9-6qhhf" [4ffff988-65eb-4585-8ce4-de4df28c6b82] Running
	I1205 20:37:25.341986  310801 system_pods.go:89] "etcd-ha-689539" [f8de63bf-a7cf-431d-bd57-ec91b43c6ce3] Running
	I1205 20:37:25.341990  310801 system_pods.go:89] "etcd-ha-689539-m02" [a0336d41-b57f-414b-aa98-2540bdde7ca0] Running
	I1205 20:37:25.341993  310801 system_pods.go:89] "etcd-ha-689539-m03" [5f491cae-394b-445a-9c1a-f4c144debab9] Running
	I1205 20:37:25.341996  310801 system_pods.go:89] "kindnet-62qw6" [9f0039aa-d5e2-49b9-adb4-ad93c96d22f0] Running
	I1205 20:37:25.342000  310801 system_pods.go:89] "kindnet-8kgs2" [d268fa7f-9d0f-400e-88ff-4acc47d4b6a0] Running
	I1205 20:37:25.342003  310801 system_pods.go:89] "kindnet-b7bf2" [ea96240c-48bf-4f92-b12c-f8e623a59784] Running
	I1205 20:37:25.342008  310801 system_pods.go:89] "kube-apiserver-ha-689539" [ecbcba0b-10ce-4bd6-84f6-8b46c3d99ad6] Running
	I1205 20:37:25.342011  310801 system_pods.go:89] "kube-apiserver-ha-689539-m02" [0c0d9613-c605-4e61-b778-c5aefa5919e9] Running
	I1205 20:37:25.342015  310801 system_pods.go:89] "kube-apiserver-ha-689539-m03" [35037a19-9a1e-4ccb-aeb6-bd098910d94d] Running
	I1205 20:37:25.342018  310801 system_pods.go:89] "kube-controller-manager-ha-689539" [859c6551-f504-4093-a730-2ba8f127e3e7] Running
	I1205 20:37:25.342022  310801 system_pods.go:89] "kube-controller-manager-ha-689539-m02" [0b119866-007c-4c4e-abfa-a38405b85cc9] Running
	I1205 20:37:25.342025  310801 system_pods.go:89] "kube-controller-manager-ha-689539-m03" [cc37de8a-b988-43a4-9dbe-18dd127bc38b] Running
	I1205 20:37:25.342029  310801 system_pods.go:89] "kube-proxy-9tslx" [3d107dc4-2d8c-4e0d-aafc-5229161537df] Running
	I1205 20:37:25.342035  310801 system_pods.go:89] "kube-proxy-dktwc" [5facc855-07f1-46f3-9862-a8c6ac01897c] Running
	I1205 20:37:25.342039  310801 system_pods.go:89] "kube-proxy-x2grl" [20dd0c16-858c-4d07-8305-ffedb52a4ee1] Running
	I1205 20:37:25.342043  310801 system_pods.go:89] "kube-scheduler-ha-689539" [2ba99954-c00c-4fa6-af5d-6d4725fa051a] Running
	I1205 20:37:25.342047  310801 system_pods.go:89] "kube-scheduler-ha-689539-m02" [d1ad2b21-b52c-47dd-ab09-2368ffeb3c7e] Running
	I1205 20:37:25.342053  310801 system_pods.go:89] "kube-scheduler-ha-689539-m03" [fc913aa4-561d-4466-b7c3-acd3d23ffa1a] Running
	I1205 20:37:25.342056  310801 system_pods.go:89] "kube-vip-ha-689539" [345f79e6-90ea-47f8-9e7f-c461a1143ba0] Running
	I1205 20:37:25.342059  310801 system_pods.go:89] "kube-vip-ha-689539-m02" [265c4a3f-0e44-43fd-bcee-35513e8e2525] Running
	I1205 20:37:25.342063  310801 system_pods.go:89] "kube-vip-ha-689539-m03" [c37018e8-e3e3-4c9e-aa57-64571b08be92] Running
	I1205 20:37:25.342067  310801 system_pods.go:89] "storage-provisioner" [e2a03e66-0718-48a3-9658-f70118ce6cae] Running
	I1205 20:37:25.342077  310801 system_pods.go:126] duration metric: took 208.11212ms to wait for k8s-apps to be running ...
	I1205 20:37:25.342087  310801 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:37:25.342141  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:37:25.359925  310801 system_svc.go:56] duration metric: took 17.820163ms WaitForService to wait for kubelet
	I1205 20:37:25.359969  310801 kubeadm.go:582] duration metric: took 23.126420152s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:37:25.359998  310801 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:37:25.529464  310801 request.go:632] Waited for 169.34708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes
	I1205 20:37:25.529531  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes
	I1205 20:37:25.529543  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:25.529553  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:25.529558  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:25.534297  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:25.535249  310801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:37:25.535281  310801 node_conditions.go:123] node cpu capacity is 2
	I1205 20:37:25.535294  310801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:37:25.535298  310801 node_conditions.go:123] node cpu capacity is 2
	I1205 20:37:25.535302  310801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:37:25.535306  310801 node_conditions.go:123] node cpu capacity is 2
	I1205 20:37:25.535318  310801 node_conditions.go:105] duration metric: took 175.313275ms to run NodePressure ...
	I1205 20:37:25.535339  310801 start.go:241] waiting for startup goroutines ...
	I1205 20:37:25.535367  310801 start.go:255] writing updated cluster config ...
	I1205 20:37:25.535725  310801 ssh_runner.go:195] Run: rm -f paused
	I1205 20:37:25.590118  310801 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:37:25.592310  310801 out.go:177] * Done! kubectl is now configured to use "ha-689539" cluster and "default" namespace by default
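	(The repeated "Waited for ... due to client-side throttling, not priority and fairness" entries above come from client-go's default client-side rate limiter (QPS 5, Burst 10), not from the apiserver. A minimal sketch of raising those limits on a rest.Config before building a clientset - the kubeconfig path is an assumption:

	// throttle_sketch.go: relax client-go's client-side rate limit before issuing requests.
	package main

	import (
		"context"
		"fmt"
		"os"
		"path/filepath"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		kubeconfig := filepath.Join(os.Getenv("HOME"), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		cfg.QPS = 50    // default is 5 requests per second
		cfg.Burst = 100 // default is 10
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("listed %d nodes without client-side throttling delays\n", len(nodes.Items))
	}
	)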
	
	
	==> CRI-O <==
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.696153707Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431278696128410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6397af34-1566-40b6-933c-db72171649d2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.696649750Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=464de6f5-5645-43c3-8fdc-0eddee427293 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.696722241Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=464de6f5-5645-43c3-8fdc-0eddee427293 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.696955839Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77e0f8ba49070d29bec8e5d622dd7ab13e23f105aaab0de1a5a92c01e16ed731,PodSandboxId:2a35c5864db38de4db2df9661fc907cd58533506ed2900ff55721ee9ef7e8073,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733431049357327660,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qjqvr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30f51118-fa9b-418f-a3a5-02a74107c7de,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05a6cfcd7e9ee9f891eb88ed5524553b60ff7eea407bf604b4bb647571d2d6bc,PodSandboxId:984c3b3f8fe032def0136810febfe8341f9285ab30c3ce2d6df35ec561964918,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910896086688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4ln9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f86a233b-c3f8-416b-ac76-f18dac2a1a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e8c78df0a6db76a9459104d39c6ef10926413179a5c4d311a9731258ab0f02,PodSandboxId:d7a154f9d8020a9378296ea0b16287d3fd54fb83d94bd93df469f8808d3670fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430910806734926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: e2a03e66-0718-48a3-9658-f70118ce6cae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6007ba446b77339e212a73381a8f38a179f608f925ff7a5ea1b5d82ae33932a,PodSandboxId:a344cd0e9a251c2b865c2838b5e161875e6d61340c124e5e6ddd88fdb8512dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910843663896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qhhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffff988-65
eb-4585-8ce4-de4df28c6b82,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0809642e9449ba90a816abcee2bd42a09b6b67ed76a448e2d048ccaf59218f61,PodSandboxId:faeac762b16891707c284f00eddfc16a831b7524637e5dbbc933c30cd8b2fe8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733430899010755558,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-62qw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0039aa-d5e2-49b9-adb4-ad93c96d22f0,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a16a5003f86348e862f9550da656af6282a73ec42e356d33277fd89aaf930df,PodSandboxId:6bc6d79587a62ca21788fe4de52bc6e9a4f3255de91b1f48365e7bc08408cac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430894
348055011,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tslx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d107dc4-2d8c-4e0d-aafc-5229161537df,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4431afbd69d99866cc27a419bb59862ef9f5cfdbb9bd9cd5bc4fc820eba9a01b,PodSandboxId:ae658c6069b4418ff55871310f01c6a0b5b0fe6e016403e3ff64bb02e0ac6a27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173343088582
7328958,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d8d33a00a36d98ae4f02477c2f0ef8f,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9238618cdfeaa8f5cda3dadc37d1e8157c6e781e2f421e149ea764a7138e42,PodSandboxId:110f95e5235dfc7dbce02b5aa1a8191d469ee5d3abffc5bfebf7a11f52ae34be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430883266472620,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3b0ba2fc46021faad87f06edada7a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2033f56968a9f0c2e15d272c7bf15a278a24d2f148a58cc7194c50096b822668,PodSandboxId:a6058ddd3ee58967eb32bd94a306e465b678afcb374ea3f93649506453556476,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430883263419187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9de31551106f5b54c143b52a0ba8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2211f15ae3cd3d69be3df3528c3d849f2b50615ce67935cdb97567a8a5fe19,PodSandboxId:f650305b876ca41a574dc76685713fd76500b7b3c5f17dbc66cdcd85cde99e34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430883237990702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91307b238b7c07f706a4534ff984ab88,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a056592a0f933a18207a4ce68694f30ffbd9c7502e8e1a5717c67850246dfb2,PodSandboxId:6d5d1a132984432f53f03c63a07dbd8083fa259a41160af40e8f0202f47d21ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430883178338000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf9467cd4c8887ece77367c75de1e85,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=464de6f5-5645-43c3-8fdc-0eddee427293 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.742945147Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=17216372-fa26-48f3-b4c0-e0ff2a695393 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.743065669Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=17216372-fa26-48f3-b4c0-e0ff2a695393 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.744668283Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=95fa12f2-d7d6-46db-885b-6b8d1f0d0f5a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.745096624Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431278745074324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=95fa12f2-d7d6-46db-885b-6b8d1f0d0f5a name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.745629154Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=246b21d0-fa42-4ef6-b4aa-8dd8ebae8ceb name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.745718584Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=246b21d0-fa42-4ef6-b4aa-8dd8ebae8ceb name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.745951204Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77e0f8ba49070d29bec8e5d622dd7ab13e23f105aaab0de1a5a92c01e16ed731,PodSandboxId:2a35c5864db38de4db2df9661fc907cd58533506ed2900ff55721ee9ef7e8073,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733431049357327660,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qjqvr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30f51118-fa9b-418f-a3a5-02a74107c7de,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05a6cfcd7e9ee9f891eb88ed5524553b60ff7eea407bf604b4bb647571d2d6bc,PodSandboxId:984c3b3f8fe032def0136810febfe8341f9285ab30c3ce2d6df35ec561964918,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910896086688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4ln9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f86a233b-c3f8-416b-ac76-f18dac2a1a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e8c78df0a6db76a9459104d39c6ef10926413179a5c4d311a9731258ab0f02,PodSandboxId:d7a154f9d8020a9378296ea0b16287d3fd54fb83d94bd93df469f8808d3670fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430910806734926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: e2a03e66-0718-48a3-9658-f70118ce6cae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6007ba446b77339e212a73381a8f38a179f608f925ff7a5ea1b5d82ae33932a,PodSandboxId:a344cd0e9a251c2b865c2838b5e161875e6d61340c124e5e6ddd88fdb8512dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910843663896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qhhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffff988-65
eb-4585-8ce4-de4df28c6b82,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0809642e9449ba90a816abcee2bd42a09b6b67ed76a448e2d048ccaf59218f61,PodSandboxId:faeac762b16891707c284f00eddfc16a831b7524637e5dbbc933c30cd8b2fe8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733430899010755558,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-62qw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0039aa-d5e2-49b9-adb4-ad93c96d22f0,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a16a5003f86348e862f9550da656af6282a73ec42e356d33277fd89aaf930df,PodSandboxId:6bc6d79587a62ca21788fe4de52bc6e9a4f3255de91b1f48365e7bc08408cac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430894
348055011,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tslx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d107dc4-2d8c-4e0d-aafc-5229161537df,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4431afbd69d99866cc27a419bb59862ef9f5cfdbb9bd9cd5bc4fc820eba9a01b,PodSandboxId:ae658c6069b4418ff55871310f01c6a0b5b0fe6e016403e3ff64bb02e0ac6a27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173343088582
7328958,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d8d33a00a36d98ae4f02477c2f0ef8f,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9238618cdfeaa8f5cda3dadc37d1e8157c6e781e2f421e149ea764a7138e42,PodSandboxId:110f95e5235dfc7dbce02b5aa1a8191d469ee5d3abffc5bfebf7a11f52ae34be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430883266472620,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3b0ba2fc46021faad87f06edada7a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2033f56968a9f0c2e15d272c7bf15a278a24d2f148a58cc7194c50096b822668,PodSandboxId:a6058ddd3ee58967eb32bd94a306e465b678afcb374ea3f93649506453556476,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430883263419187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9de31551106f5b54c143b52a0ba8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2211f15ae3cd3d69be3df3528c3d849f2b50615ce67935cdb97567a8a5fe19,PodSandboxId:f650305b876ca41a574dc76685713fd76500b7b3c5f17dbc66cdcd85cde99e34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430883237990702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91307b238b7c07f706a4534ff984ab88,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a056592a0f933a18207a4ce68694f30ffbd9c7502e8e1a5717c67850246dfb2,PodSandboxId:6d5d1a132984432f53f03c63a07dbd8083fa259a41160af40e8f0202f47d21ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430883178338000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf9467cd4c8887ece77367c75de1e85,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=246b21d0-fa42-4ef6-b4aa-8dd8ebae8ceb name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.785621807Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=69ec80e8-9c76-4c9c-9f33-c34022940d83 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.785695691Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=69ec80e8-9c76-4c9c-9f33-c34022940d83 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.786575809Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=968e15d8-b877-4b2f-b4f1-f9829220ed2c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.787039634Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431278786973248,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=968e15d8-b877-4b2f-b4f1-f9829220ed2c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.787575766Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5fa37059-8664-48bf-9f46-a05626017b23 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.787632750Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5fa37059-8664-48bf-9f46-a05626017b23 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.787865716Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77e0f8ba49070d29bec8e5d622dd7ab13e23f105aaab0de1a5a92c01e16ed731,PodSandboxId:2a35c5864db38de4db2df9661fc907cd58533506ed2900ff55721ee9ef7e8073,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733431049357327660,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qjqvr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30f51118-fa9b-418f-a3a5-02a74107c7de,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05a6cfcd7e9ee9f891eb88ed5524553b60ff7eea407bf604b4bb647571d2d6bc,PodSandboxId:984c3b3f8fe032def0136810febfe8341f9285ab30c3ce2d6df35ec561964918,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910896086688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4ln9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f86a233b-c3f8-416b-ac76-f18dac2a1a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e8c78df0a6db76a9459104d39c6ef10926413179a5c4d311a9731258ab0f02,PodSandboxId:d7a154f9d8020a9378296ea0b16287d3fd54fb83d94bd93df469f8808d3670fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430910806734926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: e2a03e66-0718-48a3-9658-f70118ce6cae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6007ba446b77339e212a73381a8f38a179f608f925ff7a5ea1b5d82ae33932a,PodSandboxId:a344cd0e9a251c2b865c2838b5e161875e6d61340c124e5e6ddd88fdb8512dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910843663896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qhhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffff988-65
eb-4585-8ce4-de4df28c6b82,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0809642e9449ba90a816abcee2bd42a09b6b67ed76a448e2d048ccaf59218f61,PodSandboxId:faeac762b16891707c284f00eddfc16a831b7524637e5dbbc933c30cd8b2fe8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733430899010755558,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-62qw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0039aa-d5e2-49b9-adb4-ad93c96d22f0,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a16a5003f86348e862f9550da656af6282a73ec42e356d33277fd89aaf930df,PodSandboxId:6bc6d79587a62ca21788fe4de52bc6e9a4f3255de91b1f48365e7bc08408cac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430894
348055011,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tslx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d107dc4-2d8c-4e0d-aafc-5229161537df,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4431afbd69d99866cc27a419bb59862ef9f5cfdbb9bd9cd5bc4fc820eba9a01b,PodSandboxId:ae658c6069b4418ff55871310f01c6a0b5b0fe6e016403e3ff64bb02e0ac6a27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173343088582
7328958,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d8d33a00a36d98ae4f02477c2f0ef8f,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9238618cdfeaa8f5cda3dadc37d1e8157c6e781e2f421e149ea764a7138e42,PodSandboxId:110f95e5235dfc7dbce02b5aa1a8191d469ee5d3abffc5bfebf7a11f52ae34be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430883266472620,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3b0ba2fc46021faad87f06edada7a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2033f56968a9f0c2e15d272c7bf15a278a24d2f148a58cc7194c50096b822668,PodSandboxId:a6058ddd3ee58967eb32bd94a306e465b678afcb374ea3f93649506453556476,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430883263419187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9de31551106f5b54c143b52a0ba8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2211f15ae3cd3d69be3df3528c3d849f2b50615ce67935cdb97567a8a5fe19,PodSandboxId:f650305b876ca41a574dc76685713fd76500b7b3c5f17dbc66cdcd85cde99e34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430883237990702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91307b238b7c07f706a4534ff984ab88,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a056592a0f933a18207a4ce68694f30ffbd9c7502e8e1a5717c67850246dfb2,PodSandboxId:6d5d1a132984432f53f03c63a07dbd8083fa259a41160af40e8f0202f47d21ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430883178338000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf9467cd4c8887ece77367c75de1e85,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5fa37059-8664-48bf-9f46-a05626017b23 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.824461769Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=943a7527-ed94-4f5d-b6e2-59ddbe45c0f3 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.824535297Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=943a7527-ed94-4f5d-b6e2-59ddbe45c0f3 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.825704915Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=369d0fe7-2029-4816-8221-6f60173f9270 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.826393813Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431278826370578,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=369d0fe7-2029-4816-8221-6f60173f9270 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.827753458Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=da47a3a7-ce03-49c2-8494-7576d7c2d10c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.827812852Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=da47a3a7-ce03-49c2-8494-7576d7c2d10c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:18 ha-689539 crio[658]: time="2024-12-05 20:41:18.828046935Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77e0f8ba49070d29bec8e5d622dd7ab13e23f105aaab0de1a5a92c01e16ed731,PodSandboxId:2a35c5864db38de4db2df9661fc907cd58533506ed2900ff55721ee9ef7e8073,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733431049357327660,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qjqvr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30f51118-fa9b-418f-a3a5-02a74107c7de,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05a6cfcd7e9ee9f891eb88ed5524553b60ff7eea407bf604b4bb647571d2d6bc,PodSandboxId:984c3b3f8fe032def0136810febfe8341f9285ab30c3ce2d6df35ec561964918,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910896086688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4ln9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f86a233b-c3f8-416b-ac76-f18dac2a1a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e8c78df0a6db76a9459104d39c6ef10926413179a5c4d311a9731258ab0f02,PodSandboxId:d7a154f9d8020a9378296ea0b16287d3fd54fb83d94bd93df469f8808d3670fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430910806734926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: e2a03e66-0718-48a3-9658-f70118ce6cae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6007ba446b77339e212a73381a8f38a179f608f925ff7a5ea1b5d82ae33932a,PodSandboxId:a344cd0e9a251c2b865c2838b5e161875e6d61340c124e5e6ddd88fdb8512dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910843663896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qhhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffff988-65
eb-4585-8ce4-de4df28c6b82,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0809642e9449ba90a816abcee2bd42a09b6b67ed76a448e2d048ccaf59218f61,PodSandboxId:faeac762b16891707c284f00eddfc16a831b7524637e5dbbc933c30cd8b2fe8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733430899010755558,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-62qw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0039aa-d5e2-49b9-adb4-ad93c96d22f0,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a16a5003f86348e862f9550da656af6282a73ec42e356d33277fd89aaf930df,PodSandboxId:6bc6d79587a62ca21788fe4de52bc6e9a4f3255de91b1f48365e7bc08408cac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430894
348055011,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tslx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d107dc4-2d8c-4e0d-aafc-5229161537df,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4431afbd69d99866cc27a419bb59862ef9f5cfdbb9bd9cd5bc4fc820eba9a01b,PodSandboxId:ae658c6069b4418ff55871310f01c6a0b5b0fe6e016403e3ff64bb02e0ac6a27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173343088582
7328958,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d8d33a00a36d98ae4f02477c2f0ef8f,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9238618cdfeaa8f5cda3dadc37d1e8157c6e781e2f421e149ea764a7138e42,PodSandboxId:110f95e5235dfc7dbce02b5aa1a8191d469ee5d3abffc5bfebf7a11f52ae34be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430883266472620,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3b0ba2fc46021faad87f06edada7a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2033f56968a9f0c2e15d272c7bf15a278a24d2f148a58cc7194c50096b822668,PodSandboxId:a6058ddd3ee58967eb32bd94a306e465b678afcb374ea3f93649506453556476,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430883263419187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9de31551106f5b54c143b52a0ba8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2211f15ae3cd3d69be3df3528c3d849f2b50615ce67935cdb97567a8a5fe19,PodSandboxId:f650305b876ca41a574dc76685713fd76500b7b3c5f17dbc66cdcd85cde99e34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430883237990702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91307b238b7c07f706a4534ff984ab88,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a056592a0f933a18207a4ce68694f30ffbd9c7502e8e1a5717c67850246dfb2,PodSandboxId:6d5d1a132984432f53f03c63a07dbd8083fa259a41160af40e8f0202f47d21ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430883178338000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf9467cd4c8887ece77367c75de1e85,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=da47a3a7-ce03-49c2-8494-7576d7c2d10c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	77e0f8ba49070       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   2a35c5864db38       busybox-7dff88458-qjqvr
	05a6cfcd7e9ee       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   984c3b3f8fe03       coredns-7c65d6cfc9-4ln9l
	c6007ba446b77       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   a344cd0e9a251       coredns-7c65d6cfc9-6qhhf
	74e8c78df0a6d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   d7a154f9d8020       storage-provisioner
	0809642e9449b       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   faeac762b1689       kindnet-62qw6
	0a16a5003f863       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   6bc6d79587a62       kube-proxy-9tslx
	4431afbd69d99       ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517     6 minutes ago       Running             kube-vip                  0                   ae658c6069b44       kube-vip-ha-689539
	1e9238618cdfe       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   110f95e5235df       etcd-ha-689539
	2033f56968a9f       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   a6058ddd3ee58       kube-scheduler-ha-689539
	cd2211f15ae3c       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   f650305b876ca       kube-apiserver-ha-689539
	4a056592a0f93       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   6d5d1a1329844       kube-controller-manager-ha-689539
	
	
	==> coredns [05a6cfcd7e9ee9f891eb88ed5524553b60ff7eea407bf604b4bb647571d2d6bc] <==
	[INFO] 10.244.0.4:44188 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002182194s
	[INFO] 10.244.1.2:41292 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000169551s
	[INFO] 10.244.1.2:38453 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003584311s
	[INFO] 10.244.1.2:36084 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000201777s
	[INFO] 10.244.1.2:49408 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133503s
	[INFO] 10.244.2.2:51533 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117849s
	[INFO] 10.244.2.2:34176 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00018539s
	[INFO] 10.244.2.2:43670 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000178861s
	[INFO] 10.244.2.2:56974 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148401s
	[INFO] 10.244.0.4:48841 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170335s
	[INFO] 10.244.0.4:43111 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001409238s
	[INFO] 10.244.0.4:36893 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093314s
	[INFO] 10.244.0.4:50555 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104324s
	[INFO] 10.244.1.2:43568 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116735s
	[INFO] 10.244.1.2:44480 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066571s
	[INFO] 10.244.1.2:60247 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058674s
	[INFO] 10.244.2.2:49472 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121084s
	[INFO] 10.244.0.4:57046 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160079s
	[INFO] 10.244.0.4:44460 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119738s
	[INFO] 10.244.1.2:37203 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178276s
	[INFO] 10.244.1.2:59196 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000213381s
	[INFO] 10.244.1.2:41969 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000159543s
	[INFO] 10.244.1.2:60294 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120046s
	[INFO] 10.244.2.2:42519 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177647s
	[INFO] 10.244.0.4:60229 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000056377s
	
	
	==> coredns [c6007ba446b77339e212a73381a8f38a179f608f925ff7a5ea1b5d82ae33932a] <==
	[INFO] 10.244.0.4:55355 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000054352s
	[INFO] 10.244.1.2:33933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161165s
	[INFO] 10.244.1.2:37174 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003884442s
	[INFO] 10.244.1.2:41634 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152882s
	[INFO] 10.244.1.2:60548 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176047s
	[INFO] 10.244.2.2:32947 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146675s
	[INFO] 10.244.2.2:60319 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001949836s
	[INFO] 10.244.2.2:48727 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001337037s
	[INFO] 10.244.2.2:56733 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149582s
	[INFO] 10.244.0.4:58646 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001891441s
	[INFO] 10.244.0.4:55352 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164932s
	[INFO] 10.244.0.4:54745 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100872s
	[INFO] 10.244.0.4:51217 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122097s
	[INFO] 10.244.1.2:52959 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137256s
	[INFO] 10.244.2.2:52934 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147111s
	[INFO] 10.244.2.2:34173 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119001s
	[INFO] 10.244.2.2:41909 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126707s
	[INFO] 10.244.0.4:46512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120087s
	[INFO] 10.244.0.4:35647 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000218624s
	[INFO] 10.244.2.2:51797 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000211308s
	[INFO] 10.244.2.2:38193 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000207361s
	[INFO] 10.244.2.2:55117 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135379s
	[INFO] 10.244.0.4:46265 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114618s
	[INFO] 10.244.0.4:43082 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000145713s
	[INFO] 10.244.0.4:59763 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071668s
	
	
	==> describe nodes <==
	Name:               ha-689539
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-689539
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=ha-689539
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T20_34_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:34:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-689539
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:41:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:37:53 +0000   Thu, 05 Dec 2024 20:34:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:37:53 +0000   Thu, 05 Dec 2024 20:34:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:37:53 +0000   Thu, 05 Dec 2024 20:34:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:37:53 +0000   Thu, 05 Dec 2024 20:35:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-689539
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3fcfe17cf29247c89ef6261408cdec57
	  System UUID:                3fcfe17c-f292-47c8-9ef6-261408cdec57
	  Boot ID:                    0967c504-1cf1-4d64-84b3-abc762e82552
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qjqvr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 coredns-7c65d6cfc9-4ln9l             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m26s
	  kube-system                 coredns-7c65d6cfc9-6qhhf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m26s
	  kube-system                 etcd-ha-689539                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m30s
	  kube-system                 kindnet-62qw6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m26s
	  kube-system                 kube-apiserver-ha-689539             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-controller-manager-ha-689539    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-proxy-9tslx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	  kube-system                 kube-scheduler-ha-689539             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 kube-vip-ha-689539                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m30s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m24s  kube-proxy       
	  Normal  Starting                 6m30s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m30s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m30s  kubelet          Node ha-689539 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m30s  kubelet          Node ha-689539 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m30s  kubelet          Node ha-689539 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m26s  node-controller  Node ha-689539 event: Registered Node ha-689539 in Controller
	  Normal  NodeReady                6m9s   kubelet          Node ha-689539 status is now: NodeReady
	  Normal  RegisteredNode           5m26s  node-controller  Node ha-689539 event: Registered Node ha-689539 in Controller
	  Normal  RegisteredNode           4m13s  node-controller  Node ha-689539 event: Registered Node ha-689539 in Controller
	
	
	Name:               ha-689539-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-689539-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=ha-689539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T20_35_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:35:45 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-689539-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:38:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 05 Dec 2024 20:37:46 +0000   Thu, 05 Dec 2024 20:39:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 05 Dec 2024 20:37:46 +0000   Thu, 05 Dec 2024 20:39:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 05 Dec 2024 20:37:46 +0000   Thu, 05 Dec 2024 20:39:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 05 Dec 2024 20:37:46 +0000   Thu, 05 Dec 2024 20:39:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    ha-689539-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2527423e09b7455fb49f08b5007d8aaf
	  System UUID:                2527423e-09b7-455f-b49f-08b5007d8aaf
	  Boot ID:                    693fb661-afc0-4a4b-8d66-7434b8ba3be0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7ss94                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 etcd-ha-689539-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m33s
	  kube-system                 kindnet-b7bf2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m34s
	  kube-system                 kube-apiserver-ha-689539-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m33s
	  kube-system                 kube-controller-manager-ha-689539-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-proxy-x2grl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-scheduler-ha-689539-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m28s
	  kube-system                 kube-vip-ha-689539-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m30s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m34s (x8 over 5m35s)  kubelet          Node ha-689539-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m34s (x8 over 5m35s)  kubelet          Node ha-689539-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m34s (x7 over 5m35s)  kubelet          Node ha-689539-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m31s                  node-controller  Node ha-689539-m02 event: Registered Node ha-689539-m02 in Controller
	  Normal  RegisteredNode           5m26s                  node-controller  Node ha-689539-m02 event: Registered Node ha-689539-m02 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-689539-m02 event: Registered Node ha-689539-m02 in Controller
	  Normal  NodeNotReady             118s                   node-controller  Node ha-689539-m02 status is now: NodeNotReady
	
	
	Name:               ha-689539-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-689539-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=ha-689539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T20_37_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:36:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-689539-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:41:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:37:59 +0000   Thu, 05 Dec 2024 20:36:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:37:59 +0000   Thu, 05 Dec 2024 20:36:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:37:59 +0000   Thu, 05 Dec 2024 20:36:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:37:59 +0000   Thu, 05 Dec 2024 20:37:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.133
	  Hostname:    ha-689539-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 23c133dbe3f244679269ca86c6b2111d
	  System UUID:                23c133db-e3f2-4467-9269-ca86c6b2111d
	  Boot ID:                    72ade07d-4013-4096-9862-81be930c4b6f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ns455                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 etcd-ha-689539-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m19s
	  kube-system                 kindnet-8kgs2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m21s
	  kube-system                 kube-apiserver-ha-689539-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m19s
	  kube-system                 kube-controller-manager-ha-689539-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m11s
	  kube-system                 kube-proxy-dktwc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m21s
	  kube-system                 kube-scheduler-ha-689539-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-vip-ha-689539-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m16s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m16s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m21s (x8 over 4m21s)  kubelet          Node ha-689539-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m21s (x8 over 4m21s)  kubelet          Node ha-689539-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m21s (x7 over 4m21s)  kubelet          Node ha-689539-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-689539-m03 event: Registered Node ha-689539-m03 in Controller
	  Normal  RegisteredNode           4m16s                  node-controller  Node ha-689539-m03 event: Registered Node ha-689539-m03 in Controller
	  Normal  RegisteredNode           4m13s                  node-controller  Node ha-689539-m03 event: Registered Node ha-689539-m03 in Controller
	
	
	Name:               ha-689539-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-689539-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=ha-689539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T20_38_06_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:38:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-689539-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:41:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:38:36 +0000   Thu, 05 Dec 2024 20:38:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:38:36 +0000   Thu, 05 Dec 2024 20:38:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:38:36 +0000   Thu, 05 Dec 2024 20:38:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:38:36 +0000   Thu, 05 Dec 2024 20:38:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.199
	  Hostname:    ha-689539-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d82a84b2609b470c8ddc16781015ee6d
	  System UUID:                d82a84b2-609b-470c-8ddc-16781015ee6d
	  Boot ID:                    c6aff0b9-eb25-4035-add5-dcc47c5c8348
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9xbpp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m14s
	  kube-system                 kube-proxy-kpbrd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m9s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  3m14s (x2 over 3m15s)  kubelet          Node ha-689539-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m14s (x2 over 3m15s)  kubelet          Node ha-689539-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m14s (x2 over 3m15s)  kubelet          Node ha-689539-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m13s                  node-controller  Node ha-689539-m04 event: Registered Node ha-689539-m04 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-689539-m04 event: Registered Node ha-689539-m04 in Controller
	  Normal  RegisteredNode           3m11s                  node-controller  Node ha-689539-m04 event: Registered Node ha-689539-m04 in Controller
	  Normal  NodeReady                2m54s                  kubelet          Node ha-689539-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 5 20:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049641] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039465] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.885977] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.016771] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.614002] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.712547] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.063478] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058841] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.182620] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.134116] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.286058] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +3.983127] systemd-fstab-generator[741]: Ignoring "noauto" option for root device
	[  +4.083666] systemd-fstab-generator[871]: Ignoring "noauto" option for root device
	[  +0.057216] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.189676] systemd-fstab-generator[1290]: Ignoring "noauto" option for root device
	[  +0.088639] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.119203] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.279281] kauditd_printk_skb: 19 callbacks suppressed
	[Dec 5 20:35] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [1e9238618cdfeaa8f5cda3dadc37d1e8157c6e781e2f421e149ea764a7138e42] <==
	{"level":"warn","ts":"2024-12-05T20:41:18.691717Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:18.730861Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:18.790917Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:18.890956Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:18.990900Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:19.120214Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:19.126860Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:19.135306Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:19.138957Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:19.142802Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:19.143431Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:19.150432Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:19.156098Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:19.161963Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:19.166374Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:19.169911Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:19.177482Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:19.183181Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:19.191358Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:19.191677Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:19.195049Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:19.198176Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:19.201666Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:19.206792Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:19.212049Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:41:19 up 7 min,  0 users,  load average: 0.30, 0.26, 0.12
	Linux ha-689539 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0809642e9449ba90a816abcee2bd42a09b6b67ed76a448e2d048ccaf59218f61] <==
	I1205 20:40:39.975671       1 main.go:324] Node ha-689539-m02 has CIDR [10.244.1.0/24] 
	I1205 20:40:49.971686       1 main.go:297] Handling node with IPs: map[192.168.39.133:{}]
	I1205 20:40:49.971811       1 main.go:324] Node ha-689539-m03 has CIDR [10.244.2.0/24] 
	I1205 20:40:49.972022       1 main.go:297] Handling node with IPs: map[192.168.39.199:{}]
	I1205 20:40:49.972032       1 main.go:324] Node ha-689539-m04 has CIDR [10.244.3.0/24] 
	I1205 20:40:49.972125       1 main.go:297] Handling node with IPs: map[192.168.39.220:{}]
	I1205 20:40:49.972132       1 main.go:301] handling current node
	I1205 20:40:49.972143       1 main.go:297] Handling node with IPs: map[192.168.39.224:{}]
	I1205 20:40:49.972147       1 main.go:324] Node ha-689539-m02 has CIDR [10.244.1.0/24] 
	I1205 20:40:59.972467       1 main.go:297] Handling node with IPs: map[192.168.39.220:{}]
	I1205 20:40:59.972574       1 main.go:301] handling current node
	I1205 20:40:59.972604       1 main.go:297] Handling node with IPs: map[192.168.39.224:{}]
	I1205 20:40:59.972621       1 main.go:324] Node ha-689539-m02 has CIDR [10.244.1.0/24] 
	I1205 20:40:59.972884       1 main.go:297] Handling node with IPs: map[192.168.39.133:{}]
	I1205 20:40:59.972920       1 main.go:324] Node ha-689539-m03 has CIDR [10.244.2.0/24] 
	I1205 20:40:59.973088       1 main.go:297] Handling node with IPs: map[192.168.39.199:{}]
	I1205 20:40:59.973124       1 main.go:324] Node ha-689539-m04 has CIDR [10.244.3.0/24] 
	I1205 20:41:09.973378       1 main.go:297] Handling node with IPs: map[192.168.39.220:{}]
	I1205 20:41:09.973428       1 main.go:301] handling current node
	I1205 20:41:09.973445       1 main.go:297] Handling node with IPs: map[192.168.39.224:{}]
	I1205 20:41:09.973450       1 main.go:324] Node ha-689539-m02 has CIDR [10.244.1.0/24] 
	I1205 20:41:09.973693       1 main.go:297] Handling node with IPs: map[192.168.39.133:{}]
	I1205 20:41:09.973706       1 main.go:324] Node ha-689539-m03 has CIDR [10.244.2.0/24] 
	I1205 20:41:09.973839       1 main.go:297] Handling node with IPs: map[192.168.39.199:{}]
	I1205 20:41:09.973846       1 main.go:324] Node ha-689539-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [cd2211f15ae3cd3d69be3df3528c3d849f2b50615ce67935cdb97567a8a5fe19] <==
	W1205 20:34:48.005731       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.220]
	I1205 20:34:48.006729       1 controller.go:615] quota admission added evaluator for: endpoints
	I1205 20:34:48.014987       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 20:34:48.223693       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1205 20:34:49.561495       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1205 20:34:49.580677       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1205 20:34:49.727059       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1205 20:34:53.679365       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1205 20:34:53.876376       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1205 20:37:30.985923       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44596: use of closed network connection
	E1205 20:37:31.179622       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44600: use of closed network connection
	E1205 20:37:31.382888       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44610: use of closed network connection
	E1205 20:37:31.582068       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44622: use of closed network connection
	E1205 20:37:31.774198       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44652: use of closed network connection
	E1205 20:37:31.958030       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44666: use of closed network connection
	E1205 20:37:32.140428       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44686: use of closed network connection
	E1205 20:37:32.322775       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44704: use of closed network connection
	E1205 20:37:32.515908       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44718: use of closed network connection
	E1205 20:37:32.837161       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44756: use of closed network connection
	E1205 20:37:33.022723       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44776: use of closed network connection
	E1205 20:37:33.209590       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44790: use of closed network connection
	E1205 20:37:33.392904       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44808: use of closed network connection
	E1205 20:37:33.581589       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44830: use of closed network connection
	E1205 20:37:33.765728       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44852: use of closed network connection
	W1205 20:38:58.016885       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.133 192.168.39.220]
	
	
	==> kube-controller-manager [4a056592a0f933a18207a4ce68694f30ffbd9c7502e8e1a5717c67850246dfb2] <==
	I1205 20:38:05.497632       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-689539-m04" podCIDRs=["10.244.3.0/24"]
	I1205 20:38:05.497693       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:05.497786       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:05.524265       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:06.322551       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:06.681995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:06.924972       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:08.069639       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:08.145190       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:08.229546       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:08.230026       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-689539-m04"
	I1205 20:38:08.272217       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:15.550194       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:25.133022       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:25.133713       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-689539-m04"
	I1205 20:38:25.164347       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:26.915918       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:36.091312       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:39:21.941441       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m02"
	I1205 20:39:21.941592       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-689539-m04"
	I1205 20:39:21.962901       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m02"
	I1205 20:39:21.988464       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.390336ms"
	I1205 20:39:21.988772       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="153.307µs"
	I1205 20:39:23.353917       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m02"
	I1205 20:39:27.137479       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m02"
	
	
	==> kube-proxy [0a16a5003f86348e862f9550da656af6282a73ec42e356d33277fd89aaf930df] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 20:34:54.543864       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 20:34:54.553756       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.220"]
	E1205 20:34:54.553891       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 20:34:54.586394       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 20:34:54.586517       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:34:54.586562       1 server_linux.go:169] "Using iptables Proxier"
	I1205 20:34:54.589547       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 20:34:54.589875       1 server.go:483] "Version info" version="v1.31.2"
	I1205 20:34:54.589968       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:34:54.592476       1 config.go:199] "Starting service config controller"
	I1205 20:34:54.594797       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 20:34:54.592516       1 config.go:105] "Starting endpoint slice config controller"
	I1205 20:34:54.594853       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 20:34:54.600348       1 config.go:328] "Starting node config controller"
	I1205 20:34:54.601332       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 20:34:54.695425       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 20:34:54.695636       1 shared_informer.go:320] Caches are synced for service config
	I1205 20:34:54.701955       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2033f56968a9f0c2e15d272c7bf15a278a24d2f148a58cc7194c50096b822668] <==
	E1205 20:34:47.293214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:34:47.324868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 20:34:47.324938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:34:47.340705       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 20:34:47.340848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 20:34:47.360711       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 20:34:47.360829       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1205 20:34:47.402644       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 20:34:47.402751       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:34:47.409130       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 20:34:47.409228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:34:47.580992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 20:34:47.581091       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1205 20:34:49.941328       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1205 20:37:26.487849       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ns455\": pod busybox-7dff88458-ns455 is already assigned to node \"ha-689539-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-ns455" node="ha-689539-m03"
	E1205 20:37:26.487974       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c47c5104-83dc-428d-8ded-5175eff6643c(default/busybox-7dff88458-ns455) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-ns455"
	E1205 20:37:26.488011       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ns455\": pod busybox-7dff88458-ns455 is already assigned to node \"ha-689539-m03\"" pod="default/busybox-7dff88458-ns455"
	I1205 20:37:26.488039       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-ns455" node="ha-689539-m03"
	E1205 20:37:26.529460       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-qjqvr\": pod busybox-7dff88458-qjqvr is already assigned to node \"ha-689539\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-qjqvr" node="ha-689539"
	E1205 20:37:26.531731       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-qjqvr\": pod busybox-7dff88458-qjqvr is already assigned to node \"ha-689539\"" pod="default/busybox-7dff88458-qjqvr"
	I1205 20:37:26.532951       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-qjqvr" node="ha-689539"
	E1205 20:38:05.558984       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mqzp5\": pod kindnet-mqzp5 is already assigned to node \"ha-689539-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-mqzp5" node="ha-689539-m04"
	E1205 20:38:05.565872       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 83d09bad-5a47-45ec-b467-0231a40ad9f0(kube-system/kindnet-mqzp5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-mqzp5"
	E1205 20:38:05.566103       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mqzp5\": pod kindnet-mqzp5 is already assigned to node \"ha-689539-m04\"" pod="kube-system/kindnet-mqzp5"
	I1205 20:38:05.566218       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mqzp5" node="ha-689539-m04"
	
	
	==> kubelet <==
	Dec 05 20:39:49 ha-689539 kubelet[1297]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 20:39:49 ha-689539 kubelet[1297]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 20:39:49 ha-689539 kubelet[1297]: E1205 20:39:49.801882    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431189801654914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:39:49 ha-689539 kubelet[1297]: E1205 20:39:49.801906    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431189801654914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:39:59 ha-689539 kubelet[1297]: E1205 20:39:59.803793    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431199803419655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:39:59 ha-689539 kubelet[1297]: E1205 20:39:59.804270    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431199803419655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:09 ha-689539 kubelet[1297]: E1205 20:40:09.807394    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431209806841990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:09 ha-689539 kubelet[1297]: E1205 20:40:09.807450    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431209806841990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:19 ha-689539 kubelet[1297]: E1205 20:40:19.811009    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431219810315680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:19 ha-689539 kubelet[1297]: E1205 20:40:19.811103    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431219810315680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:29 ha-689539 kubelet[1297]: E1205 20:40:29.812356    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431229811933429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:29 ha-689539 kubelet[1297]: E1205 20:40:29.812422    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431229811933429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:39 ha-689539 kubelet[1297]: E1205 20:40:39.814301    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431239813835089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:39 ha-689539 kubelet[1297]: E1205 20:40:39.814613    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431239813835089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:49 ha-689539 kubelet[1297]: E1205 20:40:49.759293    1297 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 20:40:49 ha-689539 kubelet[1297]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 20:40:49 ha-689539 kubelet[1297]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 20:40:49 ha-689539 kubelet[1297]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 20:40:49 ha-689539 kubelet[1297]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 20:40:49 ha-689539 kubelet[1297]: E1205 20:40:49.816382    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431249816019108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:49 ha-689539 kubelet[1297]: E1205 20:40:49.816591    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431249816019108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:59 ha-689539 kubelet[1297]: E1205 20:40:59.821073    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431259819028062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:59 ha-689539 kubelet[1297]: E1205 20:40:59.821410    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431259819028062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:41:09 ha-689539 kubelet[1297]: E1205 20:41:09.823458    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431269823063482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:41:09 ha-689539 kubelet[1297]: E1205 20:41:09.823549    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431269823063482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-689539 -n ha-689539
helpers_test.go:261: (dbg) Run:  kubectl --context ha-689539 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (6.39s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.881906755s)
ha_test.go:309: expected profile "ha-689539" in json of 'profile list' to have "HAppy" status but have "Unknown" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-689539\",\"Status\":\"Unknown\",\"Config\":{\"Name\":\"ha-689539\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"kvm2\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\
"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.31.2\",\"ClusterName\":\"ha-689539\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.39.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.39.220\",\"Port\":8443,\"Kuberne
tesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.168.39.224\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.168.39.133\",\"Port\":8443,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.39.199\",\"Port\":0,\"KubernetesVersion\":\"v1.31.2\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt
\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/home/jenkins:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",
\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-689539 -n ha-689539
helpers_test.go:244: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-689539 logs -n 25: (1.388489241s)
helpers_test.go:252: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m03:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539:/home/docker/cp-test_ha-689539-m03_ha-689539.txt                       |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539 sudo cat                                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m03_ha-689539.txt                                 |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m03:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m02:/home/docker/cp-test_ha-689539-m03_ha-689539-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m02 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m03_ha-689539-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m03:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04:/home/docker/cp-test_ha-689539-m03_ha-689539-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m04 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m03_ha-689539-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-689539 cp testdata/cp-test.txt                                                | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1989065978/001/cp-test_ha-689539-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539:/home/docker/cp-test_ha-689539-m04_ha-689539.txt                       |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539 sudo cat                                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m04_ha-689539.txt                                 |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m02:/home/docker/cp-test_ha-689539-m04_ha-689539-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m02 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m04_ha-689539-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03:/home/docker/cp-test_ha-689539-m04_ha-689539-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m03 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m04_ha-689539-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-689539 node stop m02 -v=7                                                     | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-689539 node start m02 -v=7                                                    | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:34:08
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:34:08.074114  310801 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:34:08.074261  310801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:34:08.074272  310801 out.go:358] Setting ErrFile to fd 2...
	I1205 20:34:08.074277  310801 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:34:08.074494  310801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 20:34:08.075118  310801 out.go:352] Setting JSON to false
	I1205 20:34:08.076226  310801 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":11796,"bootTime":1733419052,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:34:08.076305  310801 start.go:139] virtualization: kvm guest
	I1205 20:34:08.078657  310801 out.go:177] * [ha-689539] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:34:08.080623  310801 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 20:34:08.080628  310801 notify.go:220] Checking for updates...
	I1205 20:34:08.083473  310801 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:34:08.084883  310801 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:34:08.086219  310801 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:34:08.087594  310801 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:34:08.088859  310801 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:34:08.090289  310801 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:34:08.128174  310801 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 20:34:08.129457  310801 start.go:297] selected driver: kvm2
	I1205 20:34:08.129474  310801 start.go:901] validating driver "kvm2" against <nil>
	I1205 20:34:08.129492  310801 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:34:08.130313  310801 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:34:08.130391  310801 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:34:08.148061  310801 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:34:08.148119  310801 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 20:34:08.148394  310801 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:34:08.148426  310801 cni.go:84] Creating CNI manager for ""
	I1205 20:34:08.148467  310801 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I1205 20:34:08.148479  310801 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1205 20:34:08.148546  310801 start.go:340] cluster config:
	{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:34:08.148670  310801 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:34:08.150579  310801 out.go:177] * Starting "ha-689539" primary control-plane node in "ha-689539" cluster
	I1205 20:34:08.152101  310801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:34:08.152144  310801 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 20:34:08.152158  310801 cache.go:56] Caching tarball of preloaded images
	I1205 20:34:08.152281  310801 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:34:08.152296  310801 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:34:08.152605  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:34:08.152651  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json: {Name:mk27baab499187c123d1f411d3400f014a73dd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:08.152842  310801 start.go:360] acquireMachinesLock for ha-689539: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:34:08.152881  310801 start.go:364] duration metric: took 21.06µs to acquireMachinesLock for "ha-689539"
	I1205 20:34:08.152908  310801 start.go:93] Provisioning new machine with config: &{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:34:08.152972  310801 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 20:34:08.154751  310801 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 20:34:08.154908  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:08.154972  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:08.170934  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46169
	I1205 20:34:08.171495  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:08.172063  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:08.172087  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:08.172451  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:08.172674  310801 main.go:141] libmachine: (ha-689539) Calling .GetMachineName
	I1205 20:34:08.172837  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:08.172996  310801 start.go:159] libmachine.API.Create for "ha-689539" (driver="kvm2")
	I1205 20:34:08.173045  310801 client.go:168] LocalClient.Create starting
	I1205 20:34:08.173086  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem
	I1205 20:34:08.173121  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:34:08.173139  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:34:08.173198  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem
	I1205 20:34:08.173225  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:34:08.173243  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:34:08.173268  310801 main.go:141] libmachine: Running pre-create checks...
	I1205 20:34:08.173282  310801 main.go:141] libmachine: (ha-689539) Calling .PreCreateCheck
	I1205 20:34:08.173629  310801 main.go:141] libmachine: (ha-689539) Calling .GetConfigRaw
	I1205 20:34:08.174111  310801 main.go:141] libmachine: Creating machine...
	I1205 20:34:08.174129  310801 main.go:141] libmachine: (ha-689539) Calling .Create
	I1205 20:34:08.174265  310801 main.go:141] libmachine: (ha-689539) Creating KVM machine...
	I1205 20:34:08.175744  310801 main.go:141] libmachine: (ha-689539) DBG | found existing default KVM network
	I1205 20:34:08.176445  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:08.176315  310824 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000221330}
	I1205 20:34:08.176491  310801 main.go:141] libmachine: (ha-689539) DBG | created network xml: 
	I1205 20:34:08.176507  310801 main.go:141] libmachine: (ha-689539) DBG | <network>
	I1205 20:34:08.176530  310801 main.go:141] libmachine: (ha-689539) DBG |   <name>mk-ha-689539</name>
	I1205 20:34:08.176545  310801 main.go:141] libmachine: (ha-689539) DBG |   <dns enable='no'/>
	I1205 20:34:08.176564  310801 main.go:141] libmachine: (ha-689539) DBG |   
	I1205 20:34:08.176591  310801 main.go:141] libmachine: (ha-689539) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I1205 20:34:08.176606  310801 main.go:141] libmachine: (ha-689539) DBG |     <dhcp>
	I1205 20:34:08.176611  310801 main.go:141] libmachine: (ha-689539) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I1205 20:34:08.176616  310801 main.go:141] libmachine: (ha-689539) DBG |     </dhcp>
	I1205 20:34:08.176621  310801 main.go:141] libmachine: (ha-689539) DBG |   </ip>
	I1205 20:34:08.176666  310801 main.go:141] libmachine: (ha-689539) DBG |   
	I1205 20:34:08.176693  310801 main.go:141] libmachine: (ha-689539) DBG | </network>
	I1205 20:34:08.176707  310801 main.go:141] libmachine: (ha-689539) DBG | 
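	(The <network> XML dumped above is what the kvm2 driver hands to libvirt before building the VM. As a rough, hand-written illustration of that step, not minikube's actual code path, the Go sketch below defines and starts an equivalent network with the libvirt.org/go/libvirt bindings; only the XML body and the qemu:///system URI are taken from the log, the program structure is assumed.)
	
	package main
	
	import (
		"log"
	
		libvirt "libvirt.org/go/libvirt"
	)
	
	// networkXML mirrors the mk-ha-689539 definition printed in the log above.
	const networkXML = `<network>
	  <name>mk-ha-689539</name>
	  <dns enable='no'/>
	  <ip address='192.168.39.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.39.2' end='192.168.39.253'/>
	    </dhcp>
	  </ip>
	</network>`
	
	func main() {
		conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the cluster config
		if err != nil {
			log.Fatalf("connect: %v", err)
		}
		defer conn.Close()
	
		net, err := conn.NetworkDefineXML(networkXML) // persist the network definition
		if err != nil {
			log.Fatalf("define network: %v", err)
		}
		defer net.Free()
	
		if err := net.Create(); err != nil { // start it, equivalent to virsh net-start mk-ha-689539
			log.Fatalf("start network: %v", err)
		}
	}
	
	(The resulting definition can be inspected afterwards with virsh net-dumpxml mk-ha-689539.)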
	I1205 20:34:08.181749  310801 main.go:141] libmachine: (ha-689539) DBG | trying to create private KVM network mk-ha-689539 192.168.39.0/24...
	I1205 20:34:08.259729  310801 main.go:141] libmachine: (ha-689539) Setting up store path in /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539 ...
	I1205 20:34:08.259779  310801 main.go:141] libmachine: (ha-689539) DBG | private KVM network mk-ha-689539 192.168.39.0/24 created
	I1205 20:34:08.259792  310801 main.go:141] libmachine: (ha-689539) Building disk image from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 20:34:08.259831  310801 main.go:141] libmachine: (ha-689539) Downloading /home/jenkins/minikube-integration/20053-293485/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 20:34:08.259902  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:08.259565  310824 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:34:08.570701  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:08.570509  310824 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa...
	I1205 20:34:08.656946  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:08.656740  310824 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/ha-689539.rawdisk...
	I1205 20:34:08.656979  310801 main.go:141] libmachine: (ha-689539) DBG | Writing magic tar header
	I1205 20:34:08.656999  310801 main.go:141] libmachine: (ha-689539) DBG | Writing SSH key tar header
	I1205 20:34:08.657012  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:08.656919  310824 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539 ...
	I1205 20:34:08.657032  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539
	I1205 20:34:08.657155  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539 (perms=drwx------)
	I1205 20:34:08.657196  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines
	I1205 20:34:08.657214  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines (perms=drwxr-xr-x)
	I1205 20:34:08.657237  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube (perms=drwxr-xr-x)
	I1205 20:34:08.657251  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485 (perms=drwxrwxr-x)
	I1205 20:34:08.657266  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:34:08.657283  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485
	I1205 20:34:08.657297  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 20:34:08.657313  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home/jenkins
	I1205 20:34:08.657327  310801 main.go:141] libmachine: (ha-689539) DBG | Checking permissions on dir: /home
	I1205 20:34:08.657340  310801 main.go:141] libmachine: (ha-689539) DBG | Skipping /home - not owner
	I1205 20:34:08.657354  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 20:34:08.657370  310801 main.go:141] libmachine: (ha-689539) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 20:34:08.657383  310801 main.go:141] libmachine: (ha-689539) Creating domain...
	I1205 20:34:08.658677  310801 main.go:141] libmachine: (ha-689539) define libvirt domain using xml: 
	I1205 20:34:08.658706  310801 main.go:141] libmachine: (ha-689539) <domain type='kvm'>
	I1205 20:34:08.658718  310801 main.go:141] libmachine: (ha-689539)   <name>ha-689539</name>
	I1205 20:34:08.658725  310801 main.go:141] libmachine: (ha-689539)   <memory unit='MiB'>2200</memory>
	I1205 20:34:08.658735  310801 main.go:141] libmachine: (ha-689539)   <vcpu>2</vcpu>
	I1205 20:34:08.658745  310801 main.go:141] libmachine: (ha-689539)   <features>
	I1205 20:34:08.658752  310801 main.go:141] libmachine: (ha-689539)     <acpi/>
	I1205 20:34:08.658759  310801 main.go:141] libmachine: (ha-689539)     <apic/>
	I1205 20:34:08.658767  310801 main.go:141] libmachine: (ha-689539)     <pae/>
	I1205 20:34:08.658787  310801 main.go:141] libmachine: (ha-689539)     
	I1205 20:34:08.658823  310801 main.go:141] libmachine: (ha-689539)   </features>
	I1205 20:34:08.658849  310801 main.go:141] libmachine: (ha-689539)   <cpu mode='host-passthrough'>
	I1205 20:34:08.658858  310801 main.go:141] libmachine: (ha-689539)   
	I1205 20:34:08.658863  310801 main.go:141] libmachine: (ha-689539)   </cpu>
	I1205 20:34:08.658869  310801 main.go:141] libmachine: (ha-689539)   <os>
	I1205 20:34:08.658874  310801 main.go:141] libmachine: (ha-689539)     <type>hvm</type>
	I1205 20:34:08.658880  310801 main.go:141] libmachine: (ha-689539)     <boot dev='cdrom'/>
	I1205 20:34:08.658885  310801 main.go:141] libmachine: (ha-689539)     <boot dev='hd'/>
	I1205 20:34:08.658892  310801 main.go:141] libmachine: (ha-689539)     <bootmenu enable='no'/>
	I1205 20:34:08.658896  310801 main.go:141] libmachine: (ha-689539)   </os>
	I1205 20:34:08.658902  310801 main.go:141] libmachine: (ha-689539)   <devices>
	I1205 20:34:08.658909  310801 main.go:141] libmachine: (ha-689539)     <disk type='file' device='cdrom'>
	I1205 20:34:08.658920  310801 main.go:141] libmachine: (ha-689539)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/boot2docker.iso'/>
	I1205 20:34:08.658932  310801 main.go:141] libmachine: (ha-689539)       <target dev='hdc' bus='scsi'/>
	I1205 20:34:08.658940  310801 main.go:141] libmachine: (ha-689539)       <readonly/>
	I1205 20:34:08.658954  310801 main.go:141] libmachine: (ha-689539)     </disk>
	I1205 20:34:08.658974  310801 main.go:141] libmachine: (ha-689539)     <disk type='file' device='disk'>
	I1205 20:34:08.658987  310801 main.go:141] libmachine: (ha-689539)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 20:34:08.659004  310801 main.go:141] libmachine: (ha-689539)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/ha-689539.rawdisk'/>
	I1205 20:34:08.659016  310801 main.go:141] libmachine: (ha-689539)       <target dev='hda' bus='virtio'/>
	I1205 20:34:08.659054  310801 main.go:141] libmachine: (ha-689539)     </disk>
	I1205 20:34:08.659076  310801 main.go:141] libmachine: (ha-689539)     <interface type='network'>
	I1205 20:34:08.659087  310801 main.go:141] libmachine: (ha-689539)       <source network='mk-ha-689539'/>
	I1205 20:34:08.659094  310801 main.go:141] libmachine: (ha-689539)       <model type='virtio'/>
	I1205 20:34:08.659106  310801 main.go:141] libmachine: (ha-689539)     </interface>
	I1205 20:34:08.659117  310801 main.go:141] libmachine: (ha-689539)     <interface type='network'>
	I1205 20:34:08.659126  310801 main.go:141] libmachine: (ha-689539)       <source network='default'/>
	I1205 20:34:08.659140  310801 main.go:141] libmachine: (ha-689539)       <model type='virtio'/>
	I1205 20:34:08.659151  310801 main.go:141] libmachine: (ha-689539)     </interface>
	I1205 20:34:08.659160  310801 main.go:141] libmachine: (ha-689539)     <serial type='pty'>
	I1205 20:34:08.659167  310801 main.go:141] libmachine: (ha-689539)       <target port='0'/>
	I1205 20:34:08.659176  310801 main.go:141] libmachine: (ha-689539)     </serial>
	I1205 20:34:08.659185  310801 main.go:141] libmachine: (ha-689539)     <console type='pty'>
	I1205 20:34:08.659196  310801 main.go:141] libmachine: (ha-689539)       <target type='serial' port='0'/>
	I1205 20:34:08.659214  310801 main.go:141] libmachine: (ha-689539)     </console>
	I1205 20:34:08.659224  310801 main.go:141] libmachine: (ha-689539)     <rng model='virtio'>
	I1205 20:34:08.659233  310801 main.go:141] libmachine: (ha-689539)       <backend model='random'>/dev/random</backend>
	I1205 20:34:08.659242  310801 main.go:141] libmachine: (ha-689539)     </rng>
	I1205 20:34:08.659248  310801 main.go:141] libmachine: (ha-689539)     
	I1205 20:34:08.659252  310801 main.go:141] libmachine: (ha-689539)     
	I1205 20:34:08.659260  310801 main.go:141] libmachine: (ha-689539)   </devices>
	I1205 20:34:08.659270  310801 main.go:141] libmachine: (ha-689539) </domain>
	I1205 20:34:08.659282  310801 main.go:141] libmachine: (ha-689539) 
	I1205 20:34:08.664073  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:a3:09:de in network default
	I1205 20:34:08.664657  310801 main.go:141] libmachine: (ha-689539) Ensuring networks are active...
	I1205 20:34:08.664680  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:08.665393  310801 main.go:141] libmachine: (ha-689539) Ensuring network default is active
	I1205 20:34:08.665790  310801 main.go:141] libmachine: (ha-689539) Ensuring network mk-ha-689539 is active
	I1205 20:34:08.666343  310801 main.go:141] libmachine: (ha-689539) Getting domain xml...
	I1205 20:34:08.667190  310801 main.go:141] libmachine: (ha-689539) Creating domain...
	I1205 20:34:09.889755  310801 main.go:141] libmachine: (ha-689539) Waiting to get IP...
	I1205 20:34:09.890610  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:09.890981  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:09.891034  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:09.890969  310824 retry.go:31] will retry after 284.885869ms: waiting for machine to come up
	I1205 20:34:10.177621  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:10.178156  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:10.178184  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:10.178109  310824 retry.go:31] will retry after 378.211833ms: waiting for machine to come up
	I1205 20:34:10.557655  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:10.558178  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:10.558212  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:10.558123  310824 retry.go:31] will retry after 473.788163ms: waiting for machine to come up
	I1205 20:34:11.033830  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:11.034246  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:11.034277  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:11.034195  310824 retry.go:31] will retry after 418.138315ms: waiting for machine to come up
	I1205 20:34:11.453849  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:11.454287  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:11.454318  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:11.454229  310824 retry.go:31] will retry after 720.041954ms: waiting for machine to come up
	I1205 20:34:12.176162  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:12.176610  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:12.176635  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:12.176551  310824 retry.go:31] will retry after 769.230458ms: waiting for machine to come up
	I1205 20:34:12.947323  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:12.947645  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:12.947682  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:12.947615  310824 retry.go:31] will retry after 799.111179ms: waiting for machine to come up
	I1205 20:34:13.748171  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:13.748640  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:13.748669  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:13.748592  310824 retry.go:31] will retry after 1.052951937s: waiting for machine to come up
	I1205 20:34:14.802913  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:14.803309  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:14.803340  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:14.803262  310824 retry.go:31] will retry after 1.685899285s: waiting for machine to come up
	I1205 20:34:16.491286  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:16.491828  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:16.491858  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:16.491779  310824 retry.go:31] will retry after 1.722453601s: waiting for machine to come up
	I1205 20:34:18.215846  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:18.216281  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:18.216316  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:18.216229  310824 retry.go:31] will retry after 1.847118783s: waiting for machine to come up
	I1205 20:34:20.066408  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:20.066971  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:20.067002  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:20.066922  310824 retry.go:31] will retry after 2.216585531s: waiting for machine to come up
	I1205 20:34:22.284845  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:22.285380  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:22.285409  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:22.285296  310824 retry.go:31] will retry after 4.35742756s: waiting for machine to come up
	I1205 20:34:26.646498  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:26.646898  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find current IP address of domain ha-689539 in network mk-ha-689539
	I1205 20:34:26.646925  310801 main.go:141] libmachine: (ha-689539) DBG | I1205 20:34:26.646863  310824 retry.go:31] will retry after 4.830110521s: waiting for machine to come up
	I1205 20:34:31.481950  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.482551  310801 main.go:141] libmachine: (ha-689539) Found IP for machine: 192.168.39.220
	I1205 20:34:31.482584  310801 main.go:141] libmachine: (ha-689539) Reserving static IP address...
	I1205 20:34:31.482599  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has current primary IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.483029  310801 main.go:141] libmachine: (ha-689539) DBG | unable to find host DHCP lease matching {name: "ha-689539", mac: "52:54:00:92:19:fb", ip: "192.168.39.220"} in network mk-ha-689539
	I1205 20:34:31.565523  310801 main.go:141] libmachine: (ha-689539) Reserved static IP address: 192.168.39.220
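	(The retries above, from 20:34:09 to 20:34:31, are the driver polling libvirt for a DHCP lease matching the domain's MAC address 52:54:00:92:19:fb. Below is a minimal sketch of such a polling loop, reusing the bindings from the previous snippet plus the standard fmt, strings and time packages; the helper name, fixed sleep and timeout handling are assumptions, while minikube itself uses the increasing backoff shown in the log.)
	
	// waitForIP polls a libvirt network's DHCP leases until one matches mac,
	// then returns the leased IPv4 address.
	func waitForIP(conn *libvirt.Connect, netName, mac string, timeout time.Duration) (string, error) {
		net, err := conn.LookupNetworkByName(netName) // e.g. "mk-ha-689539"
		if err != nil {
			return "", err
		}
		defer net.Free()
	
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			leases, err := net.GetDHCPLeases()
			if err != nil {
				return "", err
			}
			for _, l := range leases {
				if strings.EqualFold(l.Mac, mac) {
					return l.IPaddr, nil // 192.168.39.220 in the run above
				}
			}
			time.Sleep(2 * time.Second) // flat sleep for simplicity; the driver backs off progressively
		}
		return "", fmt.Errorf("no DHCP lease for %s in network %s", mac, netName)
	}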
	I1205 20:34:31.565552  310801 main.go:141] libmachine: (ha-689539) Waiting for SSH to be available...
	I1205 20:34:31.565561  310801 main.go:141] libmachine: (ha-689539) DBG | Getting to WaitForSSH function...
	I1205 20:34:31.568330  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.568827  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:31.568862  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.568958  310801 main.go:141] libmachine: (ha-689539) DBG | Using SSH client type: external
	I1205 20:34:31.568991  310801 main.go:141] libmachine: (ha-689539) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa (-rw-------)
	I1205 20:34:31.569027  310801 main.go:141] libmachine: (ha-689539) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:34:31.569037  310801 main.go:141] libmachine: (ha-689539) DBG | About to run SSH command:
	I1205 20:34:31.569050  310801 main.go:141] libmachine: (ha-689539) DBG | exit 0
	I1205 20:34:31.694133  310801 main.go:141] libmachine: (ha-689539) DBG | SSH cmd err, output: <nil>: 
	I1205 20:34:31.694455  310801 main.go:141] libmachine: (ha-689539) KVM machine creation complete!
	I1205 20:34:31.694719  310801 main.go:141] libmachine: (ha-689539) Calling .GetConfigRaw
	I1205 20:34:31.695354  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:31.695562  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:31.695749  310801 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 20:34:31.695765  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:34:31.697139  310801 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 20:34:31.697166  310801 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 20:34:31.697171  310801 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 20:34:31.697176  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:31.699900  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.700272  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:31.700328  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.700454  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:31.700642  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.700807  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.700983  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:31.701155  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:31.701416  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:31.701430  310801 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 20:34:31.797327  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:34:31.797354  310801 main.go:141] libmachine: Detecting the provisioner...
	I1205 20:34:31.797363  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:31.800489  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.800822  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:31.800853  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.801025  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:31.801240  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.801464  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.801591  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:31.801777  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:31.801991  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:31.802002  310801 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 20:34:31.902674  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 20:34:31.902768  310801 main.go:141] libmachine: found compatible host: buildroot
	I1205 20:34:31.902779  310801 main.go:141] libmachine: Provisioning with buildroot...
	I1205 20:34:31.902787  310801 main.go:141] libmachine: (ha-689539) Calling .GetMachineName
	I1205 20:34:31.903088  310801 buildroot.go:166] provisioning hostname "ha-689539"
	I1205 20:34:31.903116  310801 main.go:141] libmachine: (ha-689539) Calling .GetMachineName
	I1205 20:34:31.903428  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:31.906237  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.906571  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:31.906599  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:31.906752  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:31.906940  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.907099  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:31.907232  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:31.907446  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:31.907634  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:31.907655  310801 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-689539 && echo "ha-689539" | sudo tee /etc/hostname
	I1205 20:34:32.020236  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-689539
	
	I1205 20:34:32.020265  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.023604  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.023912  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.023942  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.024133  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.024345  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.024501  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.024686  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.024863  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:32.025085  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:32.025111  310801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-689539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-689539/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-689539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:34:32.131661  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:34:32.131696  310801 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 20:34:32.131742  310801 buildroot.go:174] setting up certificates
	I1205 20:34:32.131755  310801 provision.go:84] configureAuth start
	I1205 20:34:32.131768  310801 main.go:141] libmachine: (ha-689539) Calling .GetMachineName
	I1205 20:34:32.132088  310801 main.go:141] libmachine: (ha-689539) Calling .GetIP
	I1205 20:34:32.135389  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.135787  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.135825  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.136069  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.138588  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.138916  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.138949  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.139086  310801 provision.go:143] copyHostCerts
	I1205 20:34:32.139123  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:34:32.139178  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 20:34:32.139206  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:34:32.139295  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 20:34:32.139433  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:34:32.139460  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 20:34:32.139468  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:34:32.139515  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 20:34:32.139597  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:34:32.139626  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 20:34:32.139634  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:34:32.139671  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 20:34:32.139758  310801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.ha-689539 san=[127.0.0.1 192.168.39.220 ha-689539 localhost minikube]
	I1205 20:34:32.367430  310801 provision.go:177] copyRemoteCerts
	I1205 20:34:32.367531  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:34:32.367565  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.370702  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.371025  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.371063  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.371206  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.371413  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.371586  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.371717  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:32.452327  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 20:34:32.452426  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 20:34:32.476869  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 20:34:32.476958  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1205 20:34:32.501389  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 20:34:32.501501  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:34:32.525226  310801 provision.go:87] duration metric: took 393.452946ms to configureAuth
	I1205 20:34:32.525267  310801 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:34:32.525488  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:34:32.525609  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.528470  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.528833  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.528864  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.529057  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.529285  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.529497  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.529678  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.529839  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:32.530046  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:32.530066  310801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:34:32.733723  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:34:32.733755  310801 main.go:141] libmachine: Checking connection to Docker...
	I1205 20:34:32.733816  310801 main.go:141] libmachine: (ha-689539) Calling .GetURL
	I1205 20:34:32.735231  310801 main.go:141] libmachine: (ha-689539) DBG | Using libvirt version 6000000
	I1205 20:34:32.737329  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.737769  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.737804  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.737993  310801 main.go:141] libmachine: Docker is up and running!
	I1205 20:34:32.738008  310801 main.go:141] libmachine: Reticulating splines...
	I1205 20:34:32.738015  310801 client.go:171] duration metric: took 24.564959064s to LocalClient.Create
	I1205 20:34:32.738046  310801 start.go:167] duration metric: took 24.565052554s to libmachine.API.Create "ha-689539"
	I1205 20:34:32.738061  310801 start.go:293] postStartSetup for "ha-689539" (driver="kvm2")
	I1205 20:34:32.738073  310801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:34:32.738096  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:32.738400  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:34:32.738433  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.740621  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.740891  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.740921  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.741034  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.741256  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.741431  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.741595  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:32.820810  310801 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:34:32.825193  310801 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:34:32.825227  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 20:34:32.825326  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 20:34:32.825428  310801 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 20:34:32.825442  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /etc/ssl/certs/3007652.pem
	I1205 20:34:32.825556  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:34:32.835549  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:34:32.859405  310801 start.go:296] duration metric: took 121.327589ms for postStartSetup
	I1205 20:34:32.859464  310801 main.go:141] libmachine: (ha-689539) Calling .GetConfigRaw
	I1205 20:34:32.860144  310801 main.go:141] libmachine: (ha-689539) Calling .GetIP
	I1205 20:34:32.862916  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.863271  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.863303  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.863582  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:34:32.863831  310801 start.go:128] duration metric: took 24.710845565s to createHost
	I1205 20:34:32.863871  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.866291  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.866627  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.866656  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.866902  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.867141  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.867419  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.867570  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.867744  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:34:32.867965  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:34:32.867993  310801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:34:32.966710  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430872.933221119
	
	I1205 20:34:32.966748  310801 fix.go:216] guest clock: 1733430872.933221119
	I1205 20:34:32.966760  310801 fix.go:229] Guest: 2024-12-05 20:34:32.933221119 +0000 UTC Remote: 2024-12-05 20:34:32.863851557 +0000 UTC m=+24.831728555 (delta=69.369562ms)
	I1205 20:34:32.966789  310801 fix.go:200] guest clock delta is within tolerance: 69.369562ms
	I1205 20:34:32.966794  310801 start.go:83] releasing machines lock for "ha-689539", held for 24.813901478s
	I1205 20:34:32.966815  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:32.967103  310801 main.go:141] libmachine: (ha-689539) Calling .GetIP
	I1205 20:34:32.970285  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.970747  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.970797  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.970954  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:32.971526  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:32.971766  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:32.971872  310801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:34:32.971926  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.972023  310801 ssh_runner.go:195] Run: cat /version.json
	I1205 20:34:32.972052  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:32.975300  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.975606  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.975666  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.975696  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.975901  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.976142  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.976160  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:32.976211  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:32.976432  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:32.976440  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.976647  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:32.976668  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:32.976855  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:32.977003  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:33.059386  310801 ssh_runner.go:195] Run: systemctl --version
	I1205 20:34:33.082247  310801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:34:33.243513  310801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:34:33.249633  310801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:34:33.249718  310801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:34:33.266578  310801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:34:33.266607  310801 start.go:495] detecting cgroup driver to use...
	I1205 20:34:33.266691  310801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:34:33.282457  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:34:33.296831  310801 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:34:33.296976  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:34:33.310872  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:34:33.324245  310801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:34:33.436767  310801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:34:33.589248  310801 docker.go:233] disabling docker service ...
	I1205 20:34:33.589369  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:34:33.604397  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:34:33.617678  310801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:34:33.755936  310801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:34:33.876879  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:34:33.890218  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:34:33.907910  310801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:34:33.907992  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.918057  310801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:34:33.918138  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.928622  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.938873  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.949059  310801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:34:33.959639  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.970025  310801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:34:33.986937  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
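
	For reference, the CRI-O drop-in edits above all target the same file, /etc/crio/crio.conf.d/02-crio.conf. A minimal, hedged way to verify the result by hand (the expected values are taken from the sed commands in the log; the grep filter itself is only illustrative):

	    # Inspect the drop-in that the sed edits above rewrote
	    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	      /etc/crio/crio.conf.d/02-crio.conf
	    # Expected values after the edits (as set above):
	    #   pause_image = "registry.k8s.io/pause:3.10"
	    #   cgroup_manager = "cgroupfs"
	    #   conmon_cgroup = "pod"
	    #   "net.ipv4.ip_unprivileged_port_start=0",
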
	I1205 20:34:33.997151  310801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:34:34.006323  310801 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:34:34.006391  310801 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:34:34.019434  310801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
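
	The sysctl failure a few lines up simply means the br_netfilter module was not yet loaded in the fresh guest; minikube falls back to modprobe and then enables IPv4 forwarding, as the log shows. A hedged manual equivalent of that check-and-load sequence (standard kmod/procfs tooling, not minikube-specific):

	    # If bridge traffic is not yet visible to iptables, load br_netfilter
	    sudo sysctl net.bridge.bridge-nf-call-iptables 2>/dev/null || sudo modprobe br_netfilter
	    # Enable IPv4 forwarding, mirroring the command in the log
	    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
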
	I1205 20:34:34.029027  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:34:34.156535  310801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:34:34.246656  310801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:34:34.246735  310801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:34:34.251273  310801 start.go:563] Will wait 60s for crictl version
	I1205 20:34:34.251340  310801 ssh_runner.go:195] Run: which crictl
	I1205 20:34:34.254861  310801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:34:34.290093  310801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:34:34.290181  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:34:34.319140  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:34:34.349724  310801 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:34:34.351134  310801 main.go:141] libmachine: (ha-689539) Calling .GetIP
	I1205 20:34:34.354155  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:34.354477  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:34.354499  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:34.354753  310801 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:34:34.358724  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:34:34.371098  310801 kubeadm.go:883] updating cluster {Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:34:34.371240  310801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:34:34.371296  310801 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:34:34.405312  310801 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 20:34:34.405419  310801 ssh_runner.go:195] Run: which lz4
	I1205 20:34:34.409438  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I1205 20:34:34.409558  310801 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 20:34:34.413636  310801 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 20:34:34.413680  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 20:34:35.688964  310801 crio.go:462] duration metric: took 1.279440398s to copy over tarball
	I1205 20:34:35.689045  310801 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 20:34:37.772729  310801 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.083628711s)
	I1205 20:34:37.772773  310801 crio.go:469] duration metric: took 2.083775707s to extract the tarball
	I1205 20:34:37.772784  310801 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 20:34:37.810322  310801 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:34:37.853195  310801 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:34:37.853229  310801 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:34:37.853239  310801 kubeadm.go:934] updating node { 192.168.39.220 8443 v1.31.2 crio true true} ...
	I1205 20:34:37.853389  310801 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-689539 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:34:37.853483  310801 ssh_runner.go:195] Run: crio config
	I1205 20:34:37.904941  310801 cni.go:84] Creating CNI manager for ""
	I1205 20:34:37.904967  310801 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 20:34:37.904981  310801 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:34:37.905015  310801 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-689539 NodeName:ha-689539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:34:37.905154  310801 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-689539"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.220"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
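
	The kubeadm, kubelet, and kube-proxy configuration above is later written to /var/tmp/minikube/kubeadm.yaml and passed to kubeadm init. A minimal, hedged way to sanity-check such a file on the node without changing anything (kubeadm's --dry-run flag; binary and config paths taken from the log):

	    # Render what kubeadm would do with this config, without touching the node
	    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml --dry-run
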
	
	I1205 20:34:37.905183  310801 kube-vip.go:115] generating kube-vip config ...
	I1205 20:34:37.905229  310801 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 20:34:37.920877  310801 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 20:34:37.921012  310801 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
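
	The static pod above is what holds the HA virtual IP 192.168.39.254 (APIServerHAVIP) on eth0 of whichever control-plane node wins the plndr-cp-lock lease; the ip_vs* modules loaded a few lines earlier are what its control-plane load balancing relies on. A few hedged checks from inside the node (standard iproute2/kmod/crictl commands, not minikube-specific):

	    # The VIP appears on eth0 of the current lease holder
	    ip addr show dev eth0 | grep 192.168.39.254
	    # IPVS modules loaded for kube-vip's load-balancing mode
	    lsmod | grep -E '^ip_vs|^nf_conntrack'
	    # The kube-vip static pod as seen by the CRI runtime
	    sudo crictl ps --name kube-vip
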
	I1205 20:34:37.921087  310801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:34:37.930861  310801 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:34:37.930952  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1205 20:34:37.940283  310801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1205 20:34:37.956877  310801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:34:37.973504  310801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1205 20:34:37.990145  310801 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I1205 20:34:38.006265  310801 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 20:34:38.010189  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:34:38.022257  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:34:38.140067  310801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:34:38.157890  310801 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539 for IP: 192.168.39.220
	I1205 20:34:38.157932  310801 certs.go:194] generating shared ca certs ...
	I1205 20:34:38.157956  310801 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.158149  310801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 20:34:38.158208  310801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 20:34:38.158222  310801 certs.go:256] generating profile certs ...
	I1205 20:34:38.158295  310801 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key
	I1205 20:34:38.158314  310801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt with IP's: []
	I1205 20:34:38.310974  310801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt ...
	I1205 20:34:38.311018  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt: {Name:mkf3aecb8b9ad227608c6977c2ad30cfc55949b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.311241  310801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key ...
	I1205 20:34:38.311266  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key: {Name:mkfab3a0d79e1baa864757b84edfb7968d976df8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.311382  310801 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.4e36e772
	I1205 20:34:38.311402  310801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.4e36e772 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220 192.168.39.254]
	I1205 20:34:38.414671  310801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.4e36e772 ...
	I1205 20:34:38.414714  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.4e36e772: {Name:mkc29737ec8270e2af482fa3e0afb3df1551e296 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.414925  310801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.4e36e772 ...
	I1205 20:34:38.414944  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.4e36e772: {Name:mk5a1762b7078753229c19ae4d408dd983181bad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.415108  310801 certs.go:381] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.4e36e772 -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt
	I1205 20:34:38.415228  310801 certs.go:385] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.4e36e772 -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key
	I1205 20:34:38.415320  310801 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key
	I1205 20:34:38.415337  310801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt with IP's: []
	I1205 20:34:38.595265  310801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt ...
	I1205 20:34:38.595307  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt: {Name:mke4b60d010e9a42985a4147d8ca20fd58cfe926 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.595513  310801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key ...
	I1205 20:34:38.595526  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key: {Name:mkc40847c87fbb64accdbdfed18b0a1220dd4fb2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:38.595607  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:34:38.595627  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:34:38.595641  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:34:38.595656  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:34:38.595671  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 20:34:38.595687  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 20:34:38.595702  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 20:34:38.595721  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 20:34:38.595781  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 20:34:38.595820  310801 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 20:34:38.595832  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:34:38.595867  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 20:34:38.595927  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:34:38.595965  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 20:34:38.596013  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:34:38.596047  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:34:38.596065  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem -> /usr/share/ca-certificates/300765.pem
	I1205 20:34:38.596080  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /usr/share/ca-certificates/3007652.pem
	I1205 20:34:38.596679  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:34:38.621836  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:34:38.645971  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:34:38.669572  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:34:38.692394  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 20:34:38.714950  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:34:38.737673  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:34:38.760143  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:34:38.782837  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:34:38.804959  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 20:34:38.827699  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 20:34:38.850292  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:34:38.866443  310801 ssh_runner.go:195] Run: openssl version
	I1205 20:34:38.872267  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:34:38.883530  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:34:38.887895  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:34:38.887977  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:34:38.893617  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:34:38.906999  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 20:34:38.918595  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 20:34:38.924117  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 20:34:38.924185  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 20:34:38.932047  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 20:34:38.945495  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 20:34:38.961962  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 20:34:38.966385  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 20:34:38.966443  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 20:34:38.971854  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
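
	The link names b5213941.0, 51391683.0 and 3ec20f2e.0 above are OpenSSL subject hashes, which is exactly what the preceding openssl x509 -hash -noout invocations compute; OpenSSL resolves CAs by looking up <hash>.0 symlinks under /etc/ssl/certs. A short sketch of the same convention (file and hash taken from the log):

	    # Print the subject hash OpenSSL uses for its lookup symlinks
	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941
	    # The corresponding lookup symlink created above
	    ls -l /etc/ssl/certs/b5213941.0
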
	I1205 20:34:38.983000  310801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:34:38.987127  310801 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 20:34:38.987198  310801 kubeadm.go:392] StartCluster: {Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:34:38.987278  310801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:34:38.987360  310801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:34:39.023266  310801 cri.go:89] found id: ""
	I1205 20:34:39.023363  310801 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 20:34:39.033877  310801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 20:34:39.044224  310801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 20:34:39.054571  310801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 20:34:39.054597  310801 kubeadm.go:157] found existing configuration files:
	
	I1205 20:34:39.054653  310801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 20:34:39.064431  310801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 20:34:39.064513  310801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 20:34:39.074366  310801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 20:34:39.083912  310801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 20:34:39.083984  310801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 20:34:39.093938  310801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 20:34:39.103398  310801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 20:34:39.103465  310801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 20:34:39.113094  310801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 20:34:39.122507  310801 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 20:34:39.122597  310801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 20:34:39.132005  310801 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 20:34:39.228908  310801 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 20:34:39.229049  310801 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 20:34:39.329735  310801 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 20:34:39.329925  310801 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 20:34:39.330069  310801 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 20:34:39.340103  310801 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 20:34:39.373910  310801 out.go:235]   - Generating certificates and keys ...
	I1205 20:34:39.374072  310801 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 20:34:39.374147  310801 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 20:34:39.462096  310801 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 20:34:39.625431  310801 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 20:34:39.899737  310801 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 20:34:40.026923  310801 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 20:34:40.326605  310801 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 20:34:40.326736  310801 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-689539 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I1205 20:34:40.487273  310801 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 20:34:40.487463  310801 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-689539 localhost] and IPs [192.168.39.220 127.0.0.1 ::1]
	I1205 20:34:41.025029  310801 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 20:34:41.081102  310801 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 20:34:41.372777  310801 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 20:34:41.372851  310801 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 20:34:41.470469  310801 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 20:34:41.550016  310801 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 20:34:41.829563  310801 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 20:34:41.903888  310801 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 20:34:42.075688  310801 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 20:34:42.076191  310801 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 20:34:42.079642  310801 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 20:34:42.116791  310801 out.go:235]   - Booting up control plane ...
	I1205 20:34:42.116956  310801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 20:34:42.117092  310801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 20:34:42.117208  310801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 20:34:42.117347  310801 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 20:34:42.117444  310801 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 20:34:42.117492  310801 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 20:34:42.242074  310801 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 20:34:42.242211  310801 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 20:34:42.743099  310801 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.406858ms
	I1205 20:34:42.743201  310801 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 20:34:48.715396  310801 kubeadm.go:310] [api-check] The API server is healthy after 5.976028105s
	I1205 20:34:48.727254  310801 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 20:34:48.744015  310801 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 20:34:49.271812  310801 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 20:34:49.272046  310801 kubeadm.go:310] [mark-control-plane] Marking the node ha-689539 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 20:34:49.283178  310801 kubeadm.go:310] [bootstrap-token] Using token: ynd0vv.39hctrjjdwln7xrk
	I1205 20:34:49.284635  310801 out.go:235]   - Configuring RBAC rules ...
	I1205 20:34:49.284805  310801 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 20:34:49.298869  310801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 20:34:49.307342  310801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 20:34:49.311034  310801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 20:34:49.314220  310801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 20:34:49.318275  310801 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 20:34:49.336336  310801 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 20:34:49.603608  310801 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 20:34:50.123229  310801 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 20:34:50.123255  310801 kubeadm.go:310] 
	I1205 20:34:50.123360  310801 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 20:34:50.123388  310801 kubeadm.go:310] 
	I1205 20:34:50.123496  310801 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 20:34:50.123533  310801 kubeadm.go:310] 
	I1205 20:34:50.123584  310801 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 20:34:50.123672  310801 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 20:34:50.123755  310801 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 20:34:50.123771  310801 kubeadm.go:310] 
	I1205 20:34:50.123856  310801 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 20:34:50.123868  310801 kubeadm.go:310] 
	I1205 20:34:50.123942  310801 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 20:34:50.123957  310801 kubeadm.go:310] 
	I1205 20:34:50.124045  310801 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 20:34:50.124156  310801 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 20:34:50.124256  310801 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 20:34:50.124269  310801 kubeadm.go:310] 
	I1205 20:34:50.124397  310801 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 20:34:50.124510  310801 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 20:34:50.124522  310801 kubeadm.go:310] 
	I1205 20:34:50.124645  310801 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token ynd0vv.39hctrjjdwln7xrk \
	I1205 20:34:50.124778  310801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 \
	I1205 20:34:50.124879  310801 kubeadm.go:310] 	--control-plane 
	I1205 20:34:50.124896  310801 kubeadm.go:310] 
	I1205 20:34:50.125023  310801 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 20:34:50.125040  310801 kubeadm.go:310] 
	I1205 20:34:50.125138  310801 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token ynd0vv.39hctrjjdwln7xrk \
	I1205 20:34:50.125303  310801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 
	I1205 20:34:50.125442  310801 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
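
	The --discovery-token-ca-cert-hash printed by kubeadm above is the SHA-256 of the cluster CA's public key. If a joining node ever needs it recomputed, the usual kubeadm-documented pipeline is sketched below (assuming the CA sits at the certificatesDir from the config above, /var/lib/minikube/certs, and is an RSA key as in this run):

	    # Recompute the discovery-token CA cert hash on a control-plane node
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
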
	I1205 20:34:50.125462  310801 cni.go:84] Creating CNI manager for ""
	I1205 20:34:50.125470  310801 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I1205 20:34:50.127293  310801 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1205 20:34:50.128597  310801 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1205 20:34:50.133712  310801 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1205 20:34:50.133735  310801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1205 20:34:50.151910  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 20:34:50.498891  310801 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 20:34:50.498983  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:50.498995  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-689539 minikube.k8s.io/updated_at=2024_12_05T20_34_50_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=ha-689539 minikube.k8s.io/primary=true
	I1205 20:34:50.513638  310801 ops.go:34] apiserver oom_adj: -16
	I1205 20:34:50.590747  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:51.091486  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:51.591491  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:52.091553  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:52.591289  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:53.091686  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 20:34:53.194917  310801 kubeadm.go:1113] duration metric: took 2.696013148s to wait for elevateKubeSystemPrivileges
	I1205 20:34:53.194977  310801 kubeadm.go:394] duration metric: took 14.207781964s to StartCluster
	I1205 20:34:53.195006  310801 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:53.195117  310801 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:34:53.198426  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:34:53.198793  310801 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:34:53.198831  310801 start.go:241] waiting for startup goroutines ...
	I1205 20:34:53.198863  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 20:34:53.198850  310801 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 20:34:53.198946  310801 addons.go:69] Setting storage-provisioner=true in profile "ha-689539"
	I1205 20:34:53.198964  310801 addons.go:69] Setting default-storageclass=true in profile "ha-689539"
	I1205 20:34:53.198979  310801 addons.go:234] Setting addon storage-provisioner=true in "ha-689539"
	I1205 20:34:53.198988  310801 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-689539"
	I1205 20:34:53.199021  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:34:53.199090  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:34:53.199551  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.199570  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.199599  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:53.199609  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:53.215764  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42471
	I1205 20:34:53.216062  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38369
	I1205 20:34:53.216436  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:53.216527  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:53.217017  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:53.217050  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:53.217168  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:53.217198  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:53.217403  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:53.217563  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:53.217568  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:34:53.218173  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.218228  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:53.219954  310801 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:34:53.220226  310801 kapi.go:59] client config for ha-689539: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt", KeyFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key", CAFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 20:34:53.220737  310801 cert_rotation.go:140] Starting client certificate rotation controller
	I1205 20:34:53.220963  310801 addons.go:234] Setting addon default-storageclass=true in "ha-689539"
	I1205 20:34:53.221000  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:34:53.221268  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.221303  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:53.235358  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40721
	I1205 20:34:53.235938  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:53.236563  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:53.236595  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:53.236975  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:53.237206  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:34:53.237645  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41495
	I1205 20:34:53.238195  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:53.238727  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:53.238753  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:53.239124  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:53.239183  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:53.239643  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.239697  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:53.241617  310801 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 20:34:53.243036  310801 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:34:53.243058  310801 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 20:34:53.243080  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:53.247044  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:53.247514  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:53.247542  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:53.247718  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:53.248011  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:53.248218  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:53.248413  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:53.257997  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40255
	I1205 20:34:53.258521  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:53.259183  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:53.259218  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:53.259691  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:53.259961  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:34:53.262068  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:34:53.262345  310801 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 20:34:53.262363  310801 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 20:34:53.262386  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:34:53.265363  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:53.265818  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:34:53.265848  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:34:53.266018  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:34:53.266213  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:34:53.266327  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:34:53.266435  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:34:53.311906  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 20:34:53.428778  310801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 20:34:53.457287  310801 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 20:34:53.655441  310801 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
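	The sed pipeline above injects a hosts block ahead of the forward plugin and a log directive ahead of errors in the kube-system coredns ConfigMap, so that host.minikube.internal resolves to the host's gateway address from inside the cluster. A minimal sketch of the resulting Corefile fragment, assuming the otherwise stock CoreDNS plugins are left untouched (elided with "..."):
	
		.:53 {
		    log
		    errors
		    ...                # other stock plugins (health, ready, kubernetes, cache, ...) omitted
		    hosts {
		       192.168.39.1 host.minikube.internal
		       fallthrough
		    }
		    forward . /etc/resolv.conf
		    ...
		}
	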
	I1205 20:34:53.958432  310801 main.go:141] libmachine: Making call to close driver server
	I1205 20:34:53.958460  310801 main.go:141] libmachine: (ha-689539) Calling .Close
	I1205 20:34:53.958502  310801 main.go:141] libmachine: Making call to close driver server
	I1205 20:34:53.958541  310801 main.go:141] libmachine: (ha-689539) Calling .Close
	I1205 20:34:53.958824  310801 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:34:53.958842  310801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:34:53.958852  310801 main.go:141] libmachine: Making call to close driver server
	I1205 20:34:53.958860  310801 main.go:141] libmachine: (ha-689539) Calling .Close
	I1205 20:34:53.958920  310801 main.go:141] libmachine: (ha-689539) DBG | Closing plugin on server side
	I1205 20:34:53.958929  310801 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:34:53.958944  310801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:34:53.958951  310801 main.go:141] libmachine: Making call to close driver server
	I1205 20:34:53.958957  310801 main.go:141] libmachine: (ha-689539) Calling .Close
	I1205 20:34:53.959133  310801 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:34:53.959149  310801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:34:53.959214  310801 main.go:141] libmachine: (ha-689539) DBG | Closing plugin on server side
	I1205 20:34:53.959271  310801 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:34:53.959300  310801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:34:53.959388  310801 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 20:34:53.959421  310801 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 20:34:53.959540  310801 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I1205 20:34:53.959549  310801 round_trippers.go:469] Request Headers:
	I1205 20:34:53.959559  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:34:53.959569  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:34:53.981877  310801 round_trippers.go:574] Response Status: 200 OK in 22 milliseconds
	I1205 20:34:53.982523  310801 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I1205 20:34:53.982543  310801 round_trippers.go:469] Request Headers:
	I1205 20:34:53.982553  310801 round_trippers.go:473]     Content-Type: application/json
	I1205 20:34:53.982558  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:34:53.982562  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:34:53.985387  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:34:53.985542  310801 main.go:141] libmachine: Making call to close driver server
	I1205 20:34:53.985554  310801 main.go:141] libmachine: (ha-689539) Calling .Close
	I1205 20:34:53.985883  310801 main.go:141] libmachine: Successfully made call to close driver server
	I1205 20:34:53.985918  310801 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 20:34:53.985939  310801 main.go:141] libmachine: (ha-689539) DBG | Closing plugin on server side
	I1205 20:34:53.987986  310801 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1205 20:34:53.989183  310801 addons.go:510] duration metric: took 790.33722ms for enable addons: enabled=[storage-provisioner default-storageclass]
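	Only the two default-on addons are applied at this point; everything else in the toEnable map above stays disabled. Any of those addons can be turned on later against the same profile, for example (illustrative only, not part of this run):
	
		minikube addons enable metrics-server -p ha-689539
	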
	I1205 20:34:53.989228  310801 start.go:246] waiting for cluster config update ...
	I1205 20:34:53.989258  310801 start.go:255] writing updated cluster config ...
	I1205 20:34:53.991007  310801 out.go:201] 
	I1205 20:34:53.992546  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:34:53.992653  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:34:53.994377  310801 out.go:177] * Starting "ha-689539-m02" control-plane node in "ha-689539" cluster
	I1205 20:34:53.995700  310801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:34:53.995727  310801 cache.go:56] Caching tarball of preloaded images
	I1205 20:34:53.995849  310801 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:34:53.995862  310801 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:34:53.995934  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:34:53.996107  310801 start.go:360] acquireMachinesLock for ha-689539-m02: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:34:53.996153  310801 start.go:364] duration metric: took 23.521µs to acquireMachinesLock for "ha-689539-m02"
	I1205 20:34:53.996172  310801 start.go:93] Provisioning new machine with config: &{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:34:53.996237  310801 start.go:125] createHost starting for "m02" (driver="kvm2")
	I1205 20:34:53.998557  310801 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 20:34:53.998670  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:34:53.998722  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:34:54.015008  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43505
	I1205 20:34:54.015521  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:34:54.016066  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:34:54.016091  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:34:54.016507  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:34:54.016709  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetMachineName
	I1205 20:34:54.016933  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:34:54.017199  310801 start.go:159] libmachine.API.Create for "ha-689539" (driver="kvm2")
	I1205 20:34:54.017236  310801 client.go:168] LocalClient.Create starting
	I1205 20:34:54.017303  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem
	I1205 20:34:54.017352  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:34:54.017375  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:34:54.017449  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem
	I1205 20:34:54.017479  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:34:54.017495  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:34:54.017521  310801 main.go:141] libmachine: Running pre-create checks...
	I1205 20:34:54.017533  310801 main.go:141] libmachine: (ha-689539-m02) Calling .PreCreateCheck
	I1205 20:34:54.017789  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetConfigRaw
	I1205 20:34:54.018296  310801 main.go:141] libmachine: Creating machine...
	I1205 20:34:54.018313  310801 main.go:141] libmachine: (ha-689539-m02) Calling .Create
	I1205 20:34:54.018519  310801 main.go:141] libmachine: (ha-689539-m02) Creating KVM machine...
	I1205 20:34:54.019903  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found existing default KVM network
	I1205 20:34:54.020058  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found existing private KVM network mk-ha-689539
	I1205 20:34:54.020167  310801 main.go:141] libmachine: (ha-689539-m02) Setting up store path in /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02 ...
	I1205 20:34:54.020190  310801 main.go:141] libmachine: (ha-689539-m02) Building disk image from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 20:34:54.020273  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:54.020159  311180 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:34:54.020403  310801 main.go:141] libmachine: (ha-689539-m02) Downloading /home/jenkins/minikube-integration/20053-293485/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 20:34:54.317847  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:54.317662  311180 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa...
	I1205 20:34:54.529086  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:54.528946  311180 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/ha-689539-m02.rawdisk...
	I1205 20:34:54.529124  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Writing magic tar header
	I1205 20:34:54.529140  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Writing SSH key tar header
	I1205 20:34:54.529158  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:54.529070  311180 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02 ...
	I1205 20:34:54.529265  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02
	I1205 20:34:54.529295  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02 (perms=drwx------)
	I1205 20:34:54.529308  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines
	I1205 20:34:54.529327  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:34:54.529337  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485
	I1205 20:34:54.529349  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 20:34:54.529360  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home/jenkins
	I1205 20:34:54.529372  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines (perms=drwxr-xr-x)
	I1205 20:34:54.529383  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Checking permissions on dir: /home
	I1205 20:34:54.529398  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube (perms=drwxr-xr-x)
	I1205 20:34:54.529416  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485 (perms=drwxrwxr-x)
	I1205 20:34:54.529429  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 20:34:54.529443  310801 main.go:141] libmachine: (ha-689539-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 20:34:54.529454  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Skipping /home - not owner
	I1205 20:34:54.529461  310801 main.go:141] libmachine: (ha-689539-m02) Creating domain...
	I1205 20:34:54.530562  310801 main.go:141] libmachine: (ha-689539-m02) define libvirt domain using xml: 
	I1205 20:34:54.530603  310801 main.go:141] libmachine: (ha-689539-m02) <domain type='kvm'>
	I1205 20:34:54.530622  310801 main.go:141] libmachine: (ha-689539-m02)   <name>ha-689539-m02</name>
	I1205 20:34:54.530636  310801 main.go:141] libmachine: (ha-689539-m02)   <memory unit='MiB'>2200</memory>
	I1205 20:34:54.530645  310801 main.go:141] libmachine: (ha-689539-m02)   <vcpu>2</vcpu>
	I1205 20:34:54.530652  310801 main.go:141] libmachine: (ha-689539-m02)   <features>
	I1205 20:34:54.530662  310801 main.go:141] libmachine: (ha-689539-m02)     <acpi/>
	I1205 20:34:54.530667  310801 main.go:141] libmachine: (ha-689539-m02)     <apic/>
	I1205 20:34:54.530672  310801 main.go:141] libmachine: (ha-689539-m02)     <pae/>
	I1205 20:34:54.530676  310801 main.go:141] libmachine: (ha-689539-m02)     
	I1205 20:34:54.530682  310801 main.go:141] libmachine: (ha-689539-m02)   </features>
	I1205 20:34:54.530687  310801 main.go:141] libmachine: (ha-689539-m02)   <cpu mode='host-passthrough'>
	I1205 20:34:54.530691  310801 main.go:141] libmachine: (ha-689539-m02)   
	I1205 20:34:54.530700  310801 main.go:141] libmachine: (ha-689539-m02)   </cpu>
	I1205 20:34:54.530705  310801 main.go:141] libmachine: (ha-689539-m02)   <os>
	I1205 20:34:54.530714  310801 main.go:141] libmachine: (ha-689539-m02)     <type>hvm</type>
	I1205 20:34:54.530720  310801 main.go:141] libmachine: (ha-689539-m02)     <boot dev='cdrom'/>
	I1205 20:34:54.530727  310801 main.go:141] libmachine: (ha-689539-m02)     <boot dev='hd'/>
	I1205 20:34:54.530733  310801 main.go:141] libmachine: (ha-689539-m02)     <bootmenu enable='no'/>
	I1205 20:34:54.530737  310801 main.go:141] libmachine: (ha-689539-m02)   </os>
	I1205 20:34:54.530742  310801 main.go:141] libmachine: (ha-689539-m02)   <devices>
	I1205 20:34:54.530747  310801 main.go:141] libmachine: (ha-689539-m02)     <disk type='file' device='cdrom'>
	I1205 20:34:54.530762  310801 main.go:141] libmachine: (ha-689539-m02)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/boot2docker.iso'/>
	I1205 20:34:54.530777  310801 main.go:141] libmachine: (ha-689539-m02)       <target dev='hdc' bus='scsi'/>
	I1205 20:34:54.530792  310801 main.go:141] libmachine: (ha-689539-m02)       <readonly/>
	I1205 20:34:54.530801  310801 main.go:141] libmachine: (ha-689539-m02)     </disk>
	I1205 20:34:54.530835  310801 main.go:141] libmachine: (ha-689539-m02)     <disk type='file' device='disk'>
	I1205 20:34:54.530866  310801 main.go:141] libmachine: (ha-689539-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 20:34:54.530888  310801 main.go:141] libmachine: (ha-689539-m02)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/ha-689539-m02.rawdisk'/>
	I1205 20:34:54.530900  310801 main.go:141] libmachine: (ha-689539-m02)       <target dev='hda' bus='virtio'/>
	I1205 20:34:54.530910  310801 main.go:141] libmachine: (ha-689539-m02)     </disk>
	I1205 20:34:54.530920  310801 main.go:141] libmachine: (ha-689539-m02)     <interface type='network'>
	I1205 20:34:54.530930  310801 main.go:141] libmachine: (ha-689539-m02)       <source network='mk-ha-689539'/>
	I1205 20:34:54.530940  310801 main.go:141] libmachine: (ha-689539-m02)       <model type='virtio'/>
	I1205 20:34:54.530948  310801 main.go:141] libmachine: (ha-689539-m02)     </interface>
	I1205 20:34:54.530963  310801 main.go:141] libmachine: (ha-689539-m02)     <interface type='network'>
	I1205 20:34:54.531000  310801 main.go:141] libmachine: (ha-689539-m02)       <source network='default'/>
	I1205 20:34:54.531021  310801 main.go:141] libmachine: (ha-689539-m02)       <model type='virtio'/>
	I1205 20:34:54.531046  310801 main.go:141] libmachine: (ha-689539-m02)     </interface>
	I1205 20:34:54.531060  310801 main.go:141] libmachine: (ha-689539-m02)     <serial type='pty'>
	I1205 20:34:54.531070  310801 main.go:141] libmachine: (ha-689539-m02)       <target port='0'/>
	I1205 20:34:54.531080  310801 main.go:141] libmachine: (ha-689539-m02)     </serial>
	I1205 20:34:54.531092  310801 main.go:141] libmachine: (ha-689539-m02)     <console type='pty'>
	I1205 20:34:54.531101  310801 main.go:141] libmachine: (ha-689539-m02)       <target type='serial' port='0'/>
	I1205 20:34:54.531113  310801 main.go:141] libmachine: (ha-689539-m02)     </console>
	I1205 20:34:54.531124  310801 main.go:141] libmachine: (ha-689539-m02)     <rng model='virtio'>
	I1205 20:34:54.531149  310801 main.go:141] libmachine: (ha-689539-m02)       <backend model='random'>/dev/random</backend>
	I1205 20:34:54.531171  310801 main.go:141] libmachine: (ha-689539-m02)     </rng>
	I1205 20:34:54.531193  310801 main.go:141] libmachine: (ha-689539-m02)     
	I1205 20:34:54.531210  310801 main.go:141] libmachine: (ha-689539-m02)     
	I1205 20:34:54.531219  310801 main.go:141] libmachine: (ha-689539-m02)   </devices>
	I1205 20:34:54.531228  310801 main.go:141] libmachine: (ha-689539-m02) </domain>
	I1205 20:34:54.531253  310801 main.go:141] libmachine: (ha-689539-m02) 
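	The XML dumped above is the domain definition the kvm2 driver hands to libvirt. Outside of minikube, the same definition could be registered and booted by hand with virsh, roughly as follows (the file path is illustrative; minikube performs the equivalent calls through the libvirt API rather than the CLI):
	
		virsh define /path/to/ha-689539-m02.xml   # register the domain from the XML above
		virsh start ha-689539-m02                 # boot it, matching the "Creating domain..." step below
	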
	I1205 20:34:54.538318  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:db:6c:41 in network default
	I1205 20:34:54.538874  310801 main.go:141] libmachine: (ha-689539-m02) Ensuring networks are active...
	I1205 20:34:54.538905  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:54.539900  310801 main.go:141] libmachine: (ha-689539-m02) Ensuring network default is active
	I1205 20:34:54.540256  310801 main.go:141] libmachine: (ha-689539-m02) Ensuring network mk-ha-689539 is active
	I1205 20:34:54.540685  310801 main.go:141] libmachine: (ha-689539-m02) Getting domain xml...
	I1205 20:34:54.541702  310801 main.go:141] libmachine: (ha-689539-m02) Creating domain...
	I1205 20:34:55.795769  310801 main.go:141] libmachine: (ha-689539-m02) Waiting to get IP...
	I1205 20:34:55.796704  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:55.797107  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:55.797137  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:55.797080  311180 retry.go:31] will retry after 248.666925ms: waiting for machine to come up
	I1205 20:34:56.047775  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:56.048308  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:56.048345  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:56.048228  311180 retry.go:31] will retry after 275.164049ms: waiting for machine to come up
	I1205 20:34:56.324858  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:56.325265  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:56.325293  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:56.325230  311180 retry.go:31] will retry after 471.642082ms: waiting for machine to come up
	I1205 20:34:56.798901  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:56.799411  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:56.799445  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:56.799337  311180 retry.go:31] will retry after 372.986986ms: waiting for machine to come up
	I1205 20:34:57.173842  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:57.174284  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:57.174315  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:57.174243  311180 retry.go:31] will retry after 491.328215ms: waiting for machine to come up
	I1205 20:34:57.666917  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:57.667363  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:57.667388  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:57.667340  311180 retry.go:31] will retry after 701.698041ms: waiting for machine to come up
	I1205 20:34:58.370293  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:58.370782  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:58.370813  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:58.370725  311180 retry.go:31] will retry after 750.048133ms: waiting for machine to come up
	I1205 20:34:59.121998  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:34:59.122452  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:34:59.122482  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:34:59.122416  311180 retry.go:31] will retry after 1.373917427s: waiting for machine to come up
	I1205 20:35:00.498001  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:00.498527  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:00.498564  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:00.498461  311180 retry.go:31] will retry after 1.273603268s: waiting for machine to come up
	I1205 20:35:01.773536  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:01.774024  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:01.774055  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:01.773976  311180 retry.go:31] will retry after 1.863052543s: waiting for machine to come up
	I1205 20:35:03.640228  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:03.640744  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:03.640780  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:03.640681  311180 retry.go:31] will retry after 2.126872214s: waiting for machine to come up
	I1205 20:35:05.768939  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:05.769465  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:05.769495  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:05.769419  311180 retry.go:31] will retry after 2.492593838s: waiting for machine to come up
	I1205 20:35:08.265013  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:08.265518  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:08.265557  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:08.265445  311180 retry.go:31] will retry after 4.136586499s: waiting for machine to come up
	I1205 20:35:12.405674  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:12.406165  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find current IP address of domain ha-689539-m02 in network mk-ha-689539
	I1205 20:35:12.406195  310801 main.go:141] libmachine: (ha-689539-m02) DBG | I1205 20:35:12.406099  311180 retry.go:31] will retry after 4.175170751s: waiting for machine to come up
	I1205 20:35:16.583008  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:16.583448  310801 main.go:141] libmachine: (ha-689539-m02) Found IP for machine: 192.168.39.224
	I1205 20:35:16.583483  310801 main.go:141] libmachine: (ha-689539-m02) Reserving static IP address...
	I1205 20:35:16.583508  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has current primary IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:16.583773  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find host DHCP lease matching {name: "ha-689539-m02", mac: "52:54:00:01:ca:45", ip: "192.168.39.224"} in network mk-ha-689539
	I1205 20:35:16.666774  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Getting to WaitForSSH function...
	I1205 20:35:16.666819  310801 main.go:141] libmachine: (ha-689539-m02) Reserved static IP address: 192.168.39.224
	I1205 20:35:16.666833  310801 main.go:141] libmachine: (ha-689539-m02) Waiting for SSH to be available...
	I1205 20:35:16.669680  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:16.670217  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539
	I1205 20:35:16.670248  310801 main.go:141] libmachine: (ha-689539-m02) DBG | unable to find defined IP address of network mk-ha-689539 interface with MAC address 52:54:00:01:ca:45
	I1205 20:35:16.670412  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Using SSH client type: external
	I1205 20:35:16.670440  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa (-rw-------)
	I1205 20:35:16.670473  310801 main.go:141] libmachine: (ha-689539-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:35:16.670490  310801 main.go:141] libmachine: (ha-689539-m02) DBG | About to run SSH command:
	I1205 20:35:16.670506  310801 main.go:141] libmachine: (ha-689539-m02) DBG | exit 0
	I1205 20:35:16.675197  310801 main.go:141] libmachine: (ha-689539-m02) DBG | SSH cmd err, output: exit status 255: 
	I1205 20:35:16.675236  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1205 20:35:16.675246  310801 main.go:141] libmachine: (ha-689539-m02) DBG | command : exit 0
	I1205 20:35:16.675253  310801 main.go:141] libmachine: (ha-689539-m02) DBG | err     : exit status 255
	I1205 20:35:16.675269  310801 main.go:141] libmachine: (ha-689539-m02) DBG | output  : 
	I1205 20:35:19.675465  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Getting to WaitForSSH function...
	I1205 20:35:19.678124  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.678615  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:19.678646  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.678752  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Using SSH client type: external
	I1205 20:35:19.678781  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa (-rw-------)
	I1205 20:35:19.678817  310801 main.go:141] libmachine: (ha-689539-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.224 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:35:19.678840  310801 main.go:141] libmachine: (ha-689539-m02) DBG | About to run SSH command:
	I1205 20:35:19.678857  310801 main.go:141] libmachine: (ha-689539-m02) DBG | exit 0
	I1205 20:35:19.805836  310801 main.go:141] libmachine: (ha-689539-m02) DBG | SSH cmd err, output: <nil>: 
	I1205 20:35:19.806152  310801 main.go:141] libmachine: (ha-689539-m02) KVM machine creation complete!
	I1205 20:35:19.806464  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetConfigRaw
	I1205 20:35:19.807084  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:19.807313  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:19.807474  310801 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 20:35:19.807492  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetState
	I1205 20:35:19.808787  310801 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 20:35:19.808804  310801 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 20:35:19.808811  310801 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 20:35:19.808818  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:19.811344  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.811714  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:19.811743  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.811928  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:19.812132  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:19.812273  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:19.812422  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:19.812622  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:19.812860  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:19.812871  310801 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 20:35:19.921262  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:35:19.921299  310801 main.go:141] libmachine: Detecting the provisioner...
	I1205 20:35:19.921312  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:19.924600  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.925051  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:19.925075  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:19.925275  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:19.925497  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:19.925651  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:19.925794  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:19.925996  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:19.926221  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:19.926235  310801 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 20:35:20.039067  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 20:35:20.039180  310801 main.go:141] libmachine: found compatible host: buildroot
	I1205 20:35:20.039192  310801 main.go:141] libmachine: Provisioning with buildroot...
	I1205 20:35:20.039205  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetMachineName
	I1205 20:35:20.039552  310801 buildroot.go:166] provisioning hostname "ha-689539-m02"
	I1205 20:35:20.039589  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetMachineName
	I1205 20:35:20.039855  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.043233  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.043789  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.043820  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.044027  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:20.044236  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.044433  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.044659  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:20.044843  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:20.045030  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:20.045042  310801 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-689539-m02 && echo "ha-689539-m02" | sudo tee /etc/hostname
	I1205 20:35:20.173519  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-689539-m02
	
	I1205 20:35:20.173562  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.176643  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.176967  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.176994  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.177264  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:20.177464  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.177721  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.177868  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:20.178085  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:20.178312  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:20.178329  310801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-689539-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-689539-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-689539-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:35:20.299145  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:35:20.299194  310801 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 20:35:20.299221  310801 buildroot.go:174] setting up certificates
	I1205 20:35:20.299251  310801 provision.go:84] configureAuth start
	I1205 20:35:20.299278  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetMachineName
	I1205 20:35:20.299618  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetIP
	I1205 20:35:20.302873  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.303197  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.303234  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.303352  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.305836  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.306274  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.306298  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.306450  310801 provision.go:143] copyHostCerts
	I1205 20:35:20.306489  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:35:20.306536  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 20:35:20.306547  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:35:20.306613  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 20:35:20.306694  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:35:20.306712  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 20:35:20.306719  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:35:20.306743  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 20:35:20.306790  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:35:20.306807  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 20:35:20.306813  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:35:20.306832  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 20:35:20.306880  310801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.ha-689539-m02 san=[127.0.0.1 192.168.39.224 ha-689539-m02 localhost minikube]
	I1205 20:35:20.462180  310801 provision.go:177] copyRemoteCerts
	I1205 20:35:20.462244  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:35:20.462273  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.465164  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.465498  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.465526  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.465765  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:20.465979  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.466125  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:20.466256  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa Username:docker}
	I1205 20:35:20.552142  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 20:35:20.552248  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:35:20.577611  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 20:35:20.577693  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 20:35:20.602829  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 20:35:20.602927  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 20:35:20.629296  310801 provision.go:87] duration metric: took 330.013316ms to configureAuth
	I1205 20:35:20.629334  310801 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:35:20.629554  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:35:20.629672  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.632608  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.633010  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.633046  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.633219  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:20.633418  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.633617  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.633785  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:20.634021  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:20.634203  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:20.634221  310801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:35:20.861660  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:35:20.861695  310801 main.go:141] libmachine: Checking connection to Docker...
	I1205 20:35:20.861706  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetURL
	I1205 20:35:20.863182  310801 main.go:141] libmachine: (ha-689539-m02) DBG | Using libvirt version 6000000
	I1205 20:35:20.865580  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.866002  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.866022  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.866305  310801 main.go:141] libmachine: Docker is up and running!
	I1205 20:35:20.866329  310801 main.go:141] libmachine: Reticulating splines...
	I1205 20:35:20.866337  310801 client.go:171] duration metric: took 26.849092016s to LocalClient.Create
	I1205 20:35:20.866366  310801 start.go:167] duration metric: took 26.849169654s to libmachine.API.Create "ha-689539"
	I1205 20:35:20.866385  310801 start.go:293] postStartSetup for "ha-689539-m02" (driver="kvm2")
	I1205 20:35:20.866396  310801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:35:20.866415  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:20.866737  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:35:20.866782  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:20.869117  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.869511  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.869539  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.869712  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:20.869922  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:20.870094  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:20.870213  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa Username:docker}
	I1205 20:35:20.956165  310801 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:35:20.960554  310801 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:35:20.960593  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 20:35:20.960663  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 20:35:20.960745  310801 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 20:35:20.960756  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /etc/ssl/certs/3007652.pem
	I1205 20:35:20.960845  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:35:20.970171  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:35:20.993469  310801 start.go:296] duration metric: took 127.065366ms for postStartSetup
	I1205 20:35:20.993548  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetConfigRaw
	I1205 20:35:20.994261  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetIP
	I1205 20:35:20.996956  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.997403  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:20.997431  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:20.997694  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:35:20.997894  310801 start.go:128] duration metric: took 27.001645944s to createHost
	I1205 20:35:20.997947  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:21.000356  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.000768  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:21.000793  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.000932  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:21.001164  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:21.001372  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:21.001567  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:21.001800  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:35:21.002023  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.224 22 <nil> <nil>}
	I1205 20:35:21.002035  310801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:35:21.114783  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430921.091468988
	
	I1205 20:35:21.114813  310801 fix.go:216] guest clock: 1733430921.091468988
	I1205 20:35:21.114823  310801 fix.go:229] Guest: 2024-12-05 20:35:21.091468988 +0000 UTC Remote: 2024-12-05 20:35:20.997930274 +0000 UTC m=+72.965807310 (delta=93.538714ms)
	I1205 20:35:21.114853  310801 fix.go:200] guest clock delta is within tolerance: 93.538714ms
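(The delta reported by fix.go above is simply the guest's date +%s.%N reading minus the locally recorded reference time:

    1733430921.091468988 s - 1733430920.997930274 s = 0.093538714 s ≈ 93.538714 ms

which is why the skew is accepted as within tolerance.)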
	I1205 20:35:21.114861  310801 start.go:83] releasing machines lock for "ha-689539-m02", held for 27.118697006s
	I1205 20:35:21.114886  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:21.115206  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetIP
	I1205 20:35:21.118066  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.118466  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:21.118504  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.121045  310801 out.go:177] * Found network options:
	I1205 20:35:21.122608  310801 out.go:177]   - NO_PROXY=192.168.39.220
	W1205 20:35:21.124023  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:35:21.124097  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:21.124832  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:21.125105  310801 main.go:141] libmachine: (ha-689539-m02) Calling .DriverName
	I1205 20:35:21.125251  310801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:35:21.125326  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	W1205 20:35:21.125332  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:35:21.125435  310801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:35:21.125468  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHHostname
	I1205 20:35:21.128474  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.128563  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.128871  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:21.128901  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.129000  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:21.129022  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:21.129073  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:21.129233  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHPort
	I1205 20:35:21.129232  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:21.129435  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:21.129437  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHKeyPath
	I1205 20:35:21.129634  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetSSHUsername
	I1205 20:35:21.129634  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa Username:docker}
	I1205 20:35:21.129803  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.224 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m02/id_rsa Username:docker}
	I1205 20:35:21.365680  310801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:35:21.371668  310801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:35:21.371782  310801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:35:21.388230  310801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:35:21.388261  310801 start.go:495] detecting cgroup driver to use...
	I1205 20:35:21.388348  310801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:35:21.404768  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:35:21.419149  310801 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:35:21.419231  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:35:21.433110  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:35:21.447375  310801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:35:21.563926  310801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:35:21.729278  310801 docker.go:233] disabling docker service ...
	I1205 20:35:21.729378  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:35:21.744065  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:35:21.757106  310801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:35:21.878877  310801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:35:21.983688  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:35:21.997947  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:35:22.016485  310801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:35:22.016555  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.027185  310801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:35:22.027270  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.037892  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.048316  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.059131  310801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:35:22.075255  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.086233  310801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:35:22.103682  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
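(Taken together, the sed edits above pin the pause image, switch CRI-O to the cgroupfs cgroup manager with conmon running in the pod cgroup, and allow unprivileged binds to low ports via default_sysctls. A rough sketch of the touched keys in /etc/crio/crio.conf.d/02-crio.conf after these edits; the section headers follow the stock CRI-O layout and are an assumption, since the file itself is never printed in the log:

    # illustrative only - relevant keys of /etc/crio/crio.conf.d/02-crio.conf after the edits
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
)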
	I1205 20:35:22.114441  310801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:35:22.124360  310801 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:35:22.124442  310801 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:35:22.138043  310801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:35:22.147996  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:35:22.253398  310801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:35:22.348717  310801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:35:22.348790  310801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:35:22.353405  310801 start.go:563] Will wait 60s for crictl version
	I1205 20:35:22.353468  310801 ssh_runner.go:195] Run: which crictl
	I1205 20:35:22.357215  310801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:35:22.393844  310801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:35:22.393959  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:35:22.422018  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:35:22.452780  310801 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:35:22.454193  310801 out.go:177]   - env NO_PROXY=192.168.39.220
	I1205 20:35:22.455398  310801 main.go:141] libmachine: (ha-689539-m02) Calling .GetIP
	I1205 20:35:22.458243  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:22.458611  310801 main.go:141] libmachine: (ha-689539-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ca:45", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:35:08 +0000 UTC Type:0 Mac:52:54:00:01:ca:45 Iaid: IPaddr:192.168.39.224 Prefix:24 Hostname:ha-689539-m02 Clientid:01:52:54:00:01:ca:45}
	I1205 20:35:22.458649  310801 main.go:141] libmachine: (ha-689539-m02) DBG | domain ha-689539-m02 has defined IP address 192.168.39.224 and MAC address 52:54:00:01:ca:45 in network mk-ha-689539
	I1205 20:35:22.458851  310801 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:35:22.463124  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:35:22.475841  310801 mustload.go:65] Loading cluster: ha-689539
	I1205 20:35:22.476087  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:35:22.476420  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:22.476470  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:22.492198  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39993
	I1205 20:35:22.492793  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:22.493388  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:35:22.493418  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:22.493835  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:22.494104  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:35:22.495827  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:35:22.496123  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:22.496160  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:22.512684  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35311
	I1205 20:35:22.513289  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:22.513852  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:35:22.513877  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:22.514257  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:22.514474  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:35:22.514658  310801 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539 for IP: 192.168.39.224
	I1205 20:35:22.514672  310801 certs.go:194] generating shared ca certs ...
	I1205 20:35:22.514692  310801 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:22.514826  310801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 20:35:22.514868  310801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 20:35:22.514875  310801 certs.go:256] generating profile certs ...
	I1205 20:35:22.514942  310801 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key
	I1205 20:35:22.514966  310801 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.0bcaa736
	I1205 20:35:22.514982  310801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.0bcaa736 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220 192.168.39.224 192.168.39.254]
	I1205 20:35:22.799808  310801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.0bcaa736 ...
	I1205 20:35:22.799844  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.0bcaa736: {Name:mk805c9f0c218cfc1a14cc95ce5560d63a919c31 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:22.800063  310801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.0bcaa736 ...
	I1205 20:35:22.800084  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.0bcaa736: {Name:mk878dc23fa761ab4aecc158abe1405fbc550219 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:35:22.800189  310801 certs.go:381] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.0bcaa736 -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt
	I1205 20:35:22.800337  310801 certs.go:385] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.0bcaa736 -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key
	I1205 20:35:22.800471  310801 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key
	I1205 20:35:22.800490  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:35:22.800508  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:35:22.800524  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:35:22.800539  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:35:22.800554  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 20:35:22.800569  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 20:35:22.800578  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 20:35:22.800588  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 20:35:22.800649  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 20:35:22.800680  310801 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 20:35:22.800690  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:35:22.800714  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 20:35:22.800740  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:35:22.800782  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 20:35:22.800829  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:35:22.800856  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:35:22.800870  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem -> /usr/share/ca-certificates/300765.pem
	I1205 20:35:22.800883  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /usr/share/ca-certificates/3007652.pem
	I1205 20:35:22.800924  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:35:22.803915  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:35:22.804323  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:35:22.804357  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:35:22.804510  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:35:22.804779  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:35:22.804968  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:35:22.805127  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:35:22.874336  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1205 20:35:22.878799  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1205 20:35:22.889481  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1205 20:35:22.893603  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1205 20:35:22.907201  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1205 20:35:22.911129  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1205 20:35:22.921562  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1205 20:35:22.925468  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1205 20:35:22.935462  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1205 20:35:22.939312  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1205 20:35:22.949250  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1205 20:35:22.953120  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1205 20:35:22.964047  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:35:22.988860  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:35:23.013850  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:35:23.037874  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:35:23.062975  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1205 20:35:23.087802  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:35:23.112226  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:35:23.139642  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:35:23.168141  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:35:23.193470  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 20:35:23.218935  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 20:35:23.243452  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1205 20:35:23.261775  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1205 20:35:23.279011  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1205 20:35:23.296521  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1205 20:35:23.313399  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1205 20:35:23.330608  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1205 20:35:23.349181  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I1205 20:35:23.366287  310801 ssh_runner.go:195] Run: openssl version
	I1205 20:35:23.372023  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:35:23.383498  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:35:23.387933  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:35:23.388026  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:35:23.393863  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:35:23.405145  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 20:35:23.416665  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 20:35:23.421806  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 20:35:23.421882  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 20:35:23.427892  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 20:35:23.439291  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 20:35:23.450645  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 20:35:23.455301  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 20:35:23.455397  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 20:35:23.461088  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:35:23.473062  310801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:35:23.477238  310801 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 20:35:23.477315  310801 kubeadm.go:934] updating node {m02 192.168.39.224 8443 v1.31.2 crio true true} ...
	I1205 20:35:23.477412  310801 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-689539-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
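(The kubelet snippet above - the ExecStart override plus the node-specific --node-ip, --hostname-override and kubeconfig flags - is rendered in memory here and presumably corresponds to the 313-byte /etc/systemd/system/kubelet.service.d/10-kubeadm.conf drop-in copied to the node a little further down. A hypothetical way to confirm the merged unit on the node, not something the test does:

    # show kubelet.service together with its drop-ins as systemd resolves them
    systemctl cat kubelet
)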
	I1205 20:35:23.477446  310801 kube-vip.go:115] generating kube-vip config ...
	I1205 20:35:23.477488  310801 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 20:35:23.494130  310801 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 20:35:23.494206  310801 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
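(The generated manifest above is a static pod: once it lands in /etc/kubernetes/manifests/kube-vip.yaml - the 1441-byte copy shown a little further down - the kubelet on this node starts kube-vip, which uses the plndr-cp-lock lease to elect a holder for the virtual IP 192.168.39.254 on eth0 and, with lb_enable/lb_port set, load-balances port 8443 across the control-plane API servers. A hypothetical check on whichever node currently holds the lease, not part of the test:

    # the VIP should be attached to eth0 on the current kube-vip leader
    ip -4 addr show dev eth0 | grep 192.168.39.254
)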
	I1205 20:35:23.494265  310801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:35:23.504559  310801 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1205 20:35:23.504639  310801 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1205 20:35:23.515268  310801 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubelet
	I1205 20:35:23.515267  310801 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1205 20:35:23.515267  310801 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubeadm
	I1205 20:35:23.515420  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 20:35:23.515485  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 20:35:23.520360  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1205 20:35:23.520397  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1205 20:35:24.329721  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 20:35:24.329837  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 20:35:24.335194  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1205 20:35:24.335241  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1205 20:35:24.693728  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:35:24.707996  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 20:35:24.708127  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 20:35:24.712643  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1205 20:35:24.712685  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
	I1205 20:35:25.030158  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1205 20:35:25.039864  310801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1205 20:35:25.056953  310801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:35:25.074038  310801 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 20:35:25.090341  310801 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 20:35:25.094291  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:35:25.106549  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:35:25.251421  310801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:35:25.281544  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:35:25.281958  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:35:25.282025  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:35:25.298815  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43001
	I1205 20:35:25.299446  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:35:25.299916  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:35:25.299940  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:35:25.300264  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:35:25.300471  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:35:25.300647  310801 start.go:317] joinCluster: &{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:35:25.300755  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1205 20:35:25.300777  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:35:25.303962  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:35:25.304378  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:35:25.304416  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:35:25.304612  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:35:25.304845  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:35:25.305034  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:35:25.305189  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:35:25.467206  310801 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:35:25.467286  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u7curd.swqoqc05eru6gfpp --discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-689539-m02 --control-plane --apiserver-advertise-address=192.168.39.224 --apiserver-bind-port=8443"
	I1205 20:35:47.115820  310801 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token u7curd.swqoqc05eru6gfpp --discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-689539-m02 --control-plane --apiserver-advertise-address=192.168.39.224 --apiserver-bind-port=8443": (21.648499033s)
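(This is a standard kubeadm control-plane join: the token and --discovery-token-ca-cert-hash come from the kubeadm token create --print-join-command run on the primary a few lines earlier, --control-plane makes m02 a full control-plane member, --apiserver-advertise-address/--apiserver-bind-port pin its local API endpoint to 192.168.39.224:8443, --cri-socket points kubeadm at CRI-O, and --ignore-preflight-errors=all keeps preflight warnings inside the VM from aborting the join. A hypothetical follow-up on the primary - the test instead labels and un-taints the node, as the next lines show:

    # both control-plane nodes should now be listed
    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes -o wide
)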
	I1205 20:35:47.115867  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1205 20:35:47.674102  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-689539-m02 minikube.k8s.io/updated_at=2024_12_05T20_35_47_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=ha-689539 minikube.k8s.io/primary=false
	I1205 20:35:47.783659  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-689539-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I1205 20:35:47.899441  310801 start.go:319] duration metric: took 22.598789448s to joinCluster
	I1205 20:35:47.899529  310801 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:35:47.899871  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:35:47.901544  310801 out.go:177] * Verifying Kubernetes components...
	I1205 20:35:47.903164  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:35:48.171147  310801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:35:48.196654  310801 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:35:48.197028  310801 kapi.go:59] client config for ha-689539: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt", KeyFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key", CAFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1205 20:35:48.197120  310801 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.220:8443
	I1205 20:35:48.197520  310801 node_ready.go:35] waiting up to 6m0s for node "ha-689539-m02" to be "Ready" ...
	I1205 20:35:48.197656  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:48.197669  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:48.197681  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:48.197693  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:48.214799  310801 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I1205 20:35:48.697777  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:48.697812  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:48.697824  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:48.697833  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:48.703691  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:35:49.198191  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:49.198217  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:49.198225  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:49.198229  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:49.204218  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:35:49.698048  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:49.698079  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:49.698090  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:49.698096  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:49.705663  310801 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I1205 20:35:50.198629  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:50.198656  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:50.198669  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:50.198675  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:50.202111  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:50.202581  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:35:50.698434  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:50.698457  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:50.698465  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:50.698469  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:50.702335  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:51.197943  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:51.197971  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:51.197981  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:51.197985  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:51.201567  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:51.698634  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:51.698668  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:51.698680  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:51.698687  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:51.702470  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:52.198285  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:52.198318  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:52.198331  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:52.198338  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:52.202116  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:52.202820  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:35:52.697909  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:52.697940  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:52.697953  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:52.697959  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:52.700998  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:53.198023  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:53.198047  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:53.198056  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:53.198059  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:53.201259  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:53.698438  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:53.698462  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:53.698478  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:53.698482  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:53.701883  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:54.198346  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:54.198373  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:54.198381  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:54.198386  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:54.202207  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:54.203013  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:35:54.698384  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:54.698407  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:54.698415  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:54.698422  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:54.703135  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:35:55.198075  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:55.198102  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:55.198111  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:55.198116  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:55.275835  310801 round_trippers.go:574] Response Status: 200 OK in 77 milliseconds
	I1205 20:35:55.698292  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:55.698327  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:55.698343  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:55.698347  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:55.701831  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:56.197819  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:56.197847  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:56.197856  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:56.197861  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:56.201202  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:56.698240  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:56.698288  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:56.698299  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:56.698304  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:56.701586  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:56.702160  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:35:57.198590  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:57.198622  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:57.198633  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:57.198638  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:57.201959  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:57.698128  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:57.698159  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:57.698170  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:57.698175  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:57.703388  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:35:58.198316  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:58.198343  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:58.198352  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:58.198357  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:58.201617  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:58.698669  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:58.698694  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:58.698706  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:58.698710  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:58.702292  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:35:58.702971  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:35:59.198697  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:59.198726  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:59.198739  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:59.198747  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:59.205545  310801 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 20:35:59.698504  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:35:59.698536  310801 round_trippers.go:469] Request Headers:
	I1205 20:35:59.698553  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:35:59.698560  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:35:59.702266  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:00.198245  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:00.198270  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:00.198279  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:00.198283  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:00.201787  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:00.698510  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:00.698544  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:00.698553  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:00.698563  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:00.701802  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:01.197953  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:01.197983  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:01.197994  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:01.197999  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:01.201035  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:01.201711  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:36:01.698167  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:01.698198  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:01.698210  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:01.698215  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:01.701264  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:02.198110  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:02.198141  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:02.198152  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:02.198157  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:02.201468  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:02.698626  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:02.698659  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:02.698669  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:02.698675  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:02.701881  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:03.198737  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:03.198763  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:03.198774  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:03.198779  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:03.202428  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:03.202953  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:36:03.698736  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:03.698768  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:03.698780  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:03.698788  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:03.702162  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:04.197743  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:04.197773  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:04.197784  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:04.197791  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:04.201284  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:04.698126  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:04.698155  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:04.698164  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:04.698168  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:04.701888  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:05.198088  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:05.198121  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:05.198131  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:05.198138  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:05.201797  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:05.698476  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:05.698506  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:05.698515  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:05.698520  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:05.701875  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:05.702580  310801 node_ready.go:53] node "ha-689539-m02" has status "Ready":"False"
	I1205 20:36:06.198021  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:06.198061  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.198069  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.198074  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.201540  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:06.202101  310801 node_ready.go:49] node "ha-689539-m02" has status "Ready":"True"
	I1205 20:36:06.202126  310801 node_ready.go:38] duration metric: took 18.004581739s for node "ha-689539-m02" to be "Ready" ...
	I1205 20:36:06.202140  310801 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:36:06.202253  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:36:06.202268  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.202278  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.202285  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.206754  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:06.212677  310801 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.212799  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4ln9l
	I1205 20:36:06.212813  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.212822  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.212827  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.215643  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.216276  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:06.216293  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.216301  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.216304  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.218813  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.219400  310801 pod_ready.go:93] pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:06.219422  310801 pod_ready.go:82] duration metric: took 6.710961ms for pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.219433  310801 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.219519  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6qhhf
	I1205 20:36:06.219530  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.219537  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.219544  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.221986  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.222730  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:06.222744  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.222752  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.222757  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.225041  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.225536  310801 pod_ready.go:93] pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:06.225559  310801 pod_ready.go:82] duration metric: took 6.118464ms for pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.225582  310801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.225656  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539
	I1205 20:36:06.225668  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.225684  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.225696  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.228280  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.228948  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:06.228962  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.228970  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.228974  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.231708  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.232206  310801 pod_ready.go:93] pod "etcd-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:06.232225  310801 pod_ready.go:82] duration metric: took 6.631337ms for pod "etcd-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.232234  310801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.232328  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539-m02
	I1205 20:36:06.232338  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.232347  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.232357  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.234717  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.235313  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:06.235328  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.235336  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.235340  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.237446  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:36:06.237958  310801 pod_ready.go:93] pod "etcd-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:06.237979  310801 pod_ready.go:82] duration metric: took 5.738833ms for pod "etcd-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.237997  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.398468  310801 request.go:632] Waited for 160.38501ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539
	I1205 20:36:06.398582  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539
	I1205 20:36:06.398592  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.398601  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.398605  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.402334  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:06.598805  310801 request.go:632] Waited for 195.477134ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:06.598897  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:06.598903  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.598911  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.598914  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.602945  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:06.603481  310801 pod_ready.go:93] pod "kube-apiserver-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:06.603505  310801 pod_ready.go:82] duration metric: took 365.497043ms for pod "kube-apiserver-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.603516  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:06.798685  310801 request.go:632] Waited for 195.084248ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m02
	I1205 20:36:06.798771  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m02
	I1205 20:36:06.798776  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.798786  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.798792  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:06.802375  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:06.998825  310801 request.go:632] Waited for 195.407022ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:06.998895  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:06.998900  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:06.998908  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:06.998913  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:07.003073  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:07.003620  310801 pod_ready.go:93] pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:07.003641  310801 pod_ready.go:82] duration metric: took 400.118288ms for pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.003652  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.198723  310801 request.go:632] Waited for 194.973944ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539
	I1205 20:36:07.198815  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539
	I1205 20:36:07.198822  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:07.198834  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:07.198844  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:07.202792  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:07.398908  310801 request.go:632] Waited for 195.413458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:07.398993  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:07.399006  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:07.399019  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:07.399029  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:07.403088  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:07.403800  310801 pod_ready.go:93] pod "kube-controller-manager-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:07.403838  310801 pod_ready.go:82] duration metric: took 400.178189ms for pod "kube-controller-manager-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.403856  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.598771  310801 request.go:632] Waited for 194.816012ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m02
	I1205 20:36:07.598840  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m02
	I1205 20:36:07.598845  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:07.598862  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:07.598869  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:07.602566  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:07.798831  310801 request.go:632] Waited for 195.438007ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:07.798985  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:07.798998  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:07.799015  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:07.799023  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:07.803171  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:07.803823  310801 pod_ready.go:93] pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:07.803849  310801 pod_ready.go:82] duration metric: took 399.978899ms for pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.803864  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9tslx" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:07.998893  310801 request.go:632] Waited for 194.90975ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tslx
	I1205 20:36:07.998995  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tslx
	I1205 20:36:07.999006  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:07.999033  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:07.999050  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:08.003019  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:08.198483  310801 request.go:632] Waited for 194.725493ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:08.198570  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:08.198580  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:08.198588  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:08.198592  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:08.202279  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:08.202805  310801 pod_ready.go:93] pod "kube-proxy-9tslx" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:08.202824  310801 pod_ready.go:82] duration metric: took 398.949898ms for pod "kube-proxy-9tslx" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:08.202837  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x2grl" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:08.399003  310801 request.go:632] Waited for 196.061371ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2grl
	I1205 20:36:08.399102  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2grl
	I1205 20:36:08.399110  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:08.399126  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:08.399137  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:08.404511  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:36:08.598657  310801 request.go:632] Waited for 193.397123ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:08.598817  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:08.598829  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:08.598837  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:08.598850  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:08.602654  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:08.603461  310801 pod_ready.go:93] pod "kube-proxy-x2grl" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:08.603483  310801 pod_ready.go:82] duration metric: took 400.640164ms for pod "kube-proxy-x2grl" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:08.603494  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:08.798579  310801 request.go:632] Waited for 194.963606ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539
	I1205 20:36:08.798669  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539
	I1205 20:36:08.798680  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:08.798692  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:08.798704  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:08.802678  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:08.998854  310801 request.go:632] Waited for 195.447294ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:08.998947  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:36:08.998954  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:08.998964  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:08.998970  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.003138  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:09.003792  310801 pod_ready.go:93] pod "kube-scheduler-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:09.003821  310801 pod_ready.go:82] duration metric: took 400.319353ms for pod "kube-scheduler-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:09.003837  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:09.198016  310801 request.go:632] Waited for 194.088845ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m02
	I1205 20:36:09.198132  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m02
	I1205 20:36:09.198145  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.198158  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.198165  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.201958  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:09.398942  310801 request.go:632] Waited for 196.371567ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:09.399024  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:36:09.399033  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.399044  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.399050  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.402750  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:09.403404  310801 pod_ready.go:93] pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:36:09.403436  310801 pod_ready.go:82] duration metric: took 399.590034ms for pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:36:09.403451  310801 pod_ready.go:39] duration metric: took 3.201294497s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:36:09.403471  310801 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:36:09.403551  310801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:36:09.418357  310801 api_server.go:72] duration metric: took 21.51878718s to wait for apiserver process to appear ...
	I1205 20:36:09.418390  310801 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:36:09.418420  310801 api_server.go:253] Checking apiserver healthz at https://192.168.39.220:8443/healthz ...
	I1205 20:36:09.425381  310801 api_server.go:279] https://192.168.39.220:8443/healthz returned 200:
	ok
	I1205 20:36:09.425471  310801 round_trippers.go:463] GET https://192.168.39.220:8443/version
	I1205 20:36:09.425479  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.425488  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.425494  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.426343  310801 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1205 20:36:09.426447  310801 api_server.go:141] control plane version: v1.31.2
	I1205 20:36:09.426464  310801 api_server.go:131] duration metric: took 8.067774ms to wait for apiserver health ...
	I1205 20:36:09.426481  310801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:36:09.598951  310801 request.go:632] Waited for 172.364571ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:36:09.599024  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:36:09.599030  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.599038  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.599042  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.603442  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:09.609057  310801 system_pods.go:59] 17 kube-system pods found
	I1205 20:36:09.609099  310801 system_pods.go:61] "coredns-7c65d6cfc9-4ln9l" [f86a233b-c3f8-416b-ac76-f18dac2a1a2c] Running
	I1205 20:36:09.609107  310801 system_pods.go:61] "coredns-7c65d6cfc9-6qhhf" [4ffff988-65eb-4585-8ce4-de4df28c6b82] Running
	I1205 20:36:09.609113  310801 system_pods.go:61] "etcd-ha-689539" [f8de63bf-a7cf-431d-bd57-ec91b43c6ce3] Running
	I1205 20:36:09.609121  310801 system_pods.go:61] "etcd-ha-689539-m02" [a0336d41-b57f-414b-aa98-2540bdde7ca0] Running
	I1205 20:36:09.609126  310801 system_pods.go:61] "kindnet-62qw6" [9f0039aa-d5e2-49b9-adb4-ad93c96d22f0] Running
	I1205 20:36:09.609130  310801 system_pods.go:61] "kindnet-b7bf2" [ea96240c-48bf-4f92-b12c-f8e623a59784] Running
	I1205 20:36:09.609136  310801 system_pods.go:61] "kube-apiserver-ha-689539" [ecbcba0b-10ce-4bd6-84f6-8b46c3d99ad6] Running
	I1205 20:36:09.609142  310801 system_pods.go:61] "kube-apiserver-ha-689539-m02" [0c0d9613-c605-4e61-b778-c5aefa5919e9] Running
	I1205 20:36:09.609149  310801 system_pods.go:61] "kube-controller-manager-ha-689539" [859c6551-f504-4093-a730-2ba8f127e3e7] Running
	I1205 20:36:09.609159  310801 system_pods.go:61] "kube-controller-manager-ha-689539-m02" [0b119866-007c-4c4e-abfa-a38405b85cc9] Running
	I1205 20:36:09.609165  310801 system_pods.go:61] "kube-proxy-9tslx" [3d107dc4-2d8c-4e0d-aafc-5229161537df] Running
	I1205 20:36:09.609174  310801 system_pods.go:61] "kube-proxy-x2grl" [20dd0c16-858c-4d07-8305-ffedb52a4ee1] Running
	I1205 20:36:09.609180  310801 system_pods.go:61] "kube-scheduler-ha-689539" [2ba99954-c00c-4fa6-af5d-6d4725fa051a] Running
	I1205 20:36:09.609186  310801 system_pods.go:61] "kube-scheduler-ha-689539-m02" [d1ad2b21-b52c-47dd-ab09-2368ffeb3c7e] Running
	I1205 20:36:09.609192  310801 system_pods.go:61] "kube-vip-ha-689539" [345f79e6-90ea-47f8-9e7f-c461a1143ba0] Running
	I1205 20:36:09.609200  310801 system_pods.go:61] "kube-vip-ha-689539-m02" [265c4a3f-0e44-43fd-bcee-35513e8e2525] Running
	I1205 20:36:09.609207  310801 system_pods.go:61] "storage-provisioner" [e2a03e66-0718-48a3-9658-f70118ce6cae] Running
	I1205 20:36:09.609218  310801 system_pods.go:74] duration metric: took 182.726007ms to wait for pod list to return data ...
	I1205 20:36:09.609232  310801 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:36:09.798716  310801 request.go:632] Waited for 189.385773ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:36:09.798784  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:36:09.798789  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.798797  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.798800  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:09.803434  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:36:09.803720  310801 default_sa.go:45] found service account: "default"
	I1205 20:36:09.803742  310801 default_sa.go:55] duration metric: took 194.50158ms for default service account to be created ...
	I1205 20:36:09.803755  310801 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:36:09.998902  310801 request.go:632] Waited for 195.036574ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:36:09.998984  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:36:09.998992  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:09.999004  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:09.999012  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:10.005341  310801 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 20:36:10.009685  310801 system_pods.go:86] 17 kube-system pods found
	I1205 20:36:10.009721  310801 system_pods.go:89] "coredns-7c65d6cfc9-4ln9l" [f86a233b-c3f8-416b-ac76-f18dac2a1a2c] Running
	I1205 20:36:10.009733  310801 system_pods.go:89] "coredns-7c65d6cfc9-6qhhf" [4ffff988-65eb-4585-8ce4-de4df28c6b82] Running
	I1205 20:36:10.009739  310801 system_pods.go:89] "etcd-ha-689539" [f8de63bf-a7cf-431d-bd57-ec91b43c6ce3] Running
	I1205 20:36:10.009745  310801 system_pods.go:89] "etcd-ha-689539-m02" [a0336d41-b57f-414b-aa98-2540bdde7ca0] Running
	I1205 20:36:10.009751  310801 system_pods.go:89] "kindnet-62qw6" [9f0039aa-d5e2-49b9-adb4-ad93c96d22f0] Running
	I1205 20:36:10.009756  310801 system_pods.go:89] "kindnet-b7bf2" [ea96240c-48bf-4f92-b12c-f8e623a59784] Running
	I1205 20:36:10.009760  310801 system_pods.go:89] "kube-apiserver-ha-689539" [ecbcba0b-10ce-4bd6-84f6-8b46c3d99ad6] Running
	I1205 20:36:10.009770  310801 system_pods.go:89] "kube-apiserver-ha-689539-m02" [0c0d9613-c605-4e61-b778-c5aefa5919e9] Running
	I1205 20:36:10.009774  310801 system_pods.go:89] "kube-controller-manager-ha-689539" [859c6551-f504-4093-a730-2ba8f127e3e7] Running
	I1205 20:36:10.009778  310801 system_pods.go:89] "kube-controller-manager-ha-689539-m02" [0b119866-007c-4c4e-abfa-a38405b85cc9] Running
	I1205 20:36:10.009782  310801 system_pods.go:89] "kube-proxy-9tslx" [3d107dc4-2d8c-4e0d-aafc-5229161537df] Running
	I1205 20:36:10.009786  310801 system_pods.go:89] "kube-proxy-x2grl" [20dd0c16-858c-4d07-8305-ffedb52a4ee1] Running
	I1205 20:36:10.009789  310801 system_pods.go:89] "kube-scheduler-ha-689539" [2ba99954-c00c-4fa6-af5d-6d4725fa051a] Running
	I1205 20:36:10.009794  310801 system_pods.go:89] "kube-scheduler-ha-689539-m02" [d1ad2b21-b52c-47dd-ab09-2368ffeb3c7e] Running
	I1205 20:36:10.009797  310801 system_pods.go:89] "kube-vip-ha-689539" [345f79e6-90ea-47f8-9e7f-c461a1143ba0] Running
	I1205 20:36:10.009802  310801 system_pods.go:89] "kube-vip-ha-689539-m02" [265c4a3f-0e44-43fd-bcee-35513e8e2525] Running
	I1205 20:36:10.009805  310801 system_pods.go:89] "storage-provisioner" [e2a03e66-0718-48a3-9658-f70118ce6cae] Running
	I1205 20:36:10.009814  310801 system_pods.go:126] duration metric: took 206.05156ms to wait for k8s-apps to be running ...
	I1205 20:36:10.009825  310801 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:36:10.009874  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:36:10.025329  310801 system_svc.go:56] duration metric: took 15.491147ms WaitForService to wait for kubelet
	I1205 20:36:10.025382  310801 kubeadm.go:582] duration metric: took 22.125819174s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:36:10.025410  310801 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:36:10.199031  310801 request.go:632] Waited for 173.477614ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes
	I1205 20:36:10.199134  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes
	I1205 20:36:10.199143  310801 round_trippers.go:469] Request Headers:
	I1205 20:36:10.199154  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:36:10.199159  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:36:10.202963  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:36:10.203807  310801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:36:10.203836  310801 node_conditions.go:123] node cpu capacity is 2
	I1205 20:36:10.203848  310801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:36:10.203851  310801 node_conditions.go:123] node cpu capacity is 2
	I1205 20:36:10.203855  310801 node_conditions.go:105] duration metric: took 178.44033ms to run NodePressure ...
	I1205 20:36:10.203870  310801 start.go:241] waiting for startup goroutines ...
	I1205 20:36:10.203895  310801 start.go:255] writing updated cluster config ...
	I1205 20:36:10.205987  310801 out.go:201] 
	I1205 20:36:10.207492  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:36:10.207614  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:36:10.209270  310801 out.go:177] * Starting "ha-689539-m03" control-plane node in "ha-689539" cluster
	I1205 20:36:10.210621  310801 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:36:10.210654  310801 cache.go:56] Caching tarball of preloaded images
	I1205 20:36:10.210766  310801 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:36:10.210778  310801 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:36:10.210880  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:36:10.211060  310801 start.go:360] acquireMachinesLock for ha-689539-m03: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:36:10.211107  310801 start.go:364] duration metric: took 26.599µs to acquireMachinesLock for "ha-689539-m03"
	I1205 20:36:10.211127  310801 start.go:93] Provisioning new machine with config: &{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:36:10.211224  310801 start.go:125] createHost starting for "m03" (driver="kvm2")
	I1205 20:36:10.213644  310801 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 20:36:10.213846  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:36:10.213895  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:36:10.230607  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41335
	I1205 20:36:10.231136  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:36:10.231708  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:36:10.231730  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:36:10.232163  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:36:10.232486  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetMachineName
	I1205 20:36:10.232681  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:10.232898  310801 start.go:159] libmachine.API.Create for "ha-689539" (driver="kvm2")
	I1205 20:36:10.232939  310801 client.go:168] LocalClient.Create starting
	I1205 20:36:10.232979  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem
	I1205 20:36:10.233029  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:36:10.233052  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:36:10.233142  310801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem
	I1205 20:36:10.233176  310801 main.go:141] libmachine: Decoding PEM data...
	I1205 20:36:10.233191  310801 main.go:141] libmachine: Parsing certificate...
	I1205 20:36:10.233315  310801 main.go:141] libmachine: Running pre-create checks...
	I1205 20:36:10.233332  310801 main.go:141] libmachine: (ha-689539-m03) Calling .PreCreateCheck
	I1205 20:36:10.233549  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetConfigRaw
	I1205 20:36:10.234493  310801 main.go:141] libmachine: Creating machine...
	I1205 20:36:10.234513  310801 main.go:141] libmachine: (ha-689539-m03) Calling .Create
	I1205 20:36:10.234674  310801 main.go:141] libmachine: (ha-689539-m03) Creating KVM machine...
	I1205 20:36:10.236332  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found existing default KVM network
	I1205 20:36:10.236451  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found existing private KVM network mk-ha-689539
	I1205 20:36:10.236656  310801 main.go:141] libmachine: (ha-689539-m03) Setting up store path in /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03 ...
	I1205 20:36:10.236685  310801 main.go:141] libmachine: (ha-689539-m03) Building disk image from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 20:36:10.236729  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:10.236616  311584 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:36:10.236870  310801 main.go:141] libmachine: (ha-689539-m03) Downloading /home/jenkins/minikube-integration/20053-293485/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 20:36:10.551771  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:10.551634  311584 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa...
	I1205 20:36:10.671521  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:10.671352  311584 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/ha-689539-m03.rawdisk...
	I1205 20:36:10.671562  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Writing magic tar header
	I1205 20:36:10.671575  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Writing SSH key tar header
	I1205 20:36:10.671584  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:10.671500  311584 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03 ...
	I1205 20:36:10.671596  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03
	I1205 20:36:10.671680  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03 (perms=drwx------)
	I1205 20:36:10.671707  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines (perms=drwxr-xr-x)
	I1205 20:36:10.671718  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines
	I1205 20:36:10.671731  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:36:10.671740  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485
	I1205 20:36:10.671749  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 20:36:10.671759  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home/jenkins
	I1205 20:36:10.671770  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Checking permissions on dir: /home
	I1205 20:36:10.671781  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Skipping /home - not owner
	I1205 20:36:10.671795  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube (perms=drwxr-xr-x)
	I1205 20:36:10.671811  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485 (perms=drwxrwxr-x)
	I1205 20:36:10.671827  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 20:36:10.671837  310801 main.go:141] libmachine: (ha-689539-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 20:36:10.671843  310801 main.go:141] libmachine: (ha-689539-m03) Creating domain...
	I1205 20:36:10.672929  310801 main.go:141] libmachine: (ha-689539-m03) define libvirt domain using xml: 
	I1205 20:36:10.672953  310801 main.go:141] libmachine: (ha-689539-m03) <domain type='kvm'>
	I1205 20:36:10.672970  310801 main.go:141] libmachine: (ha-689539-m03)   <name>ha-689539-m03</name>
	I1205 20:36:10.673070  310801 main.go:141] libmachine: (ha-689539-m03)   <memory unit='MiB'>2200</memory>
	I1205 20:36:10.673100  310801 main.go:141] libmachine: (ha-689539-m03)   <vcpu>2</vcpu>
	I1205 20:36:10.673109  310801 main.go:141] libmachine: (ha-689539-m03)   <features>
	I1205 20:36:10.673135  310801 main.go:141] libmachine: (ha-689539-m03)     <acpi/>
	I1205 20:36:10.673151  310801 main.go:141] libmachine: (ha-689539-m03)     <apic/>
	I1205 20:36:10.673157  310801 main.go:141] libmachine: (ha-689539-m03)     <pae/>
	I1205 20:36:10.673164  310801 main.go:141] libmachine: (ha-689539-m03)     
	I1205 20:36:10.673174  310801 main.go:141] libmachine: (ha-689539-m03)   </features>
	I1205 20:36:10.673181  310801 main.go:141] libmachine: (ha-689539-m03)   <cpu mode='host-passthrough'>
	I1205 20:36:10.673187  310801 main.go:141] libmachine: (ha-689539-m03)   
	I1205 20:36:10.673192  310801 main.go:141] libmachine: (ha-689539-m03)   </cpu>
	I1205 20:36:10.673197  310801 main.go:141] libmachine: (ha-689539-m03)   <os>
	I1205 20:36:10.673201  310801 main.go:141] libmachine: (ha-689539-m03)     <type>hvm</type>
	I1205 20:36:10.673243  310801 main.go:141] libmachine: (ha-689539-m03)     <boot dev='cdrom'/>
	I1205 20:36:10.673298  310801 main.go:141] libmachine: (ha-689539-m03)     <boot dev='hd'/>
	I1205 20:36:10.673335  310801 main.go:141] libmachine: (ha-689539-m03)     <bootmenu enable='no'/>
	I1205 20:36:10.673362  310801 main.go:141] libmachine: (ha-689539-m03)   </os>
	I1205 20:36:10.673384  310801 main.go:141] libmachine: (ha-689539-m03)   <devices>
	I1205 20:36:10.673401  310801 main.go:141] libmachine: (ha-689539-m03)     <disk type='file' device='cdrom'>
	I1205 20:36:10.673424  310801 main.go:141] libmachine: (ha-689539-m03)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/boot2docker.iso'/>
	I1205 20:36:10.673445  310801 main.go:141] libmachine: (ha-689539-m03)       <target dev='hdc' bus='scsi'/>
	I1205 20:36:10.673458  310801 main.go:141] libmachine: (ha-689539-m03)       <readonly/>
	I1205 20:36:10.673469  310801 main.go:141] libmachine: (ha-689539-m03)     </disk>
	I1205 20:36:10.673485  310801 main.go:141] libmachine: (ha-689539-m03)     <disk type='file' device='disk'>
	I1205 20:36:10.673499  310801 main.go:141] libmachine: (ha-689539-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 20:36:10.673516  310801 main.go:141] libmachine: (ha-689539-m03)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/ha-689539-m03.rawdisk'/>
	I1205 20:36:10.673532  310801 main.go:141] libmachine: (ha-689539-m03)       <target dev='hda' bus='virtio'/>
	I1205 20:36:10.673544  310801 main.go:141] libmachine: (ha-689539-m03)     </disk>
	I1205 20:36:10.673556  310801 main.go:141] libmachine: (ha-689539-m03)     <interface type='network'>
	I1205 20:36:10.673569  310801 main.go:141] libmachine: (ha-689539-m03)       <source network='mk-ha-689539'/>
	I1205 20:36:10.673579  310801 main.go:141] libmachine: (ha-689539-m03)       <model type='virtio'/>
	I1205 20:36:10.673592  310801 main.go:141] libmachine: (ha-689539-m03)     </interface>
	I1205 20:36:10.673600  310801 main.go:141] libmachine: (ha-689539-m03)     <interface type='network'>
	I1205 20:36:10.673612  310801 main.go:141] libmachine: (ha-689539-m03)       <source network='default'/>
	I1205 20:36:10.673625  310801 main.go:141] libmachine: (ha-689539-m03)       <model type='virtio'/>
	I1205 20:36:10.673635  310801 main.go:141] libmachine: (ha-689539-m03)     </interface>
	I1205 20:36:10.673648  310801 main.go:141] libmachine: (ha-689539-m03)     <serial type='pty'>
	I1205 20:36:10.673660  310801 main.go:141] libmachine: (ha-689539-m03)       <target port='0'/>
	I1205 20:36:10.673672  310801 main.go:141] libmachine: (ha-689539-m03)     </serial>
	I1205 20:36:10.673682  310801 main.go:141] libmachine: (ha-689539-m03)     <console type='pty'>
	I1205 20:36:10.673695  310801 main.go:141] libmachine: (ha-689539-m03)       <target type='serial' port='0'/>
	I1205 20:36:10.673711  310801 main.go:141] libmachine: (ha-689539-m03)     </console>
	I1205 20:36:10.673724  310801 main.go:141] libmachine: (ha-689539-m03)     <rng model='virtio'>
	I1205 20:36:10.673736  310801 main.go:141] libmachine: (ha-689539-m03)       <backend model='random'>/dev/random</backend>
	I1205 20:36:10.673747  310801 main.go:141] libmachine: (ha-689539-m03)     </rng>
	I1205 20:36:10.673756  310801 main.go:141] libmachine: (ha-689539-m03)     
	I1205 20:36:10.673766  310801 main.go:141] libmachine: (ha-689539-m03)     
	I1205 20:36:10.673776  310801 main.go:141] libmachine: (ha-689539-m03)   </devices>
	I1205 20:36:10.673790  310801 main.go:141] libmachine: (ha-689539-m03) </domain>
	I1205 20:36:10.673800  310801 main.go:141] libmachine: (ha-689539-m03) 
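
Note: the domain definition logged above (name, memory, vCPUs, ISO cdrom, raw disk, two virtio NICs) is XML generated by the kvm2 driver and handed to libvirt. A minimal sketch of the same idea, rendering a trimmed-down template and registering it with "virsh define" (virsh on PATH and libvirt access are assumptions; this is not the driver's actual code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "text/template"
    )

    // domainTmpl is a trimmed-down stand-in for the XML in the log above.
    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.MemoryMiB}}</memory>
      <vcpu>{{.CPUs}}</vcpu>
      <os><type>hvm</type><boot dev='cdrom'/><boot dev='hd'/></os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw'/>
          <source file='{{.DiskPath}}'/>
          <target dev='hda' bus='virtio'/>
        </disk>
        <interface type='network'>
          <source network='{{.Network}}'/>
          <model type='virtio'/>
        </interface>
      </devices>
    </domain>`

    type domainSpec struct {
        Name      string
        MemoryMiB int
        CPUs      int
        DiskPath  string
        Network   string
    }

    // defineDomain renders the template to a temp file and registers the
    // domain with libvirt; "virsh start <name>" would then boot it.
    func defineDomain(spec domainSpec) error {
        f, err := os.CreateTemp("", spec.Name+"-*.xml")
        if err != nil {
            return err
        }
        defer os.Remove(f.Name())
        if err := template.Must(template.New("dom").Parse(domainTmpl)).Execute(f, spec); err != nil {
            return err
        }
        f.Close()
        out, err := exec.Command("virsh", "define", f.Name()).CombinedOutput()
        if err != nil {
            return fmt.Errorf("virsh define: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        err := defineDomain(domainSpec{
            Name: "demo-m03", MemoryMiB: 2200, CPUs: 2,
            DiskPath: "/var/lib/libvirt/images/demo-m03.rawdisk",
            Network:  "default",
        })
        fmt.Println("define result:", err)
    }
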
	I1205 20:36:10.681042  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:ee:34:51 in network default
	I1205 20:36:10.681639  310801 main.go:141] libmachine: (ha-689539-m03) Ensuring networks are active...
	I1205 20:36:10.681669  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:10.682561  310801 main.go:141] libmachine: (ha-689539-m03) Ensuring network default is active
	I1205 20:36:10.682898  310801 main.go:141] libmachine: (ha-689539-m03) Ensuring network mk-ha-689539 is active
	I1205 20:36:10.683183  310801 main.go:141] libmachine: (ha-689539-m03) Getting domain xml...
	I1205 20:36:10.684006  310801 main.go:141] libmachine: (ha-689539-m03) Creating domain...
	I1205 20:36:11.968725  310801 main.go:141] libmachine: (ha-689539-m03) Waiting to get IP...
	I1205 20:36:11.969610  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:11.970152  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:11.970185  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:11.970125  311584 retry.go:31] will retry after 234.218675ms: waiting for machine to come up
	I1205 20:36:12.205669  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:12.206261  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:12.206294  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:12.206205  311584 retry.go:31] will retry after 248.695417ms: waiting for machine to come up
	I1205 20:36:12.456801  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:12.457402  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:12.457438  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:12.457352  311584 retry.go:31] will retry after 446.513744ms: waiting for machine to come up
	I1205 20:36:12.906122  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:12.906634  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:12.906661  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:12.906574  311584 retry.go:31] will retry after 535.02916ms: waiting for machine to come up
	I1205 20:36:13.443469  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:13.443918  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:13.443943  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:13.443872  311584 retry.go:31] will retry after 557.418366ms: waiting for machine to come up
	I1205 20:36:14.002733  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:14.003294  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:14.003322  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:14.003249  311584 retry.go:31] will retry after 653.304587ms: waiting for machine to come up
	I1205 20:36:14.658664  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:14.659072  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:14.659104  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:14.659017  311584 retry.go:31] will retry after 755.842871ms: waiting for machine to come up
	I1205 20:36:15.416424  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:15.416833  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:15.416859  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:15.416766  311584 retry.go:31] will retry after 1.249096202s: waiting for machine to come up
	I1205 20:36:16.666996  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:16.667456  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:16.667487  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:16.667406  311584 retry.go:31] will retry after 1.829752255s: waiting for machine to come up
	I1205 20:36:18.499154  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:18.499722  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:18.499754  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:18.499656  311584 retry.go:31] will retry after 2.088301292s: waiting for machine to come up
	I1205 20:36:20.590033  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:20.590599  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:20.590952  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:20.590835  311584 retry.go:31] will retry after 2.856395806s: waiting for machine to come up
	I1205 20:36:23.448567  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:23.449151  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:23.449196  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:23.449071  311584 retry.go:31] will retry after 2.566118647s: waiting for machine to come up
	I1205 20:36:26.016596  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:26.017066  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:26.017103  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:26.017002  311584 retry.go:31] will retry after 3.311993098s: waiting for machine to come up
	I1205 20:36:29.332519  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:29.333028  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find current IP address of domain ha-689539-m03 in network mk-ha-689539
	I1205 20:36:29.333062  310801 main.go:141] libmachine: (ha-689539-m03) DBG | I1205 20:36:29.332969  311584 retry.go:31] will retry after 5.069674559s: waiting for machine to come up
	I1205 20:36:34.404036  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.404592  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has current primary IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.404615  310801 main.go:141] libmachine: (ha-689539-m03) Found IP for machine: 192.168.39.133
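
Note: the "will retry after ..." lines above are a poll-with-growing-backoff loop while the guest acquires a DHCP lease. A generic sketch of that pattern follows; lookupIP is a hypothetical placeholder for however the address is actually read from the host's leases for the machine's MAC:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical placeholder; in practice the address would be
    // read from the hypervisor's DHCP leases for the given MAC address.
    func lookupIP(mac string) (string, error) {
        return "", errors.New("no lease yet")
    }

    // waitForIP polls until an address shows up or the deadline passes,
    // sleeping a little longer (with jitter) after each failed attempt.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(mac); err == nil {
                return ip, nil
            }
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if backoff < 5*time.Second {
                backoff += backoff / 2 // grow roughly 1.5x per attempt, as in the log
            }
        }
        return "", fmt.Errorf("timed out waiting for IP of %s", mac)
    }

    func main() {
        ip, err := waitForIP("52:54:00:39:1e:d2", 2*time.Second)
        fmt.Println(ip, err)
    }
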
	I1205 20:36:34.404628  310801 main.go:141] libmachine: (ha-689539-m03) Reserving static IP address...
	I1205 20:36:34.405246  310801 main.go:141] libmachine: (ha-689539-m03) DBG | unable to find host DHCP lease matching {name: "ha-689539-m03", mac: "52:54:00:39:1e:d2", ip: "192.168.39.133"} in network mk-ha-689539
	I1205 20:36:34.488202  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Getting to WaitForSSH function...
	I1205 20:36:34.488243  310801 main.go:141] libmachine: (ha-689539-m03) Reserved static IP address: 192.168.39.133
	I1205 20:36:34.488263  310801 main.go:141] libmachine: (ha-689539-m03) Waiting for SSH to be available...
	I1205 20:36:34.491165  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.491686  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:minikube Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:34.491716  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.491906  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Using SSH client type: external
	I1205 20:36:34.491935  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa (-rw-------)
	I1205 20:36:34.491973  310801 main.go:141] libmachine: (ha-689539-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.133 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 20:36:34.491988  310801 main.go:141] libmachine: (ha-689539-m03) DBG | About to run SSH command:
	I1205 20:36:34.492018  310801 main.go:141] libmachine: (ha-689539-m03) DBG | exit 0
	I1205 20:36:34.613832  310801 main.go:141] libmachine: (ha-689539-m03) DBG | SSH cmd err, output: <nil>: 
	I1205 20:36:34.614085  310801 main.go:141] libmachine: (ha-689539-m03) KVM machine creation complete!
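
Note: the WaitForSSH step above shells out to an external ssh client and simply runs "exit 0"; a zero exit status means sshd is reachable and the generated key is accepted. A minimal sketch with the main options from the log (the key path in main is a placeholder):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // sshProbe runs "exit 0" on the guest; a nil error means SSH is usable.
    func sshProbe(user, ip, keyPath string) error {
        args := []string{
            "-F", "/dev/null",
            "-o", "ConnectTimeout=10",
            "-o", "StrictHostKeyChecking=no",
            "-o", "UserKnownHostsFile=/dev/null",
            "-o", "PasswordAuthentication=no",
            "-o", "IdentitiesOnly=yes",
            "-i", keyPath,
            "-p", "22",
            fmt.Sprintf("%s@%s", user, ip),
            "exit 0",
        }
        out, err := exec.Command("ssh", args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("ssh probe failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        err := sshProbe("docker", "192.168.39.133", "/path/to/id_rsa")
        fmt.Println("ssh reachable:", err == nil, err)
    }
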
	I1205 20:36:34.614391  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetConfigRaw
	I1205 20:36:34.614932  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:34.615098  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:34.615251  310801 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 20:36:34.615261  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetState
	I1205 20:36:34.616613  310801 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 20:36:34.616630  310801 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 20:36:34.616635  310801 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 20:36:34.616641  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:34.618898  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.619343  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:34.619376  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.619553  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:34.619760  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.619916  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.620049  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:34.620212  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:34.620459  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:34.620479  310801 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 20:36:34.717073  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:36:34.717099  310801 main.go:141] libmachine: Detecting the provisioner...
	I1205 20:36:34.717108  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:34.720008  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.720375  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:34.720408  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.720627  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:34.720862  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.721027  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.721142  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:34.721315  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:34.721505  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:34.721517  310801 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 20:36:34.822906  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 20:36:34.822984  310801 main.go:141] libmachine: found compatible host: buildroot
	I1205 20:36:34.822991  310801 main.go:141] libmachine: Provisioning with buildroot...
	I1205 20:36:34.823000  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetMachineName
	I1205 20:36:34.823269  310801 buildroot.go:166] provisioning hostname "ha-689539-m03"
	I1205 20:36:34.823307  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetMachineName
	I1205 20:36:34.823547  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:34.826120  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.826479  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:34.826516  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.826688  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:34.826881  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.827029  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.827117  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:34.827324  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:34.827499  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:34.827512  310801 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-689539-m03 && echo "ha-689539-m03" | sudo tee /etc/hostname
	I1205 20:36:34.941581  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-689539-m03
	
	I1205 20:36:34.941620  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:34.944840  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.945236  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:34.945268  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:34.945576  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:34.945808  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.946090  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:34.946279  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:34.946488  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:34.946701  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:34.946720  310801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-689539-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-689539-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-689539-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:36:35.058548  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:36:35.058600  310801 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 20:36:35.058628  310801 buildroot.go:174] setting up certificates
	I1205 20:36:35.058647  310801 provision.go:84] configureAuth start
	I1205 20:36:35.058666  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetMachineName
	I1205 20:36:35.059012  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetIP
	I1205 20:36:35.062020  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.062410  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.062436  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.062601  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.064649  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.065013  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.065056  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.065157  310801 provision.go:143] copyHostCerts
	I1205 20:36:35.065216  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:36:35.065250  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 20:36:35.065260  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:36:35.065330  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 20:36:35.065453  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:36:35.065483  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 20:36:35.065487  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:36:35.065514  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 20:36:35.065573  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:36:35.065599  310801 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 20:36:35.065606  310801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:36:35.065628  310801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 20:36:35.065689  310801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.ha-689539-m03 san=[127.0.0.1 192.168.39.133 ha-689539-m03 localhost minikube]
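
Note: provision.go above issues a per-node server certificate whose SANs cover the node name, its IP, localhost and the service addresses. A compact sketch of issuing such a certificate with the standard library; the throwaway CA, names and validity period are assumptions made only to keep the example self-contained (minikube reuses its existing ca.pem/ca-key.pem):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA so the sketch runs on its own.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "demoCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate whose SANs mirror the san=[...] list in the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "ha-689539-m03"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-689539-m03", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("192.168.39.133"), net.ParseIP("127.0.0.1")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Printf("issued server cert for ha-689539-m03, %d bytes DER\n", len(srvDER))
    }
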
	I1205 20:36:35.249027  310801 provision.go:177] copyRemoteCerts
	I1205 20:36:35.249088  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:36:35.249117  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.252102  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.252464  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.252504  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.252651  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.252859  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.253052  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.253206  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa Username:docker}
	I1205 20:36:35.336527  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 20:36:35.336648  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 20:36:35.364926  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 20:36:35.365010  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 20:36:35.389088  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 20:36:35.389182  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:36:35.413330  310801 provision.go:87] duration metric: took 354.660436ms to configureAuth
	I1205 20:36:35.413369  310801 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:36:35.413628  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:36:35.413732  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.416617  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.417048  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.417083  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.417297  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.417511  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.417670  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.417805  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.417979  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:35.418155  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:35.418171  310801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:36:35.630886  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:36:35.630926  310801 main.go:141] libmachine: Checking connection to Docker...
	I1205 20:36:35.630937  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetURL
	I1205 20:36:35.632212  310801 main.go:141] libmachine: (ha-689539-m03) DBG | Using libvirt version 6000000
	I1205 20:36:35.634750  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.635203  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.635240  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.635427  310801 main.go:141] libmachine: Docker is up and running!
	I1205 20:36:35.635448  310801 main.go:141] libmachine: Reticulating splines...
	I1205 20:36:35.635459  310801 client.go:171] duration metric: took 25.402508958s to LocalClient.Create
	I1205 20:36:35.635491  310801 start.go:167] duration metric: took 25.402598488s to libmachine.API.Create "ha-689539"
	I1205 20:36:35.635506  310801 start.go:293] postStartSetup for "ha-689539-m03" (driver="kvm2")
	I1205 20:36:35.635522  310801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:36:35.635550  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:35.635824  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:36:35.635854  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.638327  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.638682  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.638711  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.638841  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.639048  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.639222  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.639398  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa Username:docker}
	I1205 20:36:35.716587  310801 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:36:35.720718  310801 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:36:35.720755  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 20:36:35.720843  310801 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 20:36:35.720950  310801 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 20:36:35.720963  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /etc/ssl/certs/3007652.pem
	I1205 20:36:35.721055  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:36:35.730580  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:36:35.754106  310801 start.go:296] duration metric: took 118.58052ms for postStartSetup
	I1205 20:36:35.754171  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetConfigRaw
	I1205 20:36:35.754838  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetIP
	I1205 20:36:35.757466  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.757836  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.757867  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.758185  310801 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:36:35.758409  310801 start.go:128] duration metric: took 25.547174356s to createHost
	I1205 20:36:35.758437  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.760535  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.760919  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.760950  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.761090  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.761312  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.761499  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.761662  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.761847  310801 main.go:141] libmachine: Using SSH client type: native
	I1205 20:36:35.762082  310801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.133 22 <nil> <nil>}
	I1205 20:36:35.762095  310801 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:36:35.859212  310801 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733430995.835523026
	
	I1205 20:36:35.859238  310801 fix.go:216] guest clock: 1733430995.835523026
	I1205 20:36:35.859249  310801 fix.go:229] Guest: 2024-12-05 20:36:35.835523026 +0000 UTC Remote: 2024-12-05 20:36:35.758424054 +0000 UTC m=+147.726301003 (delta=77.098972ms)
	I1205 20:36:35.859274  310801 fix.go:200] guest clock delta is within tolerance: 77.098972ms
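
Note: the fix.go lines above read the guest clock with "date +%s.%N" and accept the machine only if the host/guest delta stays within a tolerance. A small sketch of that comparison; the 2s tolerance here is an assumption for illustration, not minikube's actual threshold:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns the output of "date +%s.%N" (seconds.nanoseconds)
    // into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            nsec, _ = strconv.ParseInt(parts[1], 10, 64)
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        const tolerance = 2 * time.Second // assumed threshold, for illustration only
        guest, err := parseGuestClock("1733430995.835523026")
        if err != nil {
            panic(err)
        }
        delta := time.Since(guest)
        if delta < 0 {
            delta = -delta
        }
        fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
    }
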
	I1205 20:36:35.859282  310801 start.go:83] releasing machines lock for "ha-689539-m03", held for 25.648163663s
	I1205 20:36:35.859307  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:35.859602  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetIP
	I1205 20:36:35.862387  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.862741  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.862765  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.864694  310801 out.go:177] * Found network options:
	I1205 20:36:35.865935  310801 out.go:177]   - NO_PROXY=192.168.39.220,192.168.39.224
	W1205 20:36:35.866955  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 20:36:35.866981  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:36:35.867029  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:35.867701  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:35.867901  310801 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:36:35.868027  310801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:36:35.868079  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	W1205 20:36:35.868103  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	W1205 20:36:35.868132  310801 proxy.go:119] fail to check proxy env: Error ip not in block
	I1205 20:36:35.868211  310801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:36:35.868237  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:36:35.870846  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.870889  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.871236  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.871267  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.871290  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:35.871306  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:35.871412  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.871420  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:36:35.871631  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.871634  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:36:35.871849  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.871887  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:36:35.872025  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa Username:docker}
	I1205 20:36:35.872048  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa Username:docker}
	I1205 20:36:36.107172  310801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:36:36.113768  310801 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:36:36.113852  310801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:36:36.130072  310801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 20:36:36.130105  310801 start.go:495] detecting cgroup driver to use...
	I1205 20:36:36.130199  310801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:36:36.146210  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:36:36.161285  310801 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:36:36.161367  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:36:36.177064  310801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:36:36.191545  310801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:36:36.311400  310801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:36:36.466588  310801 docker.go:233] disabling docker service ...
	I1205 20:36:36.466685  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:36:36.482756  310801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:36:36.496706  310801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:36:36.652172  310801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:36:36.763760  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:36:36.778126  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:36:36.798464  310801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:36:36.798550  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.809701  310801 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:36:36.809789  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.821480  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.833057  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.844011  310801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:36:36.855643  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.866916  310801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:36:36.884661  310801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
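
Note: the sed chain above edits /etc/crio/crio.conf.d/02-crio.conf in place: it pins the pause image, forces the cgroupfs cgroup manager, and makes sure default_sysctls opens unprivileged low ports. A rough standard-library equivalent of those edits on a local copy of the file; the regexes mirror the sed patterns and are not CRI-O tooling:

    package main

    import (
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // Local copy for the sketch; the real file lives under /etc/crio/crio.conf.d/.
        const path = "02-crio.conf"
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        conf := string(data)

        // pause_image = "registry.k8s.io/pause:3.10"
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        // cgroup_manager = "cgroupfs"
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        // Allow unprivileged processes in pods to bind low ports.
        if !regexp.MustCompile(`(?m)^ *default_sysctls`).MatchString(conf) {
            conf += "\ndefault_sysctls = [\n  \"net.ipv4.ip_unprivileged_port_start=0\",\n]\n"
        }

        if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
            panic(err)
        }
        fmt.Println("rewrote", path)
    }
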
	I1205 20:36:36.895900  310801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:36:36.907780  310801 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 20:36:36.907872  310801 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 20:36:36.923847  310801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:36:36.935618  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:36:37.050068  310801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:36:37.145134  310801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:36:37.145210  310801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:36:37.149942  310801 start.go:563] Will wait 60s for crictl version
	I1205 20:36:37.150018  310801 ssh_runner.go:195] Run: which crictl
	I1205 20:36:37.153774  310801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:36:37.191365  310801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
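
Note: after restarting CRI-O, start.go waits up to 60s for the socket to appear and for crictl to respond. A simple sketch of waiting on a socket path with a deadline; the 500ms poll interval is an assumption:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the path exists or the timeout elapses.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out waiting for %s", path)
            }
            time.Sleep(500 * time.Millisecond) // assumed poll interval
        }
    }

    func main() {
        err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second)
        fmt.Println("socket ready:", err == nil, err)
    }
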
	I1205 20:36:37.191476  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:36:37.218944  310801 ssh_runner.go:195] Run: crio --version
	I1205 20:36:37.247248  310801 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:36:37.248847  310801 out.go:177]   - env NO_PROXY=192.168.39.220
	I1205 20:36:37.250408  310801 out.go:177]   - env NO_PROXY=192.168.39.220,192.168.39.224
	I1205 20:36:37.251670  310801 main.go:141] libmachine: (ha-689539-m03) Calling .GetIP
	I1205 20:36:37.254710  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:37.255219  310801 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:36:37.255255  310801 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:36:37.255473  310801 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:36:37.259811  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
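The bash one-liner keeps /etc/hosts idempotent: it filters out any previous host.minikube.internal entry, appends the fresh mapping, and sudo-copies the temp file back. The same rewrite, sketched in Go (hypothetical helper; path and IP mirror the log):

```go
// Sketch of the idempotent /etc/hosts rewrite the shell pipeline above
// performs: drop any stale "host.minikube.internal" line, append the
// current mapping, write the result back.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHost(hostsPath, ip, name string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(strings.TrimRight(line, " "), "\t"+name) {
			continue // remove the stale entry, like the grep -v above
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHost("/etc/hosts", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
```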
	I1205 20:36:37.272313  310801 mustload.go:65] Loading cluster: ha-689539
	I1205 20:36:37.272621  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:36:37.272965  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:36:37.273029  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:36:37.288738  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35053
	I1205 20:36:37.289258  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:36:37.289795  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:36:37.289824  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:36:37.290243  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:36:37.290461  310801 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:36:37.292309  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:36:37.292619  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:36:37.292658  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:36:37.308415  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34715
	I1205 20:36:37.308950  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:36:37.309550  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:36:37.309579  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:36:37.309955  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:36:37.310189  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:36:37.310389  310801 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539 for IP: 192.168.39.133
	I1205 20:36:37.310408  310801 certs.go:194] generating shared ca certs ...
	I1205 20:36:37.310434  310801 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:36:37.310698  310801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 20:36:37.310756  310801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 20:36:37.310770  310801 certs.go:256] generating profile certs ...
	I1205 20:36:37.310865  310801 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key
	I1205 20:36:37.310896  310801 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.5ed8c3bf
	I1205 20:36:37.310913  310801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.5ed8c3bf with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220 192.168.39.224 192.168.39.133 192.168.39.254]
	I1205 20:36:37.437144  310801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.5ed8c3bf ...
	I1205 20:36:37.437188  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.5ed8c3bf: {Name:mk0c5897cd83a4093b7a3399e7e587e00b7a5bae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:36:37.437391  310801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.5ed8c3bf ...
	I1205 20:36:37.437408  310801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.5ed8c3bf: {Name:mk1d8d484e615bf29a9b64d40295dea265ce443e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:36:37.437485  310801 certs.go:381] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.5ed8c3bf -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt
	I1205 20:36:37.437626  310801 certs.go:385] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.5ed8c3bf -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key
	I1205 20:36:37.437756  310801 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key
	I1205 20:36:37.437772  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:36:37.437788  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:36:37.437801  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:36:37.437813  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:36:37.437826  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 20:36:37.437841  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 20:36:37.437853  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 20:36:37.437864  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
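certs.go reuses the shared CA and client certs but has to mint a fresh API server serving certificate, because the SAN list must now include the third node's IP (192.168.39.133) alongside the service IP, localhost, the other control planes and the kube-vip VIP. An illustrative crypto/x509 sketch of such a cert; a throwaway CA is generated inline so the example stands alone, whereas minikube signs with the existing minikubeCA key (error handling elided for brevity):

```go
// Generate an API server cert whose IP SANs match the list in the log above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for the shared minikubeCA.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Serving cert with the SAN IPs seen in the log.
	var ips []net.IP
	for _, s := range []string{"10.96.0.1", "127.0.0.1", "10.0.0.1",
		"192.168.39.220", "192.168.39.224", "192.168.39.133", "192.168.39.254"} {
		ips = append(ips, net.ParseIP(s))
	}
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Println(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
```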
	I1205 20:36:37.437944  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 20:36:37.437979  310801 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 20:36:37.437990  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:36:37.438014  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 20:36:37.438035  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:36:37.438056  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 20:36:37.438094  310801 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:36:37.438120  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /usr/share/ca-certificates/3007652.pem
	I1205 20:36:37.438137  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:36:37.438154  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem -> /usr/share/ca-certificates/300765.pem
	I1205 20:36:37.438200  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:36:37.441695  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:36:37.442183  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:36:37.442215  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:36:37.442405  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:36:37.442622  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:36:37.442798  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:36:37.443004  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:36:37.518292  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I1205 20:36:37.523367  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I1205 20:36:37.534644  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I1205 20:36:37.538903  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I1205 20:36:37.550288  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I1205 20:36:37.554639  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I1205 20:36:37.564857  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I1205 20:36:37.569390  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I1205 20:36:37.579805  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I1205 20:36:37.583826  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I1205 20:36:37.594623  310801 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I1205 20:36:37.598518  310801 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I1205 20:36:37.609622  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:36:37.635232  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:36:37.659198  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:36:37.684613  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:36:37.709156  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I1205 20:36:37.734432  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 20:36:37.759134  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:36:37.782683  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:36:37.806069  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 20:36:37.829365  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:36:37.854671  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 20:36:37.877683  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I1205 20:36:37.895648  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I1205 20:36:37.911843  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I1205 20:36:37.928819  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I1205 20:36:37.945608  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I1205 20:36:37.961295  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I1205 20:36:37.977148  310801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
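For the pieces that must be byte-identical across control planes (sa.pub/sa.key, the front-proxy CA and the etcd CA), the existing node is stat'ed first, the files are pulled into memory, and only then pushed to the new machine; the profile certs are copied straight from the host. A generic "copy only if the destination is missing or a different size" sketch of that pattern, applied to local paths rather than an SSH session (not minikube's ssh_runner):

```go
// Hypothetical helper mirroring the stat-then-copy pattern above.
package main

import (
	"fmt"
	"io"
	"os"
)

func syncIfDifferent(src, dst string) error {
	si, err := os.Stat(src)
	if err != nil {
		return err
	}
	if di, err := os.Stat(dst); err == nil && di.Size() == si.Size() {
		return nil // destination present with matching size: skip, like the existence check in the log
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0600)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := syncIfDifferent("/var/lib/minikube/certs/sa.pub", "/tmp/sa.pub"); err != nil {
		fmt.Println(err)
	}
}
```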
	I1205 20:36:37.993888  310801 ssh_runner.go:195] Run: openssl version
	I1205 20:36:37.999493  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 20:36:38.010566  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 20:36:38.014911  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 20:36:38.014995  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 20:36:38.021306  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:36:38.033265  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:36:38.045021  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:36:38.049577  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:36:38.049655  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:36:38.055689  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:36:38.066840  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 20:36:38.077747  310801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 20:36:38.082720  310801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 20:36:38.082788  310801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 20:36:38.088581  310801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
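Each PEM under /usr/share/ca-certificates is then exposed through OpenSSL's lookup scheme: the subject hash printed by `openssl x509 -hash -noout` becomes a `<hash>.0` symlink in /etc/ssl/certs, which is exactly what the `ln -fs` commands above create (3ec20f2e.0, b5213941.0, 51391683.0). A small sketch of the same linking step, shelling out to openssl as the log does:

```go
// Illustrative: compute the OpenSSL subject hash of a certificate and link
// it into /etc/ssl/certs under <hash>.0, matching the ln -fs calls above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkBySubjectHash(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
```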
	I1205 20:36:38.099228  310801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:36:38.103604  310801 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 20:36:38.103672  310801 kubeadm.go:934] updating node {m03 192.168.39.133 8443 v1.31.2 crio true true} ...
	I1205 20:36:38.103798  310801 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-689539-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.133
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
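kubeadm.go renders the kubelet drop-in per node: the ExecStart line pins --hostname-override and --node-ip to the machine being joined (ha-689539-m03 / 192.168.39.133 here), while the rest comes from the profile config shown above. A text/template sketch of rendering just that ExecStart line; the template text is an assumption modeled on the fragment in the log, not minikube's actual template:

```go
// Hypothetical rendering of the node-specific kubelet ExecStart line.
package main

import (
	"os"
	"text/template"
)

const execStartTmpl = `ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet ` +
	`--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf ` +
	`--config=/var/lib/kubelet/config.yaml ` +
	`--hostname-override={{.NodeName}} ` +
	`--kubeconfig=/etc/kubernetes/kubelet.conf ` +
	`--node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(execStartTmpl))
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.31.2",
		"NodeName":          "ha-689539-m03",
		"NodeIP":            "192.168.39.133",
	})
}
```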
	I1205 20:36:38.103838  310801 kube-vip.go:115] generating kube-vip config ...
	I1205 20:36:38.103889  310801 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 20:36:38.119642  310801 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 20:36:38.119740  310801 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
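The generated manifest runs kube-vip as a static pod on every control plane: leader election (vip_leaderelection with the plndr-cp-lock lease) decides which node ARPs for 192.168.39.254, and lb_enable balances port 8443 across the real API servers, which is why "auto-enabling control-plane load-balancing" is logged just before. A trivial, hypothetical probe that the VIP answers on the API server port; address and port are taken from the manifest above:

```go
// Minimal check that the kube-vip address accepts TCP connections on 8443.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "192.168.39.254:8443", 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("VIP 192.168.39.254:8443 is accepting connections")
}
```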
	I1205 20:36:38.119812  310801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:36:38.130177  310801 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.31.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.31.2': No such file or directory
	
	Initiating transfer...
	I1205 20:36:38.130245  310801 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.31.2
	I1205 20:36:38.140746  310801 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm.sha256
	I1205 20:36:38.140746  310801 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
	I1205 20:36:38.140783  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubeadm -> /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 20:36:38.140794  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubectl -> /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 20:36:38.140777  310801 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubelet.sha256
	I1205 20:36:38.140857  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm
	I1205 20:36:38.140859  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl
	I1205 20:36:38.140888  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:36:38.158074  310801 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubelet -> /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 20:36:38.158135  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubectl': No such file or directory
	I1205 20:36:38.158086  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubeadm': No such file or directory
	I1205 20:36:38.158177  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubectl --> /var/lib/minikube/binaries/v1.31.2/kubectl (56381592 bytes)
	I1205 20:36:38.158206  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubeadm --> /var/lib/minikube/binaries/v1.31.2/kubeadm (58290328 bytes)
	I1205 20:36:38.158247  310801 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet
	I1205 20:36:38.186188  310801 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.31.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.31.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.31.2/kubelet': No such file or directory
	I1205 20:36:38.186252  310801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubelet --> /var/lib/minikube/binaries/v1.31.2/kubelet (76902744 bytes)
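Because /var/lib/minikube/binaries/v1.31.2 does not exist yet, binary.go resolves kubeadm, kubectl and kubelet from dl.k8s.io, using the published .sha256 file as the checksum source, and then scp's the cached binaries onto the node. A sketch of that download-and-verify step; the URLs follow the pattern in the log, the helper itself is hypothetical:

```go
// Download a release binary and verify it against its published SHA-256.
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

func main() {
	base := "https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubeadm"
	bin, err := fetch(base)
	if err != nil {
		fmt.Println(err)
		return
	}
	sum, err := fetch(base + ".sha256")
	if err != nil {
		fmt.Println(err)
		return
	}
	got := sha256.Sum256(bin)
	want := strings.Fields(string(sum))[0] // sha256 file holds the bare hash
	if hex.EncodeToString(got[:]) != want {
		fmt.Println("checksum mismatch, refusing to install")
		return
	}
	_ = os.WriteFile("kubeadm", bin, 0755)
	fmt.Println("kubeadm verified and written")
}
```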
	I1205 20:36:39.060124  310801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I1205 20:36:39.071107  310801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1205 20:36:39.088307  310801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:36:39.105414  310801 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 20:36:39.123515  310801 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 20:36:39.128382  310801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 20:36:39.141817  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:36:39.272056  310801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:36:39.288864  310801 host.go:66] Checking if "ha-689539" exists ...
	I1205 20:36:39.289220  310801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:36:39.289280  310801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:36:39.306323  310801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45603
	I1205 20:36:39.306810  310801 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:36:39.307385  310801 main.go:141] libmachine: Using API Version  1
	I1205 20:36:39.307405  310801 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:36:39.307730  310801 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:36:39.308000  310801 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:36:39.308176  310801 start.go:317] joinCluster: &{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:36:39.308320  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I1205 20:36:39.308347  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:36:39.311767  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:36:39.312246  310801 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:36:39.312274  310801 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:36:39.312449  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:36:39.312636  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:36:39.312767  310801 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:36:39.312941  310801 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:36:39.465515  310801 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:36:39.465587  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1ecy7b.k9yq24j2shqxopt1 --discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-689539-m03 --control-plane --apiserver-advertise-address=192.168.39.133 --apiserver-bind-port=8443"
	I1205 20:37:01.441014  310801 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 1ecy7b.k9yq24j2shqxopt1 --discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-689539-m03 --control-plane --apiserver-advertise-address=192.168.39.133 --apiserver-bind-port=8443": (21.975379722s)
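joinCluster is two-phase: `kubeadm token create --print-join-command --ttl=0` is run on the existing control plane to obtain a worker join line, and minikube then re-runs that line on m03 with the control-plane-specific flags appended (--control-plane, --apiserver-advertise-address, --apiserver-bind-port, plus the CRI socket and node name). A sketch of the append-and-run step, assuming the printed join command is already in hand; token and hash are placeholders:

```go
// Hypothetical: extend a worker join command into a control-plane join and
// execute it, mirroring the flags visible in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Output of `kubeadm token create --print-join-command` (placeholder token/hash).
	joinCmd := "kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"

	extra := []string{
		"--ignore-preflight-errors=all",
		"--cri-socket unix:///var/run/crio/crio.sock",
		"--node-name=ha-689539-m03",
		"--control-plane",
		"--apiserver-advertise-address=192.168.39.133",
		"--apiserver-bind-port=8443",
	}
	full := joinCmd + " " + strings.Join(extra, " ")

	out, err := exec.Command("/bin/bash", "-c", "sudo "+full).CombinedOutput()
	fmt.Println(string(out))
	if err != nil {
		fmt.Println("join failed:", err)
	}
}
```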
	I1205 20:37:01.441134  310801 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I1205 20:37:02.017063  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-689539-m03 minikube.k8s.io/updated_at=2024_12_05T20_37_02_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=ha-689539 minikube.k8s.io/primary=false
	I1205 20:37:02.122818  310801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-689539-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I1205 20:37:02.233408  310801 start.go:319] duration metric: took 22.92521337s to joinCluster
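Once the join returns, the new node receives the minikube.k8s.io/* bookkeeping labels with primary=false, and the control-plane NoSchedule taint is removed so it can also run workloads (the trailing `-` on the taint command is kubectl's removal syntax). The same two mutations expressed through client-go rather than kubectl, as a hedged sketch; clientset construction from the node's kubeconfig is an assumption:

```go
// Illustrative client-go equivalent of the label/taint commands above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	node, err := cs.CoreV1().Nodes().Get(ctx, "ha-689539-m03", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if node.Labels == nil {
		node.Labels = map[string]string{}
	}
	// Label the node the way the kubectl label call does.
	node.Labels["minikube.k8s.io/name"] = "ha-689539"
	node.Labels["minikube.k8s.io/primary"] = "false"

	// Drop the control-plane NoSchedule taint (the trailing "-" in kubectl taint).
	var kept []corev1.Taint
	for _, t := range node.Spec.Taints {
		if t.Key == "node-role.kubernetes.io/control-plane" && t.Effect == corev1.TaintEffectNoSchedule {
			continue
		}
		kept = append(kept, t)
	}
	node.Spec.Taints = kept

	if _, err := cs.CoreV1().Nodes().Update(ctx, node, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("node labeled and untainted")
}
```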
	I1205 20:37:02.233514  310801 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 20:37:02.233929  310801 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:37:02.235271  310801 out.go:177] * Verifying Kubernetes components...
	I1205 20:37:02.236630  310801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:37:02.508423  310801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:37:02.527064  310801 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:37:02.527473  310801 kapi.go:59] client config for ha-689539: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.crt", KeyFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key", CAFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W1205 20:37:02.527594  310801 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.220:8443
	I1205 20:37:02.527913  310801 node_ready.go:35] waiting up to 6m0s for node "ha-689539-m03" to be "Ready" ...
	I1205 20:37:02.528026  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:02.528040  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:02.528051  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:02.528056  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:02.557537  310801 round_trippers.go:574] Response Status: 200 OK in 29 milliseconds
	I1205 20:37:03.028186  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:03.028214  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:03.028223  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:03.028228  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:03.031783  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:03.528844  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:03.528876  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:03.528889  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:03.528897  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:03.532449  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:04.028344  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:04.028374  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:04.028385  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:04.028391  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:04.031602  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:04.528319  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:04.528352  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:04.528375  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:04.528382  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:04.532891  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:04.534060  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:05.028293  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:05.028328  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:05.028339  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:05.028344  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:05.032338  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:05.529271  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:05.529311  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:05.529323  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:05.529330  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:05.533411  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:06.028510  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:06.028536  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:06.028545  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:06.028550  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:06.032362  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:06.529188  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:06.529215  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:06.529224  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:06.529229  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:06.533150  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:07.029082  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:07.029108  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:07.029117  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:07.029120  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:07.033089  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:07.033768  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:07.528440  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:07.528471  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:07.528481  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:07.528485  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:07.531953  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:08.028337  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:08.028382  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:08.028395  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:08.028399  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:08.031906  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:08.528836  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:08.528864  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:08.528876  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:08.528881  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:08.532443  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:09.028243  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:09.028270  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:09.028278  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:09.028286  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:09.031717  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:09.528911  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:09.528939  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:09.528948  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:09.528953  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:09.532309  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:09.532990  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:10.028349  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:10.028377  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:10.028386  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:10.028390  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:10.031930  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:10.528611  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:10.528635  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:10.528645  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:10.528650  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:10.532023  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:11.028888  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:11.028914  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:11.028923  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:11.028928  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:11.032482  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:11.528496  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:11.528521  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:11.528530  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:11.528534  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:11.532719  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:11.533217  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:12.028518  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:12.028550  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:12.028559  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:12.028562  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:12.031616  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:12.528837  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:12.528864  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:12.528873  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:12.528876  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:12.532925  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:13.028348  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:13.028374  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:13.028382  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:13.028385  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:13.031413  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:13.528247  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:13.528272  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:13.528282  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:13.528289  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:13.531837  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:14.028958  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:14.028983  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:14.028991  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:14.028994  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:14.032387  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:14.032980  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:14.528243  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:14.528268  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:14.528276  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:14.528281  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:14.533135  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:15.029156  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:15.029181  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:15.029190  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:15.029194  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:15.032772  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:15.528703  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:15.528727  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:15.528736  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:15.528740  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:15.532084  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:16.029136  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:16.029163  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:16.029172  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:16.029177  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:16.032419  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:16.033160  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:16.528509  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:16.528535  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:16.528546  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:16.528553  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:16.532163  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:17.028228  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:17.028256  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:17.028265  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:17.028270  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:17.031611  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:17.528262  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:17.528285  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:17.528294  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:17.528298  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:17.532186  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:18.028484  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:18.028590  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:18.028610  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:18.028619  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:18.032661  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:18.033298  310801 node_ready.go:53] node "ha-689539-m03" has status "Ready":"False"
	I1205 20:37:18.528576  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:18.528603  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:18.528612  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:18.528622  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:18.531605  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.028544  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:19.028570  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.028579  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.028583  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.031945  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:19.528716  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:19.528741  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.528752  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.528758  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.532114  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:19.532722  310801 node_ready.go:49] node "ha-689539-m03" has status "Ready":"True"
	I1205 20:37:19.532746  310801 node_ready.go:38] duration metric: took 17.004806597s for node "ha-689539-m03" to be "Ready" ...
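node_ready.go polls GET /api/v1/nodes/ha-689539-m03 roughly every 500ms until the NodeReady condition flips to True (about 17s here), under a 6m ceiling. The equivalent wait expressed with client-go's polling helper, as a sketch; the clientset setup is assumed, as in the previous example:

```go
// Hypothetical re-expression of the readiness wait in the log.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "ha-689539-m03", metav1.GetOptions{})
			if err != nil {
				return false, nil // transient errors: keep polling
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Println("node never became Ready:", err)
		return
	}
	fmt.Println("ha-689539-m03 is Ready")
}
```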
	I1205 20:37:19.532759  310801 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:37:19.532848  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:37:19.532862  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.532873  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.532877  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.538433  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:37:19.545193  310801 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.545310  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-4ln9l
	I1205 20:37:19.545322  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.545335  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.545343  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.548548  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:19.549181  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:19.549197  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.549208  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.549214  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.551745  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.552315  310801 pod_ready.go:93] pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:19.552336  310801 pod_ready.go:82] duration metric: took 7.114081ms for pod "coredns-7c65d6cfc9-4ln9l" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.552347  310801 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.552426  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/coredns-7c65d6cfc9-6qhhf
	I1205 20:37:19.552436  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.552443  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.552449  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.555044  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.555688  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:19.555703  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.555714  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.555719  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.558507  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.558964  310801 pod_ready.go:93] pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:19.558984  310801 pod_ready.go:82] duration metric: took 6.630508ms for pod "coredns-7c65d6cfc9-6qhhf" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.558996  310801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.559064  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539
	I1205 20:37:19.559075  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.559086  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.559093  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.561702  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.562346  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:19.562362  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.562373  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.562379  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.564859  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.565270  310801 pod_ready.go:93] pod "etcd-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:19.565289  310801 pod_ready.go:82] duration metric: took 6.285995ms for pod "etcd-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.565301  310801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.565364  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539-m02
	I1205 20:37:19.565376  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.565386  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.565394  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.567843  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.568351  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:19.568369  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.568381  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.568386  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.570730  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:19.571216  310801 pod_ready.go:93] pod "etcd-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:19.571233  310801 pod_ready.go:82] duration metric: took 5.925226ms for pod "etcd-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.571242  310801 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.729689  310801 request.go:632] Waited for 158.375356ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539-m03
	I1205 20:37:19.729775  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/etcd-ha-689539-m03
	I1205 20:37:19.729781  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.729791  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.729798  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.733549  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:19.929796  310801 request.go:632] Waited for 195.378991ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:19.929883  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:19.929889  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:19.929915  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:19.929920  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:19.933398  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:19.934088  310801 pod_ready.go:93] pod "etcd-ha-689539-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:19.934113  310801 pod_ready.go:82] duration metric: took 362.864968ms for pod "etcd-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:19.934133  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:20.129093  310801 request.go:632] Waited for 194.866664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539
	I1205 20:37:20.129174  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539
	I1205 20:37:20.129180  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:20.129188  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:20.129192  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:20.132632  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:20.329356  310801 request.go:632] Waited for 195.935231ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:20.329441  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:20.329451  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:20.329463  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:20.329476  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:20.333292  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:20.333939  310801 pod_ready.go:93] pod "kube-apiserver-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:20.333972  310801 pod_ready.go:82] duration metric: took 399.826342ms for pod "kube-apiserver-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:20.333988  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:20.529058  310801 request.go:632] Waited for 194.978446ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m02
	I1205 20:37:20.529147  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m02
	I1205 20:37:20.529166  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:20.529197  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:20.529204  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:20.532832  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:20.729074  310801 request.go:632] Waited for 195.37241ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:20.729139  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:20.729144  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:20.729153  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:20.729156  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:20.733037  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:20.733831  310801 pod_ready.go:93] pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:20.733861  310801 pod_ready.go:82] duration metric: took 399.862982ms for pod "kube-apiserver-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:20.733880  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:20.928790  310801 request.go:632] Waited for 194.758856ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m03
	I1205 20:37:20.928868  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-689539-m03
	I1205 20:37:20.928876  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:20.928884  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:20.928894  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:20.931768  310801 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I1205 20:37:21.128920  310801 request.go:632] Waited for 196.30741ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:21.129013  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:21.129018  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:21.129026  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:21.129030  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:21.132989  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:21.133733  310801 pod_ready.go:93] pod "kube-apiserver-ha-689539-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:21.133764  310801 pod_ready.go:82] duration metric: took 399.87672ms for pod "kube-apiserver-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:21.133777  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:21.329719  310801 request.go:632] Waited for 195.840899ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539
	I1205 20:37:21.329822  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539
	I1205 20:37:21.329829  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:21.329840  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:21.329848  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:21.335472  310801 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I1205 20:37:21.529593  310801 request.go:632] Waited for 193.3652ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:21.529688  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:21.529700  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:21.529710  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:21.529721  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:21.533118  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:21.533743  310801 pod_ready.go:93] pod "kube-controller-manager-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:21.533773  310801 pod_ready.go:82] duration metric: took 399.989891ms for pod "kube-controller-manager-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:21.533788  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:21.729770  310801 request.go:632] Waited for 195.887392ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m02
	I1205 20:37:21.729855  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m02
	I1205 20:37:21.729863  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:21.729871  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:21.729877  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:21.733541  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:21.929705  310801 request.go:632] Waited for 195.397002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:21.929774  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:21.929779  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:21.929787  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:21.929792  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:21.933945  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:21.935117  310801 pod_ready.go:93] pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:21.935147  310801 pod_ready.go:82] duration metric: took 401.346008ms for pod "kube-controller-manager-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:21.935163  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:22.129158  310801 request.go:632] Waited for 193.90126ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m03
	I1205 20:37:22.129263  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-689539-m03
	I1205 20:37:22.129281  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:22.129291  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:22.129295  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:22.132774  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:22.329309  310801 request.go:632] Waited for 195.820597ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:22.329371  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:22.329397  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:22.329412  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:22.329417  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:22.332841  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:22.336218  310801 pod_ready.go:93] pod "kube-controller-manager-ha-689539-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:22.336243  310801 pod_ready.go:82] duration metric: took 401.071031ms for pod "kube-controller-manager-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:22.336259  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9tslx" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:22.528770  310801 request.go:632] Waited for 192.411741ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tslx
	I1205 20:37:22.528833  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9tslx
	I1205 20:37:22.528838  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:22.528846  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:22.528850  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:22.531900  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:22.729073  310801 request.go:632] Waited for 196.313572ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:22.729186  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:22.729196  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:22.729206  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:22.729212  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:22.732421  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:22.733074  310801 pod_ready.go:93] pod "kube-proxy-9tslx" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:22.733099  310801 pod_ready.go:82] duration metric: took 396.833211ms for pod "kube-proxy-9tslx" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:22.733111  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-dktwc" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:22.929342  310801 request.go:632] Waited for 196.122694ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dktwc
	I1205 20:37:22.929410  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-dktwc
	I1205 20:37:22.929416  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:22.929425  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:22.929430  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:22.932878  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:23.129758  310801 request.go:632] Waited for 196.113609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:23.129841  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:23.129849  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:23.129861  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:23.129874  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:23.133246  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:23.133786  310801 pod_ready.go:93] pod "kube-proxy-dktwc" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:23.133805  310801 pod_ready.go:82] duration metric: took 400.688784ms for pod "kube-proxy-dktwc" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:23.133815  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-x2grl" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:23.329685  310801 request.go:632] Waited for 195.763713ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2grl
	I1205 20:37:23.329769  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-proxy-x2grl
	I1205 20:37:23.329779  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:23.329788  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:23.329795  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:23.333599  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:23.528890  310801 request.go:632] Waited for 194.302329ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:23.528951  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:23.528955  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:23.528966  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:23.528973  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:23.533840  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:23.534667  310801 pod_ready.go:93] pod "kube-proxy-x2grl" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:23.534691  310801 pod_ready.go:82] duration metric: took 400.868432ms for pod "kube-proxy-x2grl" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:23.534705  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:23.728815  310801 request.go:632] Waited for 194.018306ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539
	I1205 20:37:23.728883  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539
	I1205 20:37:23.728888  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:23.728896  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:23.728900  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:23.732452  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:23.929580  310801 request.go:632] Waited for 196.394135ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:23.929653  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539
	I1205 20:37:23.929659  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:23.929667  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:23.929672  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:23.933364  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:23.934147  310801 pod_ready.go:93] pod "kube-scheduler-ha-689539" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:23.934174  310801 pod_ready.go:82] duration metric: took 399.459723ms for pod "kube-scheduler-ha-689539" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:23.934191  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:24.129685  310801 request.go:632] Waited for 195.380858ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m02
	I1205 20:37:24.129776  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m02
	I1205 20:37:24.129789  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.129800  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.129811  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.133305  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:24.329438  310801 request.go:632] Waited for 195.320628ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:24.329517  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m02
	I1205 20:37:24.329525  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.329544  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.329550  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.333177  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:24.333763  310801 pod_ready.go:93] pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:24.333790  310801 pod_ready.go:82] duration metric: took 399.589908ms for pod "kube-scheduler-ha-689539-m02" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:24.333806  310801 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:24.528866  310801 request.go:632] Waited for 194.951078ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m03
	I1205 20:37:24.528969  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-689539-m03
	I1205 20:37:24.528982  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.528997  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.529004  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.532632  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:24.729734  310801 request.go:632] Waited for 196.398947ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:24.729824  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes/ha-689539-m03
	I1205 20:37:24.729835  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.729847  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.729855  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.733450  310801 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I1205 20:37:24.734057  310801 pod_ready.go:93] pod "kube-scheduler-ha-689539-m03" in "kube-system" namespace has status "Ready":"True"
	I1205 20:37:24.734085  310801 pod_ready.go:82] duration metric: took 400.271075ms for pod "kube-scheduler-ha-689539-m03" in "kube-system" namespace to be "Ready" ...
	I1205 20:37:24.734104  310801 pod_ready.go:39] duration metric: took 5.201330389s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 20:37:24.734128  310801 api_server.go:52] waiting for apiserver process to appear ...
	I1205 20:37:24.734202  310801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 20:37:24.752010  310801 api_server.go:72] duration metric: took 22.518451158s to wait for apiserver process to appear ...
	I1205 20:37:24.752054  310801 api_server.go:88] waiting for apiserver healthz status ...
	I1205 20:37:24.752086  310801 api_server.go:253] Checking apiserver healthz at https://192.168.39.220:8443/healthz ...
	I1205 20:37:24.756435  310801 api_server.go:279] https://192.168.39.220:8443/healthz returned 200:
	ok
	I1205 20:37:24.756538  310801 round_trippers.go:463] GET https://192.168.39.220:8443/version
	I1205 20:37:24.756551  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.756561  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.756569  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.757464  310801 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I1205 20:37:24.757533  310801 api_server.go:141] control plane version: v1.31.2
	I1205 20:37:24.757548  310801 api_server.go:131] duration metric: took 5.486922ms to wait for apiserver health ...
	I1205 20:37:24.757559  310801 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 20:37:24.928965  310801 request.go:632] Waited for 171.296323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:37:24.929035  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:37:24.929040  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:24.929049  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:24.929054  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:24.935151  310801 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 20:37:24.941691  310801 system_pods.go:59] 24 kube-system pods found
	I1205 20:37:24.941733  310801 system_pods.go:61] "coredns-7c65d6cfc9-4ln9l" [f86a233b-c3f8-416b-ac76-f18dac2a1a2c] Running
	I1205 20:37:24.941739  310801 system_pods.go:61] "coredns-7c65d6cfc9-6qhhf" [4ffff988-65eb-4585-8ce4-de4df28c6b82] Running
	I1205 20:37:24.941742  310801 system_pods.go:61] "etcd-ha-689539" [f8de63bf-a7cf-431d-bd57-ec91b43c6ce3] Running
	I1205 20:37:24.941746  310801 system_pods.go:61] "etcd-ha-689539-m02" [a0336d41-b57f-414b-aa98-2540bdde7ca0] Running
	I1205 20:37:24.941752  310801 system_pods.go:61] "etcd-ha-689539-m03" [5f491cae-394b-445a-9c1a-f4c144debab9] Running
	I1205 20:37:24.941756  310801 system_pods.go:61] "kindnet-62qw6" [9f0039aa-d5e2-49b9-adb4-ad93c96d22f0] Running
	I1205 20:37:24.941759  310801 system_pods.go:61] "kindnet-8kgs2" [d268fa7f-9d0f-400e-88ff-4acc47d4b6a0] Running
	I1205 20:37:24.941763  310801 system_pods.go:61] "kindnet-b7bf2" [ea96240c-48bf-4f92-b12c-f8e623a59784] Running
	I1205 20:37:24.941766  310801 system_pods.go:61] "kube-apiserver-ha-689539" [ecbcba0b-10ce-4bd6-84f6-8b46c3d99ad6] Running
	I1205 20:37:24.941770  310801 system_pods.go:61] "kube-apiserver-ha-689539-m02" [0c0d9613-c605-4e61-b778-c5aefa5919e9] Running
	I1205 20:37:24.941815  310801 system_pods.go:61] "kube-apiserver-ha-689539-m03" [35037a19-9a1e-4ccb-aeb6-bd098910d94d] Running
	I1205 20:37:24.941833  310801 system_pods.go:61] "kube-controller-manager-ha-689539" [859c6551-f504-4093-a730-2ba8f127e3e7] Running
	I1205 20:37:24.941841  310801 system_pods.go:61] "kube-controller-manager-ha-689539-m02" [0b119866-007c-4c4e-abfa-a38405b85cc9] Running
	I1205 20:37:24.941847  310801 system_pods.go:61] "kube-controller-manager-ha-689539-m03" [cc37de8a-b988-43a4-9dbe-18dd127bc38b] Running
	I1205 20:37:24.941854  310801 system_pods.go:61] "kube-proxy-9tslx" [3d107dc4-2d8c-4e0d-aafc-5229161537df] Running
	I1205 20:37:24.941860  310801 system_pods.go:61] "kube-proxy-dktwc" [5facc855-07f1-46f3-9862-a8c6ac01897c] Running
	I1205 20:37:24.941869  310801 system_pods.go:61] "kube-proxy-x2grl" [20dd0c16-858c-4d07-8305-ffedb52a4ee1] Running
	I1205 20:37:24.941875  310801 system_pods.go:61] "kube-scheduler-ha-689539" [2ba99954-c00c-4fa6-af5d-6d4725fa051a] Running
	I1205 20:37:24.941883  310801 system_pods.go:61] "kube-scheduler-ha-689539-m02" [d1ad2b21-b52c-47dd-ab09-2368ffeb3c7e] Running
	I1205 20:37:24.941889  310801 system_pods.go:61] "kube-scheduler-ha-689539-m03" [fc913aa4-561d-4466-b7c3-acd3d23ffa1a] Running
	I1205 20:37:24.941915  310801 system_pods.go:61] "kube-vip-ha-689539" [345f79e6-90ea-47f8-9e7f-c461a1143ba0] Running
	I1205 20:37:24.941922  310801 system_pods.go:61] "kube-vip-ha-689539-m02" [265c4a3f-0e44-43fd-bcee-35513e8e2525] Running
	I1205 20:37:24.941930  310801 system_pods.go:61] "kube-vip-ha-689539-m03" [c37018e8-e3e3-4c9e-aa57-64571b08be92] Running
	I1205 20:37:24.941939  310801 system_pods.go:61] "storage-provisioner" [e2a03e66-0718-48a3-9658-f70118ce6cae] Running
	I1205 20:37:24.941947  310801 system_pods.go:74] duration metric: took 184.37937ms to wait for pod list to return data ...
	I1205 20:37:24.941962  310801 default_sa.go:34] waiting for default service account to be created ...
	I1205 20:37:25.129425  310801 request.go:632] Waited for 187.3488ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:37:25.129501  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/default/serviceaccounts
	I1205 20:37:25.129507  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:25.129515  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:25.129519  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:25.133730  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:25.133919  310801 default_sa.go:45] found service account: "default"
	I1205 20:37:25.133941  310801 default_sa.go:55] duration metric: took 191.967731ms for default service account to be created ...
	I1205 20:37:25.133958  310801 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 20:37:25.329286  310801 request.go:632] Waited for 195.223367ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:37:25.329372  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/namespaces/kube-system/pods
	I1205 20:37:25.329380  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:25.329392  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:25.329406  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:25.335635  310801 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I1205 20:37:25.341932  310801 system_pods.go:86] 24 kube-system pods found
	I1205 20:37:25.341974  310801 system_pods.go:89] "coredns-7c65d6cfc9-4ln9l" [f86a233b-c3f8-416b-ac76-f18dac2a1a2c] Running
	I1205 20:37:25.341980  310801 system_pods.go:89] "coredns-7c65d6cfc9-6qhhf" [4ffff988-65eb-4585-8ce4-de4df28c6b82] Running
	I1205 20:37:25.341986  310801 system_pods.go:89] "etcd-ha-689539" [f8de63bf-a7cf-431d-bd57-ec91b43c6ce3] Running
	I1205 20:37:25.341990  310801 system_pods.go:89] "etcd-ha-689539-m02" [a0336d41-b57f-414b-aa98-2540bdde7ca0] Running
	I1205 20:37:25.341993  310801 system_pods.go:89] "etcd-ha-689539-m03" [5f491cae-394b-445a-9c1a-f4c144debab9] Running
	I1205 20:37:25.341996  310801 system_pods.go:89] "kindnet-62qw6" [9f0039aa-d5e2-49b9-adb4-ad93c96d22f0] Running
	I1205 20:37:25.342000  310801 system_pods.go:89] "kindnet-8kgs2" [d268fa7f-9d0f-400e-88ff-4acc47d4b6a0] Running
	I1205 20:37:25.342003  310801 system_pods.go:89] "kindnet-b7bf2" [ea96240c-48bf-4f92-b12c-f8e623a59784] Running
	I1205 20:37:25.342008  310801 system_pods.go:89] "kube-apiserver-ha-689539" [ecbcba0b-10ce-4bd6-84f6-8b46c3d99ad6] Running
	I1205 20:37:25.342011  310801 system_pods.go:89] "kube-apiserver-ha-689539-m02" [0c0d9613-c605-4e61-b778-c5aefa5919e9] Running
	I1205 20:37:25.342015  310801 system_pods.go:89] "kube-apiserver-ha-689539-m03" [35037a19-9a1e-4ccb-aeb6-bd098910d94d] Running
	I1205 20:37:25.342018  310801 system_pods.go:89] "kube-controller-manager-ha-689539" [859c6551-f504-4093-a730-2ba8f127e3e7] Running
	I1205 20:37:25.342022  310801 system_pods.go:89] "kube-controller-manager-ha-689539-m02" [0b119866-007c-4c4e-abfa-a38405b85cc9] Running
	I1205 20:37:25.342025  310801 system_pods.go:89] "kube-controller-manager-ha-689539-m03" [cc37de8a-b988-43a4-9dbe-18dd127bc38b] Running
	I1205 20:37:25.342029  310801 system_pods.go:89] "kube-proxy-9tslx" [3d107dc4-2d8c-4e0d-aafc-5229161537df] Running
	I1205 20:37:25.342035  310801 system_pods.go:89] "kube-proxy-dktwc" [5facc855-07f1-46f3-9862-a8c6ac01897c] Running
	I1205 20:37:25.342039  310801 system_pods.go:89] "kube-proxy-x2grl" [20dd0c16-858c-4d07-8305-ffedb52a4ee1] Running
	I1205 20:37:25.342043  310801 system_pods.go:89] "kube-scheduler-ha-689539" [2ba99954-c00c-4fa6-af5d-6d4725fa051a] Running
	I1205 20:37:25.342047  310801 system_pods.go:89] "kube-scheduler-ha-689539-m02" [d1ad2b21-b52c-47dd-ab09-2368ffeb3c7e] Running
	I1205 20:37:25.342053  310801 system_pods.go:89] "kube-scheduler-ha-689539-m03" [fc913aa4-561d-4466-b7c3-acd3d23ffa1a] Running
	I1205 20:37:25.342056  310801 system_pods.go:89] "kube-vip-ha-689539" [345f79e6-90ea-47f8-9e7f-c461a1143ba0] Running
	I1205 20:37:25.342059  310801 system_pods.go:89] "kube-vip-ha-689539-m02" [265c4a3f-0e44-43fd-bcee-35513e8e2525] Running
	I1205 20:37:25.342063  310801 system_pods.go:89] "kube-vip-ha-689539-m03" [c37018e8-e3e3-4c9e-aa57-64571b08be92] Running
	I1205 20:37:25.342067  310801 system_pods.go:89] "storage-provisioner" [e2a03e66-0718-48a3-9658-f70118ce6cae] Running
	I1205 20:37:25.342077  310801 system_pods.go:126] duration metric: took 208.11212ms to wait for k8s-apps to be running ...
	I1205 20:37:25.342087  310801 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 20:37:25.342141  310801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 20:37:25.359925  310801 system_svc.go:56] duration metric: took 17.820163ms WaitForService to wait for kubelet
	I1205 20:37:25.359969  310801 kubeadm.go:582] duration metric: took 23.126420152s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:37:25.359998  310801 node_conditions.go:102] verifying NodePressure condition ...
	I1205 20:37:25.529464  310801 request.go:632] Waited for 169.34708ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.220:8443/api/v1/nodes
	I1205 20:37:25.529531  310801 round_trippers.go:463] GET https://192.168.39.220:8443/api/v1/nodes
	I1205 20:37:25.529543  310801 round_trippers.go:469] Request Headers:
	I1205 20:37:25.529553  310801 round_trippers.go:473]     Accept: application/json, */*
	I1205 20:37:25.529558  310801 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I1205 20:37:25.534297  310801 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I1205 20:37:25.535249  310801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:37:25.535281  310801 node_conditions.go:123] node cpu capacity is 2
	I1205 20:37:25.535294  310801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:37:25.535298  310801 node_conditions.go:123] node cpu capacity is 2
	I1205 20:37:25.535302  310801 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 20:37:25.535306  310801 node_conditions.go:123] node cpu capacity is 2
	I1205 20:37:25.535318  310801 node_conditions.go:105] duration metric: took 175.313275ms to run NodePressure ...
	I1205 20:37:25.535339  310801 start.go:241] waiting for startup goroutines ...
	I1205 20:37:25.535367  310801 start.go:255] writing updated cluster config ...
	I1205 20:37:25.535725  310801 ssh_runner.go:195] Run: rm -f paused
	I1205 20:37:25.590118  310801 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 20:37:25.592310  310801 out.go:177] * Done! kubectl is now configured to use "ha-689539" cluster and "default" namespace by default
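	The log above traces minikube's readiness loop: for each system-critical pod it GETs the pod, checks whether the Ready condition is True, GETs the owning node, and repeats for the next pod, backing off when the client-side throttler delays a request, all within a 6m0s budget per pod, before finally probing /healthz and the kube-system pod list. The following is a minimal client-go sketch of that same wait-for-Ready pattern, not minikube's actual implementation; the kubeconfig path, namespace, and pod name are assumptions for illustration.

	// Illustrative sketch (assumed names/paths); mirrors the "waiting up to 6m0s
	// for pod ... to be Ready" checks recorded in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's Ready condition is True, i.e. the
	// same predicate behind `has status "Ready":"True"` in the log.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		// Assumed kubeconfig path for the example.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Poll every 2s, up to the same 6-minute budget used in the log.
		err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				p, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-ha-689539", metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				return podReady(p), nil
			})
		fmt.Println("pod ready:", err == nil)
	}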
	
	
	==> CRI-O <==
	Dec 05 20:41:24 ha-689539 crio[658]: time="2024-12-05 20:41:24.922389148Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431284922361265,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9320496e-e872-4dd9-8350-1ba6d234d40b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:24 ha-689539 crio[658]: time="2024-12-05 20:41:24.922957591Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=776597bb-1675-4c43-9488-e54450ae984e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:24 ha-689539 crio[658]: time="2024-12-05 20:41:24.923040706Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=776597bb-1675-4c43-9488-e54450ae984e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:24 ha-689539 crio[658]: time="2024-12-05 20:41:24.923383298Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77e0f8ba49070d29bec8e5d622dd7ab13e23f105aaab0de1a5a92c01e16ed731,PodSandboxId:2a35c5864db38de4db2df9661fc907cd58533506ed2900ff55721ee9ef7e8073,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733431049357327660,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qjqvr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30f51118-fa9b-418f-a3a5-02a74107c7de,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05a6cfcd7e9ee9f891eb88ed5524553b60ff7eea407bf604b4bb647571d2d6bc,PodSandboxId:984c3b3f8fe032def0136810febfe8341f9285ab30c3ce2d6df35ec561964918,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910896086688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4ln9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f86a233b-c3f8-416b-ac76-f18dac2a1a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e8c78df0a6db76a9459104d39c6ef10926413179a5c4d311a9731258ab0f02,PodSandboxId:d7a154f9d8020a9378296ea0b16287d3fd54fb83d94bd93df469f8808d3670fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430910806734926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: e2a03e66-0718-48a3-9658-f70118ce6cae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6007ba446b77339e212a73381a8f38a179f608f925ff7a5ea1b5d82ae33932a,PodSandboxId:a344cd0e9a251c2b865c2838b5e161875e6d61340c124e5e6ddd88fdb8512dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910843663896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qhhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffff988-65
eb-4585-8ce4-de4df28c6b82,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0809642e9449ba90a816abcee2bd42a09b6b67ed76a448e2d048ccaf59218f61,PodSandboxId:faeac762b16891707c284f00eddfc16a831b7524637e5dbbc933c30cd8b2fe8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733430899010755558,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-62qw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0039aa-d5e2-49b9-adb4-ad93c96d22f0,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a16a5003f86348e862f9550da656af6282a73ec42e356d33277fd89aaf930df,PodSandboxId:6bc6d79587a62ca21788fe4de52bc6e9a4f3255de91b1f48365e7bc08408cac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430894
348055011,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tslx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d107dc4-2d8c-4e0d-aafc-5229161537df,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4431afbd69d99866cc27a419bb59862ef9f5cfdbb9bd9cd5bc4fc820eba9a01b,PodSandboxId:ae658c6069b4418ff55871310f01c6a0b5b0fe6e016403e3ff64bb02e0ac6a27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173343088582
7328958,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d8d33a00a36d98ae4f02477c2f0ef8f,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9238618cdfeaa8f5cda3dadc37d1e8157c6e781e2f421e149ea764a7138e42,PodSandboxId:110f95e5235dfc7dbce02b5aa1a8191d469ee5d3abffc5bfebf7a11f52ae34be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430883266472620,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3b0ba2fc46021faad87f06edada7a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2033f56968a9f0c2e15d272c7bf15a278a24d2f148a58cc7194c50096b822668,PodSandboxId:a6058ddd3ee58967eb32bd94a306e465b678afcb374ea3f93649506453556476,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430883263419187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9de31551106f5b54c143b52a0ba8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2211f15ae3cd3d69be3df3528c3d849f2b50615ce67935cdb97567a8a5fe19,PodSandboxId:f650305b876ca41a574dc76685713fd76500b7b3c5f17dbc66cdcd85cde99e34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430883237990702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91307b238b7c07f706a4534ff984ab88,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a056592a0f933a18207a4ce68694f30ffbd9c7502e8e1a5717c67850246dfb2,PodSandboxId:6d5d1a132984432f53f03c63a07dbd8083fa259a41160af40e8f0202f47d21ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430883178338000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf9467cd4c8887ece77367c75de1e85,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=776597bb-1675-4c43-9488-e54450ae984e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:24 ha-689539 crio[658]: time="2024-12-05 20:41:24.964458280Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd65685e-8d03-44ad-afe0-124a980412fc name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:24 ha-689539 crio[658]: time="2024-12-05 20:41:24.964555695Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd65685e-8d03-44ad-afe0-124a980412fc name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:24 ha-689539 crio[658]: time="2024-12-05 20:41:24.965490227Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a7acc1b7-ee36-488b-8f31-7a64bee4aab3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:24 ha-689539 crio[658]: time="2024-12-05 20:41:24.965936029Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431284965912208,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a7acc1b7-ee36-488b-8f31-7a64bee4aab3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:24 ha-689539 crio[658]: time="2024-12-05 20:41:24.966431016Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8185c58b-b61a-4f7b-9b96-e570d445b49a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:24 ha-689539 crio[658]: time="2024-12-05 20:41:24.966487728Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8185c58b-b61a-4f7b-9b96-e570d445b49a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:24 ha-689539 crio[658]: time="2024-12-05 20:41:24.966744520Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77e0f8ba49070d29bec8e5d622dd7ab13e23f105aaab0de1a5a92c01e16ed731,PodSandboxId:2a35c5864db38de4db2df9661fc907cd58533506ed2900ff55721ee9ef7e8073,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733431049357327660,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qjqvr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30f51118-fa9b-418f-a3a5-02a74107c7de,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05a6cfcd7e9ee9f891eb88ed5524553b60ff7eea407bf604b4bb647571d2d6bc,PodSandboxId:984c3b3f8fe032def0136810febfe8341f9285ab30c3ce2d6df35ec561964918,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910896086688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4ln9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f86a233b-c3f8-416b-ac76-f18dac2a1a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e8c78df0a6db76a9459104d39c6ef10926413179a5c4d311a9731258ab0f02,PodSandboxId:d7a154f9d8020a9378296ea0b16287d3fd54fb83d94bd93df469f8808d3670fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430910806734926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: e2a03e66-0718-48a3-9658-f70118ce6cae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6007ba446b77339e212a73381a8f38a179f608f925ff7a5ea1b5d82ae33932a,PodSandboxId:a344cd0e9a251c2b865c2838b5e161875e6d61340c124e5e6ddd88fdb8512dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910843663896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qhhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffff988-65
eb-4585-8ce4-de4df28c6b82,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0809642e9449ba90a816abcee2bd42a09b6b67ed76a448e2d048ccaf59218f61,PodSandboxId:faeac762b16891707c284f00eddfc16a831b7524637e5dbbc933c30cd8b2fe8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733430899010755558,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-62qw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0039aa-d5e2-49b9-adb4-ad93c96d22f0,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a16a5003f86348e862f9550da656af6282a73ec42e356d33277fd89aaf930df,PodSandboxId:6bc6d79587a62ca21788fe4de52bc6e9a4f3255de91b1f48365e7bc08408cac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430894
348055011,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tslx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d107dc4-2d8c-4e0d-aafc-5229161537df,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4431afbd69d99866cc27a419bb59862ef9f5cfdbb9bd9cd5bc4fc820eba9a01b,PodSandboxId:ae658c6069b4418ff55871310f01c6a0b5b0fe6e016403e3ff64bb02e0ac6a27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173343088582
7328958,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d8d33a00a36d98ae4f02477c2f0ef8f,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9238618cdfeaa8f5cda3dadc37d1e8157c6e781e2f421e149ea764a7138e42,PodSandboxId:110f95e5235dfc7dbce02b5aa1a8191d469ee5d3abffc5bfebf7a11f52ae34be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430883266472620,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3b0ba2fc46021faad87f06edada7a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2033f56968a9f0c2e15d272c7bf15a278a24d2f148a58cc7194c50096b822668,PodSandboxId:a6058ddd3ee58967eb32bd94a306e465b678afcb374ea3f93649506453556476,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430883263419187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9de31551106f5b54c143b52a0ba8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2211f15ae3cd3d69be3df3528c3d849f2b50615ce67935cdb97567a8a5fe19,PodSandboxId:f650305b876ca41a574dc76685713fd76500b7b3c5f17dbc66cdcd85cde99e34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430883237990702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91307b238b7c07f706a4534ff984ab88,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a056592a0f933a18207a4ce68694f30ffbd9c7502e8e1a5717c67850246dfb2,PodSandboxId:6d5d1a132984432f53f03c63a07dbd8083fa259a41160af40e8f0202f47d21ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430883178338000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf9467cd4c8887ece77367c75de1e85,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8185c58b-b61a-4f7b-9b96-e570d445b49a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:25 ha-689539 crio[658]: time="2024-12-05 20:41:25.010264237Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=35fa63c2-9585-4338-a782-a9b5c7b038b0 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:25 ha-689539 crio[658]: time="2024-12-05 20:41:25.010369178Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=35fa63c2-9585-4338-a782-a9b5c7b038b0 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:25 ha-689539 crio[658]: time="2024-12-05 20:41:25.011891748Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ca849849-e67d-40e4-8eb3-2df88f3a3489 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:25 ha-689539 crio[658]: time="2024-12-05 20:41:25.012423870Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431285012399155,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca849849-e67d-40e4-8eb3-2df88f3a3489 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:25 ha-689539 crio[658]: time="2024-12-05 20:41:25.012932386Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=01d8991d-3e58-4c90-8570-5530b317f28e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:25 ha-689539 crio[658]: time="2024-12-05 20:41:25.013002017Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=01d8991d-3e58-4c90-8570-5530b317f28e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:25 ha-689539 crio[658]: time="2024-12-05 20:41:25.013316184Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77e0f8ba49070d29bec8e5d622dd7ab13e23f105aaab0de1a5a92c01e16ed731,PodSandboxId:2a35c5864db38de4db2df9661fc907cd58533506ed2900ff55721ee9ef7e8073,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733431049357327660,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qjqvr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30f51118-fa9b-418f-a3a5-02a74107c7de,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05a6cfcd7e9ee9f891eb88ed5524553b60ff7eea407bf604b4bb647571d2d6bc,PodSandboxId:984c3b3f8fe032def0136810febfe8341f9285ab30c3ce2d6df35ec561964918,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910896086688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4ln9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f86a233b-c3f8-416b-ac76-f18dac2a1a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e8c78df0a6db76a9459104d39c6ef10926413179a5c4d311a9731258ab0f02,PodSandboxId:d7a154f9d8020a9378296ea0b16287d3fd54fb83d94bd93df469f8808d3670fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430910806734926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: e2a03e66-0718-48a3-9658-f70118ce6cae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6007ba446b77339e212a73381a8f38a179f608f925ff7a5ea1b5d82ae33932a,PodSandboxId:a344cd0e9a251c2b865c2838b5e161875e6d61340c124e5e6ddd88fdb8512dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910843663896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qhhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffff988-65
eb-4585-8ce4-de4df28c6b82,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0809642e9449ba90a816abcee2bd42a09b6b67ed76a448e2d048ccaf59218f61,PodSandboxId:faeac762b16891707c284f00eddfc16a831b7524637e5dbbc933c30cd8b2fe8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733430899010755558,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-62qw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0039aa-d5e2-49b9-adb4-ad93c96d22f0,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a16a5003f86348e862f9550da656af6282a73ec42e356d33277fd89aaf930df,PodSandboxId:6bc6d79587a62ca21788fe4de52bc6e9a4f3255de91b1f48365e7bc08408cac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430894
348055011,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tslx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d107dc4-2d8c-4e0d-aafc-5229161537df,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4431afbd69d99866cc27a419bb59862ef9f5cfdbb9bd9cd5bc4fc820eba9a01b,PodSandboxId:ae658c6069b4418ff55871310f01c6a0b5b0fe6e016403e3ff64bb02e0ac6a27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173343088582
7328958,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d8d33a00a36d98ae4f02477c2f0ef8f,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9238618cdfeaa8f5cda3dadc37d1e8157c6e781e2f421e149ea764a7138e42,PodSandboxId:110f95e5235dfc7dbce02b5aa1a8191d469ee5d3abffc5bfebf7a11f52ae34be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430883266472620,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3b0ba2fc46021faad87f06edada7a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2033f56968a9f0c2e15d272c7bf15a278a24d2f148a58cc7194c50096b822668,PodSandboxId:a6058ddd3ee58967eb32bd94a306e465b678afcb374ea3f93649506453556476,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430883263419187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9de31551106f5b54c143b52a0ba8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2211f15ae3cd3d69be3df3528c3d849f2b50615ce67935cdb97567a8a5fe19,PodSandboxId:f650305b876ca41a574dc76685713fd76500b7b3c5f17dbc66cdcd85cde99e34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430883237990702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91307b238b7c07f706a4534ff984ab88,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a056592a0f933a18207a4ce68694f30ffbd9c7502e8e1a5717c67850246dfb2,PodSandboxId:6d5d1a132984432f53f03c63a07dbd8083fa259a41160af40e8f0202f47d21ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430883178338000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf9467cd4c8887ece77367c75de1e85,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=01d8991d-3e58-4c90-8570-5530b317f28e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:25 ha-689539 crio[658]: time="2024-12-05 20:41:25.052594301Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0dcdec28-e035-4fc9-9e43-b7b47bcfbe67 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:25 ha-689539 crio[658]: time="2024-12-05 20:41:25.052675874Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0dcdec28-e035-4fc9-9e43-b7b47bcfbe67 name=/runtime.v1.RuntimeService/Version
	Dec 05 20:41:25 ha-689539 crio[658]: time="2024-12-05 20:41:25.063669791Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=09cc4a8c-e74a-478d-b4c0-357a45636fbb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:25 ha-689539 crio[658]: time="2024-12-05 20:41:25.065051210Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431285065004251,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=09cc4a8c-e74a-478d-b4c0-357a45636fbb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 20:41:25 ha-689539 crio[658]: time="2024-12-05 20:41:25.066507498Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4401bb81-0cc4-4ab0-b8ef-fff99ae82880 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:25 ha-689539 crio[658]: time="2024-12-05 20:41:25.066712686Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4401bb81-0cc4-4ab0-b8ef-fff99ae82880 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 20:41:25 ha-689539 crio[658]: time="2024-12-05 20:41:25.068070601Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:77e0f8ba49070d29bec8e5d622dd7ab13e23f105aaab0de1a5a92c01e16ed731,PodSandboxId:2a35c5864db38de4db2df9661fc907cd58533506ed2900ff55721ee9ef7e8073,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1733431049357327660,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-7dff88458-qjqvr,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 30f51118-fa9b-418f-a3a5-02a74107c7de,},Annotations:map[string]string{io.kubernetes.container.hash: 82159afa,io.kubernetes.container.restartCount: 0,io.kubernetes.c
ontainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:05a6cfcd7e9ee9f891eb88ed5524553b60ff7eea407bf604b4bb647571d2d6bc,PodSandboxId:984c3b3f8fe032def0136810febfe8341f9285ab30c3ce2d6df35ec561964918,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910896086688,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-4ln9l,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f86a233b-c3f8-416b-ac76-f18dac2a1a2c,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"
UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:74e8c78df0a6db76a9459104d39c6ef10926413179a5c4d311a9731258ab0f02,PodSandboxId:d7a154f9d8020a9378296ea0b16287d3fd54fb83d94bd93df469f8808d3670fa,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733430910806734926,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.
kubernetes.pod.uid: e2a03e66-0718-48a3-9658-f70118ce6cae,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6007ba446b77339e212a73381a8f38a179f608f925ff7a5ea1b5d82ae33932a,PodSandboxId:a344cd0e9a251c2b865c2838b5e161875e6d61340c124e5e6ddd88fdb8512dda,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733430910843663896,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6qhhf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ffff988-65
eb-4585-8ce4-de4df28c6b82,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0809642e9449ba90a816abcee2bd42a09b6b67ed76a448e2d048ccaf59218f61,PodSandboxId:faeac762b16891707c284f00eddfc16a831b7524637e5dbbc933c30cd8b2fe8c,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5,State:CO
NTAINER_RUNNING,CreatedAt:1733430899010755558,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-62qw6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9f0039aa-d5e2-49b9-adb4-ad93c96d22f0,},Annotations:map[string]string{io.kubernetes.container.hash: e48ee0f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0a16a5003f86348e862f9550da656af6282a73ec42e356d33277fd89aaf930df,PodSandboxId:6bc6d79587a62ca21788fe4de52bc6e9a4f3255de91b1f48365e7bc08408cac3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733430894
348055011,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9tslx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3d107dc4-2d8c-4e0d-aafc-5229161537df,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4431afbd69d99866cc27a419bb59862ef9f5cfdbb9bd9cd5bc4fc820eba9a01b,PodSandboxId:ae658c6069b4418ff55871310f01c6a0b5b0fe6e016403e3ff64bb02e0ac6a27,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:f1c87c24be687be21b68228b1b53b7792a3d82fb4d99bc68a5a984f582508a37,State:CONTAINER_RUNNING,CreatedAt:173343088582
7328958,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d8d33a00a36d98ae4f02477c2f0ef8f,},Annotations:map[string]string{io.kubernetes.container.hash: 69195791,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1e9238618cdfeaa8f5cda3dadc37d1e8157c6e781e2f421e149ea764a7138e42,PodSandboxId:110f95e5235dfc7dbce02b5aa1a8191d469ee5d3abffc5bfebf7a11f52ae34be,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733430883266472620,Labels:map[string]string{io.kuberne
tes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: de3b0ba2fc46021faad87f06edada7a2,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2033f56968a9f0c2e15d272c7bf15a278a24d2f148a58cc7194c50096b822668,PodSandboxId:a6058ddd3ee58967eb32bd94a306e465b678afcb374ea3f93649506453556476,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733430883263419187,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kub
ernetes.pod.name: kube-scheduler-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 35d9de31551106f5b54c143b52a0ba8b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd2211f15ae3cd3d69be3df3528c3d849f2b50615ce67935cdb97567a8a5fe19,PodSandboxId:f650305b876ca41a574dc76685713fd76500b7b3c5f17dbc66cdcd85cde99e34,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733430883237990702,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kub
e-apiserver-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91307b238b7c07f706a4534ff984ab88,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4a056592a0f933a18207a4ce68694f30ffbd9c7502e8e1a5717c67850246dfb2,PodSandboxId:6d5d1a132984432f53f03c63a07dbd8083fa259a41160af40e8f0202f47d21ad,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733430883178338000,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-c
ontroller-manager-ha-689539,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1cf9467cd4c8887ece77367c75de1e85,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4401bb81-0cc4-4ab0-b8ef-fff99ae82880 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	77e0f8ba49070       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   2a35c5864db38       busybox-7dff88458-qjqvr
	05a6cfcd7e9ee       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   984c3b3f8fe03       coredns-7c65d6cfc9-4ln9l
	c6007ba446b77       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      6 minutes ago       Running             coredns                   0                   a344cd0e9a251       coredns-7c65d6cfc9-6qhhf
	74e8c78df0a6d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   d7a154f9d8020       storage-provisioner
	0809642e9449b       docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16    6 minutes ago       Running             kindnet-cni               0                   faeac762b1689       kindnet-62qw6
	0a16a5003f863       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      6 minutes ago       Running             kube-proxy                0                   6bc6d79587a62       kube-proxy-9tslx
	4431afbd69d99       ghcr.io/kube-vip/kube-vip@sha256:32829cc6f8630eba4e1b5e4df5bcbc34c767e70703d26e64a0f7317951c7b517     6 minutes ago       Running             kube-vip                  0                   ae658c6069b44       kube-vip-ha-689539
	1e9238618cdfe       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      6 minutes ago       Running             etcd                      0                   110f95e5235df       etcd-ha-689539
	2033f56968a9f       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      6 minutes ago       Running             kube-scheduler            0                   a6058ddd3ee58       kube-scheduler-ha-689539
	cd2211f15ae3c       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      6 minutes ago       Running             kube-apiserver            0                   f650305b876ca       kube-apiserver-ha-689539
	4a056592a0f93       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      6 minutes ago       Running             kube-controller-manager   0                   6d5d1a1329844       kube-controller-manager-ha-689539
	
	
	==> coredns [05a6cfcd7e9ee9f891eb88ed5524553b60ff7eea407bf604b4bb647571d2d6bc] <==
	[INFO] 10.244.0.4:44188 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002182194s
	[INFO] 10.244.1.2:41292 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000169551s
	[INFO] 10.244.1.2:38453 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003584311s
	[INFO] 10.244.1.2:36084 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000201777s
	[INFO] 10.244.1.2:49408 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000133503s
	[INFO] 10.244.2.2:51533 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000117849s
	[INFO] 10.244.2.2:34176 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00018539s
	[INFO] 10.244.2.2:43670 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000178861s
	[INFO] 10.244.2.2:56974 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000148401s
	[INFO] 10.244.0.4:48841 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000170335s
	[INFO] 10.244.0.4:43111 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001409238s
	[INFO] 10.244.0.4:36893 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000093314s
	[INFO] 10.244.0.4:50555 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000104324s
	[INFO] 10.244.1.2:43568 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116735s
	[INFO] 10.244.1.2:44480 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000066571s
	[INFO] 10.244.1.2:60247 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000058674s
	[INFO] 10.244.2.2:49472 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000121084s
	[INFO] 10.244.0.4:57046 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000160079s
	[INFO] 10.244.0.4:44460 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119738s
	[INFO] 10.244.1.2:37203 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000178276s
	[INFO] 10.244.1.2:59196 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000213381s
	[INFO] 10.244.1.2:41969 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000159543s
	[INFO] 10.244.1.2:60294 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000120046s
	[INFO] 10.244.2.2:42519 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177647s
	[INFO] 10.244.0.4:60229 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000056377s
	
	
	==> coredns [c6007ba446b77339e212a73381a8f38a179f608f925ff7a5ea1b5d82ae33932a] <==
	[INFO] 10.244.0.4:55355 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000054352s
	[INFO] 10.244.1.2:33933 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161165s
	[INFO] 10.244.1.2:37174 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003884442s
	[INFO] 10.244.1.2:41634 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152882s
	[INFO] 10.244.1.2:60548 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000176047s
	[INFO] 10.244.2.2:32947 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000146675s
	[INFO] 10.244.2.2:60319 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001949836s
	[INFO] 10.244.2.2:48727 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001337037s
	[INFO] 10.244.2.2:56733 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149582s
	[INFO] 10.244.0.4:58646 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001891441s
	[INFO] 10.244.0.4:55352 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000164932s
	[INFO] 10.244.0.4:54745 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000100872s
	[INFO] 10.244.0.4:51217 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000122097s
	[INFO] 10.244.1.2:52959 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000137256s
	[INFO] 10.244.2.2:52934 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000147111s
	[INFO] 10.244.2.2:34173 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000119001s
	[INFO] 10.244.2.2:41909 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000126707s
	[INFO] 10.244.0.4:46512 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120087s
	[INFO] 10.244.0.4:35647 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000218624s
	[INFO] 10.244.2.2:51797 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000211308s
	[INFO] 10.244.2.2:38193 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000207361s
	[INFO] 10.244.2.2:55117 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000135379s
	[INFO] 10.244.0.4:46265 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000114618s
	[INFO] 10.244.0.4:43082 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000145713s
	[INFO] 10.244.0.4:59763 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000071668s
	
	
	==> describe nodes <==
	Name:               ha-689539
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-689539
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=ha-689539
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T20_34_50_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:34:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-689539
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:41:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:37:53 +0000   Thu, 05 Dec 2024 20:34:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:37:53 +0000   Thu, 05 Dec 2024 20:34:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:37:53 +0000   Thu, 05 Dec 2024 20:34:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:37:53 +0000   Thu, 05 Dec 2024 20:35:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    ha-689539
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 3fcfe17cf29247c89ef6261408cdec57
	  System UUID:                3fcfe17c-f292-47c8-9ef6-261408cdec57
	  Boot ID:                    0967c504-1cf1-4d64-84b3-abc762e82552
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-qjqvr              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 coredns-7c65d6cfc9-4ln9l             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m32s
	  kube-system                 coredns-7c65d6cfc9-6qhhf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m32s
	  kube-system                 etcd-ha-689539                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m36s
	  kube-system                 kindnet-62qw6                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m32s
	  kube-system                 kube-apiserver-ha-689539             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 kube-controller-manager-ha-689539    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 kube-proxy-9tslx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 kube-scheduler-ha-689539             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 kube-vip-ha-689539                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 6m30s  kube-proxy       
	  Normal  Starting                 6m36s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m36s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m36s  kubelet          Node ha-689539 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m36s  kubelet          Node ha-689539 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m36s  kubelet          Node ha-689539 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m32s  node-controller  Node ha-689539 event: Registered Node ha-689539 in Controller
	  Normal  NodeReady                6m15s  kubelet          Node ha-689539 status is now: NodeReady
	  Normal  RegisteredNode           5m32s  node-controller  Node ha-689539 event: Registered Node ha-689539 in Controller
	  Normal  RegisteredNode           4m19s  node-controller  Node ha-689539 event: Registered Node ha-689539 in Controller
	
	
	Name:               ha-689539-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-689539-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=ha-689539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T20_35_47_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:35:45 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-689539-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:38:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 05 Dec 2024 20:37:46 +0000   Thu, 05 Dec 2024 20:39:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 05 Dec 2024 20:37:46 +0000   Thu, 05 Dec 2024 20:39:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 05 Dec 2024 20:37:46 +0000   Thu, 05 Dec 2024 20:39:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 05 Dec 2024 20:37:46 +0000   Thu, 05 Dec 2024 20:39:21 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.224
	  Hostname:    ha-689539-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 2527423e09b7455fb49f08b5007d8aaf
	  System UUID:                2527423e-09b7-455f-b49f-08b5007d8aaf
	  Boot ID:                    693fb661-afc0-4a4b-8d66-7434b8ba3be0
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-7ss94                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 etcd-ha-689539-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m39s
	  kube-system                 kindnet-b7bf2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m40s
	  kube-system                 kube-apiserver-ha-689539-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m39s
	  kube-system                 kube-controller-manager-ha-689539-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m31s
	  kube-system                 kube-proxy-x2grl                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-scheduler-ha-689539-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m34s
	  kube-system                 kube-vip-ha-689539-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m36s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  5m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  5m40s (x8 over 5m41s)  kubelet          Node ha-689539-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m40s (x8 over 5m41s)  kubelet          Node ha-689539-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m40s (x7 over 5m41s)  kubelet          Node ha-689539-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5m37s                  node-controller  Node ha-689539-m02 event: Registered Node ha-689539-m02 in Controller
	  Normal  RegisteredNode           5m32s                  node-controller  Node ha-689539-m02 event: Registered Node ha-689539-m02 in Controller
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-689539-m02 event: Registered Node ha-689539-m02 in Controller
	  Normal  NodeNotReady             2m4s                   node-controller  Node ha-689539-m02 status is now: NodeNotReady
	
	
	Name:               ha-689539-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-689539-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=ha-689539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T20_37_02_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:36:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-689539-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:41:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:37:59 +0000   Thu, 05 Dec 2024 20:36:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:37:59 +0000   Thu, 05 Dec 2024 20:36:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:37:59 +0000   Thu, 05 Dec 2024 20:36:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:37:59 +0000   Thu, 05 Dec 2024 20:37:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.133
	  Hostname:    ha-689539-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 23c133dbe3f244679269ca86c6b2111d
	  System UUID:                23c133db-e3f2-4467-9269-ca86c6b2111d
	  Boot ID:                    72ade07d-4013-4096-9862-81be930c4b6f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7dff88458-ns455                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 etcd-ha-689539-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m25s
	  kube-system                 kindnet-8kgs2                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m27s
	  kube-system                 kube-apiserver-ha-689539-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m25s
	  kube-system                 kube-controller-manager-ha-689539-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m17s
	  kube-system                 kube-proxy-dktwc                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-scheduler-ha-689539-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m24s
	  kube-system                 kube-vip-ha-689539-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m22s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m27s (x8 over 4m27s)  kubelet          Node ha-689539-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m27s (x8 over 4m27s)  kubelet          Node ha-689539-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m27s (x7 over 4m27s)  kubelet          Node ha-689539-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m22s                  node-controller  Node ha-689539-m03 event: Registered Node ha-689539-m03 in Controller
	  Normal  RegisteredNode           4m22s                  node-controller  Node ha-689539-m03 event: Registered Node ha-689539-m03 in Controller
	  Normal  RegisteredNode           4m19s                  node-controller  Node ha-689539-m03 event: Registered Node ha-689539-m03 in Controller
	
	
	Name:               ha-689539-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-689539-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=ha-689539
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_12_05T20_38_06_0700
	                    minikube.k8s.io/version=v1.34.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 20:38:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-689539-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 20:41:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 20:38:36 +0000   Thu, 05 Dec 2024 20:38:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 20:38:36 +0000   Thu, 05 Dec 2024 20:38:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 20:38:36 +0000   Thu, 05 Dec 2024 20:38:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 20:38:36 +0000   Thu, 05 Dec 2024 20:38:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.199
	  Hostname:    ha-689539-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 d82a84b2609b470c8ddc16781015ee6d
	  System UUID:                d82a84b2-609b-470c-8ddc-16781015ee6d
	  Boot ID:                    c6aff0b9-eb25-4035-add5-dcc47c5c8348
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-9xbpp       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m20s
	  kube-system                 kube-proxy-kpbrd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m15s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  3m20s (x2 over 3m21s)  kubelet          Node ha-689539-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m20s (x2 over 3m21s)  kubelet          Node ha-689539-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m20s (x2 over 3m21s)  kubelet          Node ha-689539-m04 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m19s                  node-controller  Node ha-689539-m04 event: Registered Node ha-689539-m04 in Controller
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-689539-m04 event: Registered Node ha-689539-m04 in Controller
	  Normal  RegisteredNode           3m17s                  node-controller  Node ha-689539-m04 event: Registered Node ha-689539-m04 in Controller
	  Normal  NodeReady                3m                     kubelet          Node ha-689539-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec 5 20:34] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049641] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039465] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.885977] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.016771] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +4.614002] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.712547] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.063478] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.058841] systemd-fstab-generator[594]: Ignoring "noauto" option for root device
	[  +0.182620] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.134116] systemd-fstab-generator[620]: Ignoring "noauto" option for root device
	[  +0.286058] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +3.983127] systemd-fstab-generator[741]: Ignoring "noauto" option for root device
	[  +4.083666] systemd-fstab-generator[871]: Ignoring "noauto" option for root device
	[  +0.057216] kauditd_printk_skb: 158 callbacks suppressed
	[  +7.189676] systemd-fstab-generator[1290]: Ignoring "noauto" option for root device
	[  +0.088639] kauditd_printk_skb: 79 callbacks suppressed
	[  +5.119203] kauditd_printk_skb: 36 callbacks suppressed
	[  +5.279281] kauditd_printk_skb: 19 callbacks suppressed
	[Dec 5 20:35] kauditd_printk_skb: 24 callbacks suppressed
	
	
	==> etcd [1e9238618cdfeaa8f5cda3dadc37d1e8157c6e781e2f421e149ea764a7138e42] <==
	{"level":"warn","ts":"2024-12-05T20:41:25.191353Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:25.354013Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:25.365904Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:25.371708Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:25.378515Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:25.382769Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:25.386415Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:25.391335Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:25.392218Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:25.397756Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:25.404190Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:25.410204Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:25.413975Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:25.421813Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:25.427615Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:25.433996Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:25.437998Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:25.441840Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:25.445997Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:25.447947Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:25.451985Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:25.457649Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-12-05T20:41:25.463159Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"ad845b9614ec1023","rtt":"1.115526ms","error":"dial tcp 192.168.39.224:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-12-05T20:41:25.463451Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"ad845b9614ec1023","rtt":"9.225213ms","error":"dial tcp 192.168.39.224:2380: connect: no route to host"}
	{"level":"warn","ts":"2024-12-05T20:41:25.491410Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"9bf1b68912964415","from":"9bf1b68912964415","remote-peer-id":"ad845b9614ec1023","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 20:41:25 up 7 min,  0 users,  load average: 0.27, 0.25, 0.12
	Linux ha-689539 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0809642e9449ba90a816abcee2bd42a09b6b67ed76a448e2d048ccaf59218f61] <==
	I1205 20:40:49.972147       1 main.go:324] Node ha-689539-m02 has CIDR [10.244.1.0/24] 
	I1205 20:40:59.972467       1 main.go:297] Handling node with IPs: map[192.168.39.220:{}]
	I1205 20:40:59.972574       1 main.go:301] handling current node
	I1205 20:40:59.972604       1 main.go:297] Handling node with IPs: map[192.168.39.224:{}]
	I1205 20:40:59.972621       1 main.go:324] Node ha-689539-m02 has CIDR [10.244.1.0/24] 
	I1205 20:40:59.972884       1 main.go:297] Handling node with IPs: map[192.168.39.133:{}]
	I1205 20:40:59.972920       1 main.go:324] Node ha-689539-m03 has CIDR [10.244.2.0/24] 
	I1205 20:40:59.973088       1 main.go:297] Handling node with IPs: map[192.168.39.199:{}]
	I1205 20:40:59.973124       1 main.go:324] Node ha-689539-m04 has CIDR [10.244.3.0/24] 
	I1205 20:41:09.973378       1 main.go:297] Handling node with IPs: map[192.168.39.220:{}]
	I1205 20:41:09.973428       1 main.go:301] handling current node
	I1205 20:41:09.973445       1 main.go:297] Handling node with IPs: map[192.168.39.224:{}]
	I1205 20:41:09.973450       1 main.go:324] Node ha-689539-m02 has CIDR [10.244.1.0/24] 
	I1205 20:41:09.973693       1 main.go:297] Handling node with IPs: map[192.168.39.133:{}]
	I1205 20:41:09.973706       1 main.go:324] Node ha-689539-m03 has CIDR [10.244.2.0/24] 
	I1205 20:41:09.973839       1 main.go:297] Handling node with IPs: map[192.168.39.199:{}]
	I1205 20:41:09.973846       1 main.go:324] Node ha-689539-m04 has CIDR [10.244.3.0/24] 
	I1205 20:41:19.979332       1 main.go:297] Handling node with IPs: map[192.168.39.220:{}]
	I1205 20:41:19.979398       1 main.go:301] handling current node
	I1205 20:41:19.979451       1 main.go:297] Handling node with IPs: map[192.168.39.224:{}]
	I1205 20:41:19.979461       1 main.go:324] Node ha-689539-m02 has CIDR [10.244.1.0/24] 
	I1205 20:41:19.979722       1 main.go:297] Handling node with IPs: map[192.168.39.133:{}]
	I1205 20:41:19.979750       1 main.go:324] Node ha-689539-m03 has CIDR [10.244.2.0/24] 
	I1205 20:41:19.979940       1 main.go:297] Handling node with IPs: map[192.168.39.199:{}]
	I1205 20:41:19.979973       1 main.go:324] Node ha-689539-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [cd2211f15ae3cd3d69be3df3528c3d849f2b50615ce67935cdb97567a8a5fe19] <==
	W1205 20:34:48.005731       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.220]
	I1205 20:34:48.006729       1 controller.go:615] quota admission added evaluator for: endpoints
	I1205 20:34:48.014987       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1205 20:34:48.223693       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1205 20:34:49.561495       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1205 20:34:49.580677       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1205 20:34:49.727059       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1205 20:34:53.679365       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1205 20:34:53.876376       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E1205 20:37:30.985923       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44596: use of closed network connection
	E1205 20:37:31.179622       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44600: use of closed network connection
	E1205 20:37:31.382888       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44610: use of closed network connection
	E1205 20:37:31.582068       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44622: use of closed network connection
	E1205 20:37:31.774198       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44652: use of closed network connection
	E1205 20:37:31.958030       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44666: use of closed network connection
	E1205 20:37:32.140428       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44686: use of closed network connection
	E1205 20:37:32.322775       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44704: use of closed network connection
	E1205 20:37:32.515908       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44718: use of closed network connection
	E1205 20:37:32.837161       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44756: use of closed network connection
	E1205 20:37:33.022723       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44776: use of closed network connection
	E1205 20:37:33.209590       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44790: use of closed network connection
	E1205 20:37:33.392904       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44808: use of closed network connection
	E1205 20:37:33.581589       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44830: use of closed network connection
	E1205 20:37:33.765728       1 conn.go:339] Error on socket receive: read tcp 192.168.39.254:8443->192.168.39.1:44852: use of closed network connection
	W1205 20:38:58.016885       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.39.133 192.168.39.220]
	
	
	==> kube-controller-manager [4a056592a0f933a18207a4ce68694f30ffbd9c7502e8e1a5717c67850246dfb2] <==
	I1205 20:38:05.497632       1 range_allocator.go:422] "Set node PodCIDR" logger="node-ipam-controller" node="ha-689539-m04" podCIDRs=["10.244.3.0/24"]
	I1205 20:38:05.497693       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:05.497786       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:05.524265       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:06.322551       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:06.681995       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:06.924972       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:08.069639       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:08.145190       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:08.229546       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:08.230026       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-689539-m04"
	I1205 20:38:08.272217       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:15.550194       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:25.133022       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:25.133713       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-689539-m04"
	I1205 20:38:25.164347       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:26.915918       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:38:36.091312       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m04"
	I1205 20:39:21.941441       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m02"
	I1205 20:39:21.941592       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-689539-m04"
	I1205 20:39:21.962901       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m02"
	I1205 20:39:21.988464       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="15.390336ms"
	I1205 20:39:21.988772       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-7dff88458" duration="153.307µs"
	I1205 20:39:23.353917       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m02"
	I1205 20:39:27.137479       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="ha-689539-m02"
	
	
	==> kube-proxy [0a16a5003f86348e862f9550da656af6282a73ec42e356d33277fd89aaf930df] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 20:34:54.543864       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 20:34:54.553756       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.220"]
	E1205 20:34:54.553891       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 20:34:54.586394       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 20:34:54.586517       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 20:34:54.586562       1 server_linux.go:169] "Using iptables Proxier"
	I1205 20:34:54.589547       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 20:34:54.589875       1 server.go:483] "Version info" version="v1.31.2"
	I1205 20:34:54.589968       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 20:34:54.592476       1 config.go:199] "Starting service config controller"
	I1205 20:34:54.594797       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 20:34:54.592516       1 config.go:105] "Starting endpoint slice config controller"
	I1205 20:34:54.594853       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 20:34:54.600348       1 config.go:328] "Starting node config controller"
	I1205 20:34:54.601332       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 20:34:54.695425       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 20:34:54.695636       1 shared_informer.go:320] Caches are synced for service config
	I1205 20:34:54.701955       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2033f56968a9f0c2e15d272c7bf15a278a24d2f148a58cc7194c50096b822668] <==
	E1205 20:34:47.293214       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:34:47.324868       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 20:34:47.324938       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 20:34:47.340705       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 20:34:47.340848       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 20:34:47.360711       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 20:34:47.360829       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1205 20:34:47.402644       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 20:34:47.402751       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:34:47.409130       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 20:34:47.409228       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 20:34:47.580992       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 20:34:47.581091       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1205 20:34:49.941328       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E1205 20:37:26.487849       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ns455\": pod busybox-7dff88458-ns455 is already assigned to node \"ha-689539-m03\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-ns455" node="ha-689539-m03"
	E1205 20:37:26.487974       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod c47c5104-83dc-428d-8ded-5175eff6643c(default/busybox-7dff88458-ns455) wasn't assumed so cannot be forgotten" pod="default/busybox-7dff88458-ns455"
	E1205 20:37:26.488011       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-ns455\": pod busybox-7dff88458-ns455 is already assigned to node \"ha-689539-m03\"" pod="default/busybox-7dff88458-ns455"
	I1205 20:37:26.488039       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-ns455" node="ha-689539-m03"
	E1205 20:37:26.529460       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-qjqvr\": pod busybox-7dff88458-qjqvr is already assigned to node \"ha-689539\"" plugin="DefaultBinder" pod="default/busybox-7dff88458-qjqvr" node="ha-689539"
	E1205 20:37:26.531731       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7dff88458-qjqvr\": pod busybox-7dff88458-qjqvr is already assigned to node \"ha-689539\"" pod="default/busybox-7dff88458-qjqvr"
	I1205 20:37:26.532951       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7dff88458-qjqvr" node="ha-689539"
	E1205 20:38:05.558984       1 framework.go:1305] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-mqzp5\": pod kindnet-mqzp5 is already assigned to node \"ha-689539-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-mqzp5" node="ha-689539-m04"
	E1205 20:38:05.565872       1 schedule_one.go:348] "scheduler cache ForgetPod failed" err="pod 83d09bad-5a47-45ec-b467-0231a40ad9f0(kube-system/kindnet-mqzp5) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-mqzp5"
	E1205 20:38:05.566103       1 schedule_one.go:1057] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-mqzp5\": pod kindnet-mqzp5 is already assigned to node \"ha-689539-m04\"" pod="kube-system/kindnet-mqzp5"
	I1205 20:38:05.566218       1 schedule_one.go:1070] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-mqzp5" node="ha-689539-m04"
	
	
	==> kubelet <==
	Dec 05 20:39:49 ha-689539 kubelet[1297]: E1205 20:39:49.801882    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431189801654914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:39:49 ha-689539 kubelet[1297]: E1205 20:39:49.801906    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431189801654914,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:39:59 ha-689539 kubelet[1297]: E1205 20:39:59.803793    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431199803419655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:39:59 ha-689539 kubelet[1297]: E1205 20:39:59.804270    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431199803419655,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:09 ha-689539 kubelet[1297]: E1205 20:40:09.807394    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431209806841990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:09 ha-689539 kubelet[1297]: E1205 20:40:09.807450    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431209806841990,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:19 ha-689539 kubelet[1297]: E1205 20:40:19.811009    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431219810315680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:19 ha-689539 kubelet[1297]: E1205 20:40:19.811103    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431219810315680,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:29 ha-689539 kubelet[1297]: E1205 20:40:29.812356    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431229811933429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:29 ha-689539 kubelet[1297]: E1205 20:40:29.812422    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431229811933429,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:39 ha-689539 kubelet[1297]: E1205 20:40:39.814301    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431239813835089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:39 ha-689539 kubelet[1297]: E1205 20:40:39.814613    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431239813835089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:49 ha-689539 kubelet[1297]: E1205 20:40:49.759293    1297 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 20:40:49 ha-689539 kubelet[1297]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 20:40:49 ha-689539 kubelet[1297]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 20:40:49 ha-689539 kubelet[1297]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 20:40:49 ha-689539 kubelet[1297]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 20:40:49 ha-689539 kubelet[1297]: E1205 20:40:49.816382    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431249816019108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:49 ha-689539 kubelet[1297]: E1205 20:40:49.816591    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431249816019108,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:59 ha-689539 kubelet[1297]: E1205 20:40:59.821073    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431259819028062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:40:59 ha-689539 kubelet[1297]: E1205 20:40:59.821410    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431259819028062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:41:09 ha-689539 kubelet[1297]: E1205 20:41:09.823458    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431269823063482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:41:09 ha-689539 kubelet[1297]: E1205 20:41:09.823549    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431269823063482,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:41:19 ha-689539 kubelet[1297]: E1205 20:41:19.829467    1297 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431279828726035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 20:41:19 ha-689539 kubelet[1297]: E1205 20:41:19.829492    1297 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733431279828726035,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:156106,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-689539 -n ha-689539
helpers_test.go:261: (dbg) Run:  kubectl --context ha-689539 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (6.22s)
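For readers triaging the failure above: the post-mortem "Conditions" table shows ha-689539-m02 flipping to `Unknown`/`NodeNotReady` because its kubelet stopped posting status. The following is a minimal sketch, not part of the test suite, of pulling the same Ready-condition data programmatically with client-go; the program name and kubeconfig path are assumptions, and the real test does this via `kubectl` instead.

```go
// Hypothetical helper: list nodes and print each node's Ready condition,
// mirroring the "Conditions" tables captured in the post-mortem logs above.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (assumed to point at the
	// minikube profile's context, e.g. ha-689539).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeReady {
				// Status "Unknown" with reason NodeStatusUnknown is what the
				// report shows once a kubelet stops posting node status.
				fmt.Printf("%s Ready=%s reason=%s\n", n.Name, c.Status, c.Reason)
			}
		}
	}
}
```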

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartClusterKeepsNodes (382.82s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-689539 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-689539 -v=7 --alsologtostderr
E1205 20:41:49.076227  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:43:16.319973  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-689539 -v=7 --alsologtostderr: exit status 82 (2m1.930334082s)

                                                
                                                
-- stdout --
	* Stopping node "ha-689539-m04"  ...
	* Stopping node "ha-689539-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:41:26.590826  316092 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:41:26.590958  316092 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:41:26.590968  316092 out.go:358] Setting ErrFile to fd 2...
	I1205 20:41:26.590972  316092 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:41:26.591194  316092 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 20:41:26.591499  316092 out.go:352] Setting JSON to false
	I1205 20:41:26.591612  316092 mustload.go:65] Loading cluster: ha-689539
	I1205 20:41:26.592184  316092 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:41:26.592318  316092 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:41:26.592569  316092 mustload.go:65] Loading cluster: ha-689539
	I1205 20:41:26.592882  316092 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:41:26.592964  316092 stop.go:39] StopHost: ha-689539-m04
	I1205 20:41:26.593604  316092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:41:26.593651  316092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:41:26.609680  316092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39317
	I1205 20:41:26.610368  316092 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:41:26.610975  316092 main.go:141] libmachine: Using API Version  1
	I1205 20:41:26.610998  316092 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:41:26.611341  316092 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:41:26.613844  316092 out.go:177] * Stopping node "ha-689539-m04"  ...
	I1205 20:41:26.615331  316092 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1205 20:41:26.615366  316092 main.go:141] libmachine: (ha-689539-m04) Calling .DriverName
	I1205 20:41:26.615623  316092 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1205 20:41:26.615667  316092 main.go:141] libmachine: (ha-689539-m04) Calling .GetSSHHostname
	I1205 20:41:26.618856  316092 main.go:141] libmachine: (ha-689539-m04) DBG | domain ha-689539-m04 has defined MAC address 52:54:00:f0:2c:73 in network mk-ha-689539
	I1205 20:41:26.619415  316092 main.go:141] libmachine: (ha-689539-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:2c:73", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:37:48 +0000 UTC Type:0 Mac:52:54:00:f0:2c:73 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:ha-689539-m04 Clientid:01:52:54:00:f0:2c:73}
	I1205 20:41:26.619459  316092 main.go:141] libmachine: (ha-689539-m04) DBG | domain ha-689539-m04 has defined IP address 192.168.39.199 and MAC address 52:54:00:f0:2c:73 in network mk-ha-689539
	I1205 20:41:26.619638  316092 main.go:141] libmachine: (ha-689539-m04) Calling .GetSSHPort
	I1205 20:41:26.619838  316092 main.go:141] libmachine: (ha-689539-m04) Calling .GetSSHKeyPath
	I1205 20:41:26.619979  316092 main.go:141] libmachine: (ha-689539-m04) Calling .GetSSHUsername
	I1205 20:41:26.620131  316092 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m04/id_rsa Username:docker}
	I1205 20:41:26.705231  316092 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1205 20:41:26.758247  316092 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1205 20:41:26.811510  316092 main.go:141] libmachine: Stopping "ha-689539-m04"...
	I1205 20:41:26.811537  316092 main.go:141] libmachine: (ha-689539-m04) Calling .GetState
	I1205 20:41:26.813322  316092 main.go:141] libmachine: (ha-689539-m04) Calling .Stop
	I1205 20:41:26.816959  316092 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 0/120
	I1205 20:41:28.005895  316092 main.go:141] libmachine: (ha-689539-m04) Calling .GetState
	I1205 20:41:28.007646  316092 main.go:141] libmachine: Machine "ha-689539-m04" was stopped.
	I1205 20:41:28.007684  316092 stop.go:75] duration metric: took 1.392342104s to stop
	I1205 20:41:28.007722  316092 stop.go:39] StopHost: ha-689539-m03
	I1205 20:41:28.008032  316092 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:41:28.008075  316092 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:41:28.024827  316092 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36119
	I1205 20:41:28.025335  316092 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:41:28.025866  316092 main.go:141] libmachine: Using API Version  1
	I1205 20:41:28.025888  316092 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:41:28.026317  316092 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:41:28.029596  316092 out.go:177] * Stopping node "ha-689539-m03"  ...
	I1205 20:41:28.030972  316092 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1205 20:41:28.031008  316092 main.go:141] libmachine: (ha-689539-m03) Calling .DriverName
	I1205 20:41:28.031241  316092 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1205 20:41:28.031273  316092 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHHostname
	I1205 20:41:28.034348  316092 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:41:28.034872  316092 main.go:141] libmachine: (ha-689539-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:1e:d2", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:36:24 +0000 UTC Type:0 Mac:52:54:00:39:1e:d2 Iaid: IPaddr:192.168.39.133 Prefix:24 Hostname:ha-689539-m03 Clientid:01:52:54:00:39:1e:d2}
	I1205 20:41:28.034900  316092 main.go:141] libmachine: (ha-689539-m03) DBG | domain ha-689539-m03 has defined IP address 192.168.39.133 and MAC address 52:54:00:39:1e:d2 in network mk-ha-689539
	I1205 20:41:28.035085  316092 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHPort
	I1205 20:41:28.035293  316092 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHKeyPath
	I1205 20:41:28.035446  316092 main.go:141] libmachine: (ha-689539-m03) Calling .GetSSHUsername
	I1205 20:41:28.035626  316092 sshutil.go:53] new ssh client: &{IP:192.168.39.133 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m03/id_rsa Username:docker}
	I1205 20:41:28.123262  316092 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1205 20:41:28.179038  316092 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1205 20:41:28.233773  316092 main.go:141] libmachine: Stopping "ha-689539-m03"...
	I1205 20:41:28.233799  316092 main.go:141] libmachine: (ha-689539-m03) Calling .GetState
	I1205 20:41:28.235694  316092 main.go:141] libmachine: (ha-689539-m03) Calling .Stop
	I1205 20:41:28.239348  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 0/120
	I1205 20:41:29.240877  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 1/120
	I1205 20:41:30.242510  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 2/120
	I1205 20:41:31.244142  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 3/120
	I1205 20:41:32.245867  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 4/120
	I1205 20:41:33.248000  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 5/120
	I1205 20:41:34.249821  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 6/120
	I1205 20:41:35.251397  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 7/120
	I1205 20:41:36.252847  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 8/120
	I1205 20:41:37.254654  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 9/120
	I1205 20:41:38.257023  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 10/120
	I1205 20:41:39.258633  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 11/120
	I1205 20:41:40.260570  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 12/120
	I1205 20:41:41.262168  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 13/120
	I1205 20:41:42.264012  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 14/120
	I1205 20:41:43.266551  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 15/120
	I1205 20:41:44.268123  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 16/120
	I1205 20:41:45.270010  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 17/120
	I1205 20:41:46.271740  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 18/120
	I1205 20:41:47.273704  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 19/120
	I1205 20:41:48.275993  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 20/120
	I1205 20:41:49.277862  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 21/120
	I1205 20:41:50.279434  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 22/120
	I1205 20:41:51.281481  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 23/120
	I1205 20:41:52.283242  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 24/120
	I1205 20:41:53.285153  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 25/120
	I1205 20:41:54.286823  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 26/120
	I1205 20:41:55.288631  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 27/120
	I1205 20:41:56.290528  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 28/120
	I1205 20:41:57.292317  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 29/120
	I1205 20:41:58.294507  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 30/120
	I1205 20:41:59.296876  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 31/120
	I1205 20:42:00.298803  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 32/120
	I1205 20:42:01.300725  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 33/120
	I1205 20:42:02.302659  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 34/120
	I1205 20:42:03.304814  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 35/120
	I1205 20:42:04.306406  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 36/120
	I1205 20:42:05.308017  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 37/120
	I1205 20:42:06.309612  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 38/120
	I1205 20:42:07.311102  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 39/120
	I1205 20:42:08.313233  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 40/120
	I1205 20:42:09.314861  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 41/120
	I1205 20:42:10.316211  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 42/120
	I1205 20:42:11.318256  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 43/120
	I1205 20:42:12.319823  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 44/120
	I1205 20:42:13.321800  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 45/120
	I1205 20:42:14.323419  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 46/120
	I1205 20:42:15.324915  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 47/120
	I1205 20:42:16.326465  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 48/120
	I1205 20:42:17.327994  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 49/120
	I1205 20:42:18.330225  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 50/120
	I1205 20:42:19.331885  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 51/120
	I1205 20:42:20.333618  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 52/120
	I1205 20:42:21.335228  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 53/120
	I1205 20:42:22.337149  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 54/120
	I1205 20:42:23.339320  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 55/120
	I1205 20:42:24.340954  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 56/120
	I1205 20:42:25.342571  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 57/120
	I1205 20:42:26.344139  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 58/120
	I1205 20:42:27.345651  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 59/120
	I1205 20:42:28.348074  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 60/120
	I1205 20:42:29.349439  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 61/120
	I1205 20:42:30.351398  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 62/120
	I1205 20:42:31.353105  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 63/120
	I1205 20:42:32.355064  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 64/120
	I1205 20:42:33.356997  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 65/120
	I1205 20:42:34.358524  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 66/120
	I1205 20:42:35.360781  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 67/120
	I1205 20:42:36.362330  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 68/120
	I1205 20:42:37.364129  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 69/120
	I1205 20:42:38.366621  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 70/120
	I1205 20:42:39.368341  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 71/120
	I1205 20:42:40.369892  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 72/120
	I1205 20:42:41.371289  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 73/120
	I1205 20:42:42.373153  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 74/120
	I1205 20:42:43.375168  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 75/120
	I1205 20:42:44.376765  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 76/120
	I1205 20:42:45.378280  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 77/120
	I1205 20:42:46.380321  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 78/120
	I1205 20:42:47.381739  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 79/120
	I1205 20:42:48.383804  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 80/120
	I1205 20:42:49.385611  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 81/120
	I1205 20:42:50.387170  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 82/120
	I1205 20:42:51.388809  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 83/120
	I1205 20:42:52.390350  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 84/120
	I1205 20:42:53.392188  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 85/120
	I1205 20:42:54.393871  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 86/120
	I1205 20:42:55.395523  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 87/120
	I1205 20:42:56.396940  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 88/120
	I1205 20:42:57.398652  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 89/120
	I1205 20:42:58.400979  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 90/120
	I1205 20:42:59.402704  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 91/120
	I1205 20:43:00.404489  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 92/120
	I1205 20:43:01.406983  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 93/120
	I1205 20:43:02.408645  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 94/120
	I1205 20:43:03.410542  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 95/120
	I1205 20:43:04.412928  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 96/120
	I1205 20:43:05.414436  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 97/120
	I1205 20:43:06.416448  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 98/120
	I1205 20:43:07.418158  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 99/120
	I1205 20:43:08.420190  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 100/120
	I1205 20:43:09.421704  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 101/120
	I1205 20:43:10.423366  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 102/120
	I1205 20:43:11.425595  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 103/120
	I1205 20:43:12.427367  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 104/120
	I1205 20:43:13.429416  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 105/120
	I1205 20:43:14.431000  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 106/120
	I1205 20:43:15.432772  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 107/120
	I1205 20:43:16.434253  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 108/120
	I1205 20:43:17.435659  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 109/120
	I1205 20:43:18.437841  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 110/120
	I1205 20:43:19.439559  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 111/120
	I1205 20:43:20.441156  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 112/120
	I1205 20:43:21.442780  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 113/120
	I1205 20:43:22.444340  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 114/120
	I1205 20:43:23.446508  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 115/120
	I1205 20:43:24.448572  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 116/120
	I1205 20:43:25.450881  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 117/120
	I1205 20:43:26.452492  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 118/120
	I1205 20:43:27.454248  316092 main.go:141] libmachine: (ha-689539-m03) Waiting for machine to stop 119/120
	I1205 20:43:28.455504  316092 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1205 20:43:28.455590  316092 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 20:43:28.457637  316092 out.go:201] 
	W1205 20:43:28.459362  316092 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1205 20:43:28.459382  316092 out.go:270] * 
	* 
	W1205 20:43:28.462644  316092 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:43:28.464121  316092 out.go:201] 

                                                
                                                
** /stderr **
ha_test.go:466: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-689539 -v=7 --alsologtostderr" : exit status 82
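The stop failure above follows a fixed shape: the driver issues a stop request, polls the domain state roughly once per second ("Waiting for machine to stop 0/120" through "119/120"), and finally gives up with `unable to stop vm, current state "Running"`, which minikube surfaces as GUEST_STOP_TIMEOUT and exit status 82. The snippet below is a minimal sketch of that poll-until-stopped-or-timeout pattern, not minikube's actual stop.go; the requestStop and isRunning callbacks are hypothetical stand-ins for the libmachine driver calls seen in the log.

package main

import (
	"errors"
	"fmt"
	"time"
)

// stopWithTimeout asks the driver to stop the VM, then polls its state once per
// interval for at most `attempts` polls before giving up.
func stopWithTimeout(requestStop func() error, isRunning func() bool, attempts int, interval time.Duration) error {
	if err := requestStop(); err != nil {
		return fmt.Errorf("requesting stop: %w", err)
	}
	for i := 0; i < attempts; i++ {
		if !isRunning() {
			return nil // the machine reported a stopped state
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(interval)
	}
	// After the last poll the state is still "Running"; the caller can map this
	// to GUEST_STOP_TIMEOUT and a non-zero exit such as 82.
	return errors.New(`unable to stop vm, current state "Running"`)
}

func main() {
	// Hypothetical driver that never stops, to reproduce the timeout quickly.
	requestStop := func() error { return nil }
	alwaysRunning := func() bool { return true }
	if err := stopWithTimeout(requestStop, alwaysRunning, 3, 10*time.Millisecond); err != nil {
		fmt.Println("stop err:", err)
	}
}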
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 start -p ha-689539 --wait=true -v=7 --alsologtostderr
E1205 20:43:44.028191  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:46:49.076833  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 start -p ha-689539 --wait=true -v=7 --alsologtostderr: (4m18.009768335s)
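The follow-up `start --wait=true` run recovered all four nodes in roughly 4m18s even though ha-689539-m03 never honored the graceful stop. When a KVM guest gets stuck like this, one manual workaround outside the test harness (an assumption about operator practice, not something the test performs) is to force the libvirt domain off before retrying; the sketch below shells out to `virsh destroy`, libvirt's hard power-off, which leaves the domain defined but powered down.

package main

import (
	"fmt"
	"os/exec"
)

// forceOff hard-powers-off a libvirt domain. `virsh destroy` does not undefine
// or delete the VM; it is the equivalent of pulling the power cord.
func forceOff(domain string) error {
	out, err := exec.Command("virsh", "destroy", domain).CombinedOutput()
	if err != nil {
		return fmt.Errorf("virsh destroy %s: %v: %s", domain, err, out)
	}
	return nil
}

func main() {
	// Domain name taken from the log above; adjust for whichever node is stuck.
	if err := forceOff("ha-689539-m03"); err != nil {
		fmt.Println(err)
	}
}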
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-689539
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-689539 -n ha-689539
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-689539 logs -n 25: (2.096778361s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-689539 cp ha-689539-m03:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m02:/home/docker/cp-test_ha-689539-m03_ha-689539-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m02 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m03_ha-689539-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m03:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04:/home/docker/cp-test_ha-689539-m03_ha-689539-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m04 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m03_ha-689539-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-689539 cp testdata/cp-test.txt                                                | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1989065978/001/cp-test_ha-689539-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539:/home/docker/cp-test_ha-689539-m04_ha-689539.txt                       |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539 sudo cat                                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m04_ha-689539.txt                                 |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m02:/home/docker/cp-test_ha-689539-m04_ha-689539-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m02 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m04_ha-689539-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03:/home/docker/cp-test_ha-689539-m04_ha-689539-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m03 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m04_ha-689539-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-689539 node stop m02 -v=7                                                     | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-689539 node start m02 -v=7                                                    | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-689539 -v=7                                                           | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-689539 -v=7                                                                | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-689539 --wait=true -v=7                                                    | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:43 UTC | 05 Dec 24 20:47 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-689539                                                                | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:47 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:43:28
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:43:28.523794  316560 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:43:28.523947  316560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:43:28.523959  316560 out.go:358] Setting ErrFile to fd 2...
	I1205 20:43:28.523963  316560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:43:28.524158  316560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 20:43:28.524768  316560 out.go:352] Setting JSON to false
	I1205 20:43:28.525801  316560 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":12357,"bootTime":1733419052,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:43:28.525955  316560 start.go:139] virtualization: kvm guest
	I1205 20:43:28.528439  316560 out.go:177] * [ha-689539] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:43:28.530496  316560 notify.go:220] Checking for updates...
	I1205 20:43:28.530515  316560 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 20:43:28.532524  316560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:43:28.534047  316560 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:43:28.535378  316560 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:43:28.536925  316560 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:43:28.538410  316560 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:43:28.540178  316560 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:43:28.540335  316560 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:43:28.540832  316560 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:43:28.540883  316560 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:43:28.557541  316560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37549
	I1205 20:43:28.558101  316560 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:43:28.558689  316560 main.go:141] libmachine: Using API Version  1
	I1205 20:43:28.558715  316560 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:43:28.559124  316560 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:43:28.559381  316560 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:43:28.599040  316560 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:43:28.600533  316560 start.go:297] selected driver: kvm2
	I1205 20:43:28.600554  316560 start.go:901] validating driver "kvm2" against &{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.199 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false
default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:43:28.600714  316560 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:43:28.601122  316560 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:43:28.601202  316560 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:43:28.619281  316560 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:43:28.620068  316560 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:43:28.620107  316560 cni.go:84] Creating CNI manager for ""
	I1205 20:43:28.620167  316560 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1205 20:43:28.620244  316560 start.go:340] cluster config:
	{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.199 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:
false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:43:28.620386  316560 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:43:28.622534  316560 out.go:177] * Starting "ha-689539" primary control-plane node in "ha-689539" cluster
	I1205 20:43:28.623961  316560 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:43:28.624018  316560 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 20:43:28.624026  316560 cache.go:56] Caching tarball of preloaded images
	I1205 20:43:28.624125  316560 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:43:28.624137  316560 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:43:28.624283  316560 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:43:28.624523  316560 start.go:360] acquireMachinesLock for ha-689539: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:43:28.624578  316560 start.go:364] duration metric: took 34.231µs to acquireMachinesLock for "ha-689539"
	I1205 20:43:28.624596  316560 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:43:28.624601  316560 fix.go:54] fixHost starting: 
	I1205 20:43:28.624908  316560 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:43:28.624948  316560 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:43:28.640788  316560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41183
	I1205 20:43:28.641343  316560 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:43:28.641966  316560 main.go:141] libmachine: Using API Version  1
	I1205 20:43:28.641994  316560 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:43:28.642423  316560 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:43:28.642626  316560 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:43:28.642817  316560 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:43:28.644628  316560 fix.go:112] recreateIfNeeded on ha-689539: state=Running err=<nil>
	W1205 20:43:28.644651  316560 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:43:28.646620  316560 out.go:177] * Updating the running kvm2 "ha-689539" VM ...
	I1205 20:43:28.647805  316560 machine.go:93] provisionDockerMachine start ...
	I1205 20:43:28.647834  316560 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:43:28.648067  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:43:28.651145  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:28.651728  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:43:28.651759  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:28.651979  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:43:28.652175  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:43:28.652396  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:43:28.652508  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:43:28.652681  316560 main.go:141] libmachine: Using SSH client type: native
	I1205 20:43:28.652947  316560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:43:28.652969  316560 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:43:28.755823  316560 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-689539
	
	I1205 20:43:28.755865  316560 main.go:141] libmachine: (ha-689539) Calling .GetMachineName
	I1205 20:43:28.756234  316560 buildroot.go:166] provisioning hostname "ha-689539"
	I1205 20:43:28.756271  316560 main.go:141] libmachine: (ha-689539) Calling .GetMachineName
	I1205 20:43:28.756463  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:43:28.759911  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:28.760394  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:43:28.760436  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:28.760636  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:43:28.760880  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:43:28.761052  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:43:28.761180  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:43:28.761402  316560 main.go:141] libmachine: Using SSH client type: native
	I1205 20:43:28.761627  316560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:43:28.761642  316560 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-689539 && echo "ha-689539" | sudo tee /etc/hostname
	I1205 20:43:28.885267  316560 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-689539
	
	I1205 20:43:28.885308  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:43:28.888436  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:28.888840  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:43:28.888880  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:28.889024  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:43:28.889270  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:43:28.889481  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:43:28.889644  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:43:28.889818  316560 main.go:141] libmachine: Using SSH client type: native
	I1205 20:43:28.890058  316560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:43:28.890079  316560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-689539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-689539/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-689539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:43:28.994930  316560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:43:28.994973  316560 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 20:43:28.995000  316560 buildroot.go:174] setting up certificates
	I1205 20:43:28.995021  316560 provision.go:84] configureAuth start
	I1205 20:43:28.995033  316560 main.go:141] libmachine: (ha-689539) Calling .GetMachineName
	I1205 20:43:28.995445  316560 main.go:141] libmachine: (ha-689539) Calling .GetIP
	I1205 20:43:28.998331  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:28.998834  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:43:28.998863  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:28.999100  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:43:29.001749  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:29.002150  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:43:29.002182  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:29.002374  316560 provision.go:143] copyHostCerts
	I1205 20:43:29.002413  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:43:29.002466  316560 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 20:43:29.002494  316560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:43:29.002590  316560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 20:43:29.002694  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:43:29.002722  316560 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 20:43:29.002732  316560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:43:29.002772  316560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 20:43:29.002838  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:43:29.002862  316560 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 20:43:29.002872  316560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:43:29.002907  316560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 20:43:29.002975  316560 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.ha-689539 san=[127.0.0.1 192.168.39.220 ha-689539 localhost minikube]
	I1205 20:43:29.264180  316560 provision.go:177] copyRemoteCerts
	I1205 20:43:29.264846  316560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:43:29.264899  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:43:29.268215  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:29.268646  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:43:29.268681  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:29.268882  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:43:29.269123  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:43:29.269322  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:43:29.269457  316560 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:43:29.349425  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 20:43:29.349538  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 20:43:29.376403  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 20:43:29.376522  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1205 20:43:29.403327  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 20:43:29.403413  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:43:29.433179  316560 provision.go:87] duration metric: took 438.138747ms to configureAuth
	I1205 20:43:29.433217  316560 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:43:29.433483  316560 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:43:29.433572  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:43:29.436452  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:29.436816  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:43:29.436849  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:29.436993  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:43:29.437202  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:43:29.437406  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:43:29.437566  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:43:29.437705  316560 main.go:141] libmachine: Using SSH client type: native
	I1205 20:43:29.437876  316560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:43:29.437891  316560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:45:00.170304  316560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:45:00.170343  316560 machine.go:96] duration metric: took 1m31.522517617s to provisionDockerMachine
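
Almost all of that 1m31.5s is the single SSH command above: it was issued at 20:43:29.437891 and only returned at 20:45:00.170304, presumably dominated by the systemctl restart crio it runs. Worked out from the timestamps in this log:

    20:45:00.170304 - 20:43:29.437891 ≈ 90.73s of the 1m31.52s provisionDockerMachine total
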
	I1205 20:45:00.170369  316560 start.go:293] postStartSetup for "ha-689539" (driver="kvm2")
	I1205 20:45:00.170385  316560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:45:00.170417  316560 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:45:00.170858  316560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:45:00.170892  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:45:00.174571  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.175229  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:45:00.175293  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.175460  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:45:00.175731  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:45:00.175934  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:45:00.176081  316560 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:45:00.257286  316560 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:45:00.261856  316560 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:45:00.261897  316560 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 20:45:00.262001  316560 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 20:45:00.262101  316560 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 20:45:00.262115  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /etc/ssl/certs/3007652.pem
	I1205 20:45:00.262234  316560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:45:00.272307  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:45:00.297513  316560 start.go:296] duration metric: took 127.124371ms for postStartSetup
	I1205 20:45:00.297582  316560 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:45:00.297948  316560 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1205 20:45:00.297986  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:45:00.300906  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.301353  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:45:00.301394  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.301684  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:45:00.301925  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:45:00.302092  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:45:00.302225  316560 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	W1205 20:45:00.381252  316560 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1205 20:45:00.381290  316560 fix.go:56] duration metric: took 1m31.756689874s for fixHost
	I1205 20:45:00.381317  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:45:00.384395  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.384765  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:45:00.384793  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.385011  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:45:00.385242  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:45:00.385420  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:45:00.385626  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:45:00.385859  316560 main.go:141] libmachine: Using SSH client type: native
	I1205 20:45:00.386092  316560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:45:00.386104  316560 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:45:00.486788  316560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733431500.450055357
	
	I1205 20:45:00.486813  316560 fix.go:216] guest clock: 1733431500.450055357
	I1205 20:45:00.486821  316560 fix.go:229] Guest: 2024-12-05 20:45:00.450055357 +0000 UTC Remote: 2024-12-05 20:45:00.381299398 +0000 UTC m=+91.902891530 (delta=68.755959ms)
	I1205 20:45:00.486871  316560 fix.go:200] guest clock delta is within tolerance: 68.755959ms
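
The delta reported by fix.go is simply guest minus remote wall-clock time, using the two timestamps printed just above:

    1733431500.450055357s (guest) - 1733431500.381299398s (remote) = 0.068755959s ≈ 68.76ms

Since that is within tolerance, the guest clock is left untouched and provisioning continues.
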
	I1205 20:45:00.486883  316560 start.go:83] releasing machines lock for "ha-689539", held for 1m31.862293868s
	I1205 20:45:00.486910  316560 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:45:00.487212  316560 main.go:141] libmachine: (ha-689539) Calling .GetIP
	I1205 20:45:00.490359  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.490827  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:45:00.490865  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.491034  316560 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:45:00.491702  316560 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:45:00.491889  316560 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:45:00.491991  316560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:45:00.492045  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:45:00.492086  316560 ssh_runner.go:195] Run: cat /version.json
	I1205 20:45:00.492114  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:45:00.494807  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.495090  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.495239  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:45:00.495263  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.495484  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:45:00.495553  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:45:00.495591  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.495663  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:45:00.495756  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:45:00.495842  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:45:00.495917  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:45:00.496071  316560 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:45:00.496109  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:45:00.496284  316560 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:45:00.594274  316560 ssh_runner.go:195] Run: systemctl --version
	I1205 20:45:00.600566  316560 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:45:00.759515  316560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:45:00.766516  316560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:45:00.766611  316560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:45:00.776193  316560 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 20:45:00.776225  316560 start.go:495] detecting cgroup driver to use...
	I1205 20:45:00.776320  316560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:45:00.793818  316560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:45:00.808483  316560 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:45:00.808563  316560 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:45:00.823010  316560 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:45:00.837241  316560 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:45:01.016966  316560 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:45:01.214570  316560 docker.go:233] disabling docker service ...
	I1205 20:45:01.214758  316560 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:45:01.236929  316560 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:45:01.253682  316560 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:45:01.413812  316560 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:45:01.571419  316560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:45:01.585719  316560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:45:01.606428  316560 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:45:01.606534  316560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:45:01.617331  316560 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:45:01.617406  316560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:45:01.629074  316560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:45:01.639985  316560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:45:01.651233  316560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:45:01.662924  316560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:45:01.674287  316560 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:45:01.687445  316560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:45:01.698500  316560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:45:01.708448  316560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:45:01.718686  316560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:45:01.862941  316560 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:45:10.134420  316560 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.271417878s)
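
For readers tracing the CRI-O reconfiguration above, the sed commands issued at 20:45:01 should leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is a reconstruction from the logged commands, assuming a stock drop-in file, not a dump taken from the VM:

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

Together with the runtime-endpoint written to /etc/crictl.yaml above, these are the settings the kubelet/kubeadm configuration later in this log expects (cgroupfs driver, /var/run/crio/crio.sock endpoint).
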
	I1205 20:45:10.134453  316560 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:45:10.134515  316560 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:45:10.139504  316560 start.go:563] Will wait 60s for crictl version
	I1205 20:45:10.139573  316560 ssh_runner.go:195] Run: which crictl
	I1205 20:45:10.143374  316560 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:45:10.180433  316560 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:45:10.180520  316560 ssh_runner.go:195] Run: crio --version
	I1205 20:45:10.212224  316560 ssh_runner.go:195] Run: crio --version
	I1205 20:45:10.241852  316560 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:45:10.243112  316560 main.go:141] libmachine: (ha-689539) Calling .GetIP
	I1205 20:45:10.246166  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:10.246566  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:45:10.246597  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:10.246869  316560 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:45:10.251642  316560 kubeadm.go:883] updating cluster {Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.199 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-sto
rageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:45:10.251807  316560 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:45:10.251866  316560 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:45:10.295096  316560 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:45:10.295125  316560 crio.go:433] Images already preloaded, skipping extraction
	I1205 20:45:10.295205  316560 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:45:10.335284  316560 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:45:10.335349  316560 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:45:10.335362  316560 kubeadm.go:934] updating node { 192.168.39.220 8443 v1.31.2 crio true true} ...
	I1205 20:45:10.335502  316560 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-689539 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:45:10.335600  316560 ssh_runner.go:195] Run: crio config
	I1205 20:45:10.382323  316560 cni.go:84] Creating CNI manager for ""
	I1205 20:45:10.382347  316560 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1205 20:45:10.382359  316560 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:45:10.382389  316560 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-689539 NodeName:ha-689539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:45:10.382563  316560 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-689539"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.220"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:45:10.382587  316560 kube-vip.go:115] generating kube-vip config ...
	I1205 20:45:10.382645  316560 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 20:45:10.393873  316560 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 20:45:10.394042  316560 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I1205 20:45:10.394114  316560 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:45:10.403457  316560 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:45:10.403564  316560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1205 20:45:10.413037  316560 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1205 20:45:10.430336  316560 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:45:10.447705  316560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1205 20:45:10.465868  316560 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 20:45:10.484140  316560 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 20:45:10.488304  316560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:45:10.633501  316560 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 20:45:10.649266  316560 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539 for IP: 192.168.39.220
	I1205 20:45:10.649312  316560 certs.go:194] generating shared ca certs ...
	I1205 20:45:10.649331  316560 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:45:10.649534  316560 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 20:45:10.649598  316560 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 20:45:10.649612  316560 certs.go:256] generating profile certs ...
	I1205 20:45:10.649768  316560 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key
	I1205 20:45:10.649803  316560 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.8c848dba
	I1205 20:45:10.649828  316560 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.8c848dba with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220 192.168.39.224 192.168.39.133 192.168.39.254]
	I1205 20:45:10.763663  316560 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.8c848dba ...
	I1205 20:45:10.763706  316560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.8c848dba: {Name:mkc044097ef7c863a0e42dc9a837bbd35af8a486 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:45:10.763941  316560 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.8c848dba ...
	I1205 20:45:10.763961  316560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.8c848dba: {Name:mk3746e26aad8b48ffbe0150e638db3a5a6e8a99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:45:10.764086  316560 certs.go:381] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.8c848dba -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt
	I1205 20:45:10.764328  316560 certs.go:385] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.8c848dba -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key
	I1205 20:45:10.764533  316560 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key
	I1205 20:45:10.764556  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:45:10.764574  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:45:10.764594  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:45:10.764613  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:45:10.764633  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 20:45:10.764662  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 20:45:10.764682  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 20:45:10.764701  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 20:45:10.764789  316560 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 20:45:10.764832  316560 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 20:45:10.764851  316560 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:45:10.764882  316560 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 20:45:10.764911  316560 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:45:10.764951  316560 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 20:45:10.765003  316560 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:45:10.765045  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:45:10.765065  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem -> /usr/share/ca-certificates/300765.pem
	I1205 20:45:10.765084  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /usr/share/ca-certificates/3007652.pem
	I1205 20:45:10.765718  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:45:10.790740  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:45:10.814482  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:45:10.850359  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:45:10.874978  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1205 20:45:10.899151  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:45:10.923275  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:45:10.948572  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:45:10.973663  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:45:10.997911  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 20:45:11.022242  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 20:45:11.046595  316560 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:45:11.063574  316560 ssh_runner.go:195] Run: openssl version
	I1205 20:45:11.069207  316560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 20:45:11.081182  316560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 20:45:11.085863  316560 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 20:45:11.085966  316560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 20:45:11.091817  316560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:45:11.101873  316560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:45:11.112985  316560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:45:11.117453  316560 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:45:11.117532  316560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:45:11.123549  316560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:45:11.132941  316560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 20:45:11.143755  316560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 20:45:11.148246  316560 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 20:45:11.148422  316560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 20:45:11.154150  316560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
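
The openssl/ln pairs above implement OpenSSL's hashed CA-directory convention: the value printed by "openssl x509 -hash -noout" becomes the symlink name, with a ".0" suffix, so certificate verification against a CA path can find the file. A minimal sketch of the same steps done by hand, using the minikubeCA entry from this log (b5213941 is the hash the log itself links to):

    openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem   # prints b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
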
	I1205 20:45:11.163942  316560 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:45:11.168662  316560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:45:11.174316  316560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:45:11.180418  316560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:45:11.186356  316560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:45:11.192525  316560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:45:11.198242  316560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 20:45:11.203966  316560 kubeadm.go:392] StartCluster: {Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.199 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storag
eclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:45:11.204118  316560 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:45:11.204174  316560 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:45:11.241430  316560 cri.go:89] found id: "1ca90d340e6c4d9ec4453e42a32b557f6511e51ac13fafb85d5926148704071b"
	I1205 20:45:11.241470  316560 cri.go:89] found id: "74e9ebfe479c06a375e9c7eda349ff0325b34ec8e2d833ce3f510d778bcd7d19"
	I1205 20:45:11.241477  316560 cri.go:89] found id: "914645f157711d840a8087de6557776db2abf8d87659cd542db3913d13f0522e"
	I1205 20:45:11.241489  316560 cri.go:89] found id: "05a6cfcd7e9ee9f891eb88ed5524553b60ff7eea407bf604b4bb647571d2d6bc"
	I1205 20:45:11.241494  316560 cri.go:89] found id: "c6007ba446b77339e212a73381a8f38a179f608f925ff7a5ea1b5d82ae33932a"
	I1205 20:45:11.241500  316560 cri.go:89] found id: "74e8c78df0a6db76a9459104d39c6ef10926413179a5c4d311a9731258ab0f02"
	I1205 20:45:11.241507  316560 cri.go:89] found id: "0809642e9449ba90a816abcee2bd42a09b6b67ed76a448e2d048ccaf59218f61"
	I1205 20:45:11.241513  316560 cri.go:89] found id: "0a16a5003f86348e862f9550da656af6282a73ec42e356d33277fd89aaf930df"
	I1205 20:45:11.241518  316560 cri.go:89] found id: "4431afbd69d99866cc27a419bb59862ef9f5cfdbb9bd9cd5bc4fc820eba9a01b"
	I1205 20:45:11.241540  316560 cri.go:89] found id: "1e9238618cdfeaa8f5cda3dadc37d1e8157c6e781e2f421e149ea764a7138e42"
	I1205 20:45:11.241550  316560 cri.go:89] found id: "2033f56968a9f0c2e15d272c7bf15a278a24d2f148a58cc7194c50096b822668"
	I1205 20:45:11.241563  316560 cri.go:89] found id: "cd2211f15ae3cd3d69be3df3528c3d849f2b50615ce67935cdb97567a8a5fe19"
	I1205 20:45:11.241576  316560 cri.go:89] found id: "4a056592a0f933a18207a4ce68694f30ffbd9c7502e8e1a5717c67850246dfb2"
	I1205 20:45:11.241587  316560 cri.go:89] found id: ""
	I1205 20:45:11.241648  316560 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-689539 -n ha-689539
helpers_test.go:261: (dbg) Run:  kubectl --context ha-689539 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (382.82s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (142s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 stop -v=7 --alsologtostderr
E1205 20:48:12.142489  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:48:16.319890  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-689539 stop -v=7 --alsologtostderr: exit status 82 (2m0.495135592s)

                                                
                                                
-- stdout --
	* Stopping node "ha-689539-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:48:06.834792  318346 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:48:06.834927  318346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:48:06.834936  318346 out.go:358] Setting ErrFile to fd 2...
	I1205 20:48:06.834940  318346 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:48:06.835134  318346 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 20:48:06.835382  318346 out.go:352] Setting JSON to false
	I1205 20:48:06.835466  318346 mustload.go:65] Loading cluster: ha-689539
	I1205 20:48:06.835930  318346 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:48:06.836027  318346 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:48:06.836211  318346 mustload.go:65] Loading cluster: ha-689539
	I1205 20:48:06.836345  318346 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:48:06.836371  318346 stop.go:39] StopHost: ha-689539-m04
	I1205 20:48:06.836786  318346 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:48:06.836833  318346 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:48:06.853611  318346 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32913
	I1205 20:48:06.854304  318346 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:48:06.855055  318346 main.go:141] libmachine: Using API Version  1
	I1205 20:48:06.855083  318346 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:48:06.855492  318346 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:48:06.858254  318346 out.go:177] * Stopping node "ha-689539-m04"  ...
	I1205 20:48:06.859631  318346 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1205 20:48:06.859671  318346 main.go:141] libmachine: (ha-689539-m04) Calling .DriverName
	I1205 20:48:06.859930  318346 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1205 20:48:06.859956  318346 main.go:141] libmachine: (ha-689539-m04) Calling .GetSSHHostname
	I1205 20:48:06.863372  318346 main.go:141] libmachine: (ha-689539-m04) DBG | domain ha-689539-m04 has defined MAC address 52:54:00:f0:2c:73 in network mk-ha-689539
	I1205 20:48:06.863942  318346 main.go:141] libmachine: (ha-689539-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f0:2c:73", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:47:34 +0000 UTC Type:0 Mac:52:54:00:f0:2c:73 Iaid: IPaddr:192.168.39.199 Prefix:24 Hostname:ha-689539-m04 Clientid:01:52:54:00:f0:2c:73}
	I1205 20:48:06.863964  318346 main.go:141] libmachine: (ha-689539-m04) DBG | domain ha-689539-m04 has defined IP address 192.168.39.199 and MAC address 52:54:00:f0:2c:73 in network mk-ha-689539
	I1205 20:48:06.864151  318346 main.go:141] libmachine: (ha-689539-m04) Calling .GetSSHPort
	I1205 20:48:06.864416  318346 main.go:141] libmachine: (ha-689539-m04) Calling .GetSSHKeyPath
	I1205 20:48:06.864620  318346 main.go:141] libmachine: (ha-689539-m04) Calling .GetSSHUsername
	I1205 20:48:06.864780  318346 sshutil.go:53] new ssh client: &{IP:192.168.39.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539-m04/id_rsa Username:docker}
	I1205 20:48:06.945352  318346 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1205 20:48:06.999578  318346 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1205 20:48:07.052737  318346 main.go:141] libmachine: Stopping "ha-689539-m04"...
	I1205 20:48:07.052774  318346 main.go:141] libmachine: (ha-689539-m04) Calling .GetState
	I1205 20:48:07.054429  318346 main.go:141] libmachine: (ha-689539-m04) Calling .Stop
	I1205 20:48:07.058111  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 0/120
	I1205 20:48:08.059457  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 1/120
	I1205 20:48:09.060921  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 2/120
	I1205 20:48:10.062558  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 3/120
	I1205 20:48:11.064813  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 4/120
	I1205 20:48:12.067022  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 5/120
	I1205 20:48:13.068717  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 6/120
	I1205 20:48:14.070027  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 7/120
	I1205 20:48:15.071690  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 8/120
	I1205 20:48:16.073181  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 9/120
	I1205 20:48:17.074952  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 10/120
	I1205 20:48:18.076522  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 11/120
	I1205 20:48:19.077865  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 12/120
	I1205 20:48:20.079307  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 13/120
	I1205 20:48:21.080825  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 14/120
	I1205 20:48:22.082865  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 15/120
	I1205 20:48:23.084233  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 16/120
	I1205 20:48:24.085648  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 17/120
	I1205 20:48:25.087121  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 18/120
	I1205 20:48:26.088473  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 19/120
	I1205 20:48:27.090365  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 20/120
	I1205 20:48:28.092549  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 21/120
	I1205 20:48:29.094020  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 22/120
	I1205 20:48:30.095568  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 23/120
	I1205 20:48:31.096960  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 24/120
	I1205 20:48:32.098560  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 25/120
	I1205 20:48:33.100489  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 26/120
	I1205 20:48:34.102167  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 27/120
	I1205 20:48:35.103600  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 28/120
	I1205 20:48:36.105188  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 29/120
	I1205 20:48:37.107461  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 30/120
	I1205 20:48:38.109643  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 31/120
	I1205 20:48:39.111132  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 32/120
	I1205 20:48:40.112500  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 33/120
	I1205 20:48:41.114232  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 34/120
	I1205 20:48:42.116299  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 35/120
	I1205 20:48:43.117795  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 36/120
	I1205 20:48:44.119385  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 37/120
	I1205 20:48:45.120863  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 38/120
	I1205 20:48:46.122388  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 39/120
	I1205 20:48:47.124762  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 40/120
	I1205 20:48:48.126538  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 41/120
	I1205 20:48:49.128533  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 42/120
	I1205 20:48:50.129984  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 43/120
	I1205 20:48:51.131325  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 44/120
	I1205 20:48:52.133508  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 45/120
	I1205 20:48:53.135248  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 46/120
	I1205 20:48:54.136843  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 47/120
	I1205 20:48:55.138791  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 48/120
	I1205 20:48:56.140712  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 49/120
	I1205 20:48:57.142265  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 50/120
	I1205 20:48:58.144603  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 51/120
	I1205 20:48:59.146161  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 52/120
	I1205 20:49:00.148691  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 53/120
	I1205 20:49:01.150606  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 54/120
	I1205 20:49:02.152571  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 55/120
	I1205 20:49:03.154094  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 56/120
	I1205 20:49:04.155584  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 57/120
	I1205 20:49:05.157235  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 58/120
	I1205 20:49:06.158847  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 59/120
	I1205 20:49:07.160333  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 60/120
	I1205 20:49:08.162046  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 61/120
	I1205 20:49:09.163566  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 62/120
	I1205 20:49:10.165004  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 63/120
	I1205 20:49:11.166739  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 64/120
	I1205 20:49:12.169318  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 65/120
	I1205 20:49:13.170842  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 66/120
	I1205 20:49:14.172577  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 67/120
	I1205 20:49:15.174350  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 68/120
	I1205 20:49:16.175741  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 69/120
	I1205 20:49:17.176999  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 70/120
	I1205 20:49:18.178535  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 71/120
	I1205 20:49:19.180424  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 72/120
	I1205 20:49:20.181988  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 73/120
	I1205 20:49:21.183599  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 74/120
	I1205 20:49:22.185521  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 75/120
	I1205 20:49:23.187044  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 76/120
	I1205 20:49:24.188423  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 77/120
	I1205 20:49:25.189800  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 78/120
	I1205 20:49:26.191475  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 79/120
	I1205 20:49:27.194046  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 80/120
	I1205 20:49:28.195555  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 81/120
	I1205 20:49:29.197284  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 82/120
	I1205 20:49:30.199486  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 83/120
	I1205 20:49:31.201102  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 84/120
	I1205 20:49:32.203560  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 85/120
	I1205 20:49:33.204901  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 86/120
	I1205 20:49:34.206806  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 87/120
	I1205 20:49:35.208160  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 88/120
	I1205 20:49:36.209769  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 89/120
	I1205 20:49:37.211244  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 90/120
	I1205 20:49:38.212834  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 91/120
	I1205 20:49:39.214443  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 92/120
	I1205 20:49:40.216236  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 93/120
	I1205 20:49:41.217983  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 94/120
	I1205 20:49:42.220237  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 95/120
	I1205 20:49:43.221779  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 96/120
	I1205 20:49:44.223580  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 97/120
	I1205 20:49:45.225070  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 98/120
	I1205 20:49:46.226510  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 99/120
	I1205 20:49:47.228739  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 100/120
	I1205 20:49:48.230531  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 101/120
	I1205 20:49:49.232088  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 102/120
	I1205 20:49:50.233693  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 103/120
	I1205 20:49:51.235215  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 104/120
	I1205 20:49:52.237127  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 105/120
	I1205 20:49:53.238714  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 106/120
	I1205 20:49:54.240361  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 107/120
	I1205 20:49:55.241841  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 108/120
	I1205 20:49:56.243356  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 109/120
	I1205 20:49:57.245584  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 110/120
	I1205 20:49:58.247197  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 111/120
	I1205 20:49:59.248773  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 112/120
	I1205 20:50:00.250327  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 113/120
	I1205 20:50:01.252124  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 114/120
	I1205 20:50:02.254384  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 115/120
	I1205 20:50:03.256774  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 116/120
	I1205 20:50:04.258429  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 117/120
	I1205 20:50:05.260423  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 118/120
	I1205 20:50:06.261870  318346 main.go:141] libmachine: (ha-689539-m04) Waiting for machine to stop 119/120
	I1205 20:50:07.263034  318346 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1205 20:50:07.263132  318346 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 20:50:07.265008  318346 out.go:201] 
	W1205 20:50:07.266515  318346 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1205 20:50:07.266536  318346 out.go:270] * 
	* 
	W1205 20:50:07.269745  318346 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 20:50:07.271356  318346 out.go:201] 

                                                
                                                
** /stderr **
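The stderr log above begins the stop attempt by backing up the node's CNI and Kubernetes configuration into /var/lib/minikube/backup before asking the driver to stop the VM. The Go sketch below is illustrative only (not minikube source): it reproduces the mkdir and rsync --relative commands visible in the log, but runs them locally, whereas minikube issues them on the node through its SSH runner (the "ssh_runner.go:195] Run:" lines).

	// Illustrative sketch only, not minikube code: back up /etc/cni and
	// /etc/kubernetes into /var/lib/minikube/backup, preserving the original
	// paths under the backup directory via rsync --relative.
	package main
	
	import (
		"log"
		"os/exec"
	)
	
	// run executes a command and aborts with its combined output on failure.
	func run(args ...string) {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			log.Fatalf("%v failed: %v\n%s", args, err, out)
		}
	}
	
	func main() {
		run("sudo", "mkdir", "-p", "/var/lib/minikube/backup")
		run("sudo", "rsync", "--archive", "--relative", "/etc/cni", "/var/lib/minikube/backup")
		run("sudo", "rsync", "--archive", "--relative", "/etc/kubernetes", "/var/lib/minikube/backup")
	}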
ha_test.go:535: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-689539 stop -v=7 --alsologtostderr": exit status 82
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Done: out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr: (18.886820831s)
ha_test.go:545: status says not two control-plane nodes are present: args "out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr": 
ha_test.go:551: status says not three kubelets are stopped: args "out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr": 
ha_test.go:554: status says not two apiservers are stopped: args "out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr": 
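The GUEST_STOP_TIMEOUT behind these failed assertions comes from the long run of "Waiting for machine to stop 0/120 ... 119/120" lines: roughly 120 one-second polls of the VM state, matching the two minutes between 20:48:07 and 20:50:07 in the log, after which the machine was still "Running". The sketch below shows that poll-until-timeout pattern in simplified form; it is an assumption-laden illustration rather than minikube's implementation, and the vmState helper is hard-coded to "Running" so it exercises the timeout path seen here.

	// Simplified sketch (assumed, not minikube's code) of polling a VM state
	// once per interval until it reports Stopped or the attempt budget runs out.
	package main
	
	import (
		"fmt"
		"time"
	)
	
	// vmState stands in for the driver's state query; hard-coded to "Running"
	// so this sketch follows the same timeout path as the report above.
	func vmState() string { return "Running" }
	
	// waitForStop polls until the state is "Stopped" or attempts are exhausted.
	func waitForStop(attempts int, interval time.Duration) error {
		for i := 0; i < attempts; i++ {
			if vmState() == "Stopped" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
			time.Sleep(interval)
		}
		return fmt.Errorf("unable to stop vm, current state %q", vmState())
	}
	
	func main() {
		if err := waitForStop(120, time.Second); err != nil {
			fmt.Println("stop err:", err)
		}
	}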
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-689539 -n ha-689539
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-689539 logs -n 25: (1.992131183s)
helpers_test.go:252: TestMultiControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-689539 ssh -n ha-689539-m02 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m03_ha-689539-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m03:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04:/home/docker/cp-test_ha-689539-m03_ha-689539-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m04 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m03_ha-689539-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-689539 cp testdata/cp-test.txt                                                | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /tmp/TestMultiControlPlaneserialCopyFile1989065978/001/cp-test_ha-689539-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539:/home/docker/cp-test_ha-689539-m04_ha-689539.txt                       |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539 sudo cat                                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m04_ha-689539.txt                                 |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m02:/home/docker/cp-test_ha-689539-m04_ha-689539-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m02 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m04_ha-689539-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m03:/home/docker/cp-test_ha-689539-m04_ha-689539-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n                                                                 | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | ha-689539-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-689539 ssh -n ha-689539-m03 sudo cat                                          | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC | 05 Dec 24 20:38 UTC |
	|         | /home/docker/cp-test_ha-689539-m04_ha-689539-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-689539 node stop m02 -v=7                                                     | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:38 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-689539 node start m02 -v=7                                                    | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-689539 -v=7                                                           | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-689539 -v=7                                                                | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:41 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-689539 --wait=true -v=7                                                    | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:43 UTC | 05 Dec 24 20:47 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-689539                                                                | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:47 UTC |                     |
	| node    | ha-689539 node delete m03 -v=7                                                   | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:47 UTC | 05 Dec 24 20:48 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-689539 stop -v=7                                                              | ha-689539 | jenkins | v1.34.0 | 05 Dec 24 20:48 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:43:28
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:43:28.523794  316560 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:43:28.523947  316560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:43:28.523959  316560 out.go:358] Setting ErrFile to fd 2...
	I1205 20:43:28.523963  316560 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:43:28.524158  316560 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 20:43:28.524768  316560 out.go:352] Setting JSON to false
	I1205 20:43:28.525801  316560 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":12357,"bootTime":1733419052,"procs":184,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:43:28.525955  316560 start.go:139] virtualization: kvm guest
	I1205 20:43:28.528439  316560 out.go:177] * [ha-689539] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:43:28.530496  316560 notify.go:220] Checking for updates...
	I1205 20:43:28.530515  316560 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 20:43:28.532524  316560 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:43:28.534047  316560 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:43:28.535378  316560 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:43:28.536925  316560 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:43:28.538410  316560 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:43:28.540178  316560 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:43:28.540335  316560 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:43:28.540832  316560 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:43:28.540883  316560 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:43:28.557541  316560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37549
	I1205 20:43:28.558101  316560 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:43:28.558689  316560 main.go:141] libmachine: Using API Version  1
	I1205 20:43:28.558715  316560 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:43:28.559124  316560 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:43:28.559381  316560 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:43:28.599040  316560 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:43:28.600533  316560 start.go:297] selected driver: kvm2
	I1205 20:43:28.600554  316560 start.go:901] validating driver "kvm2" against &{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.199 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false
default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:43:28.600714  316560 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:43:28.601122  316560 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:43:28.601202  316560 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:43:28.619281  316560 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:43:28.620068  316560 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 20:43:28.620107  316560 cni.go:84] Creating CNI manager for ""
	I1205 20:43:28.620167  316560 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1205 20:43:28.620244  316560 start.go:340] cluster config:
	{Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.199 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:
false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:43:28.620386  316560 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:43:28.622534  316560 out.go:177] * Starting "ha-689539" primary control-plane node in "ha-689539" cluster
	I1205 20:43:28.623961  316560 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:43:28.624018  316560 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 20:43:28.624026  316560 cache.go:56] Caching tarball of preloaded images
	I1205 20:43:28.624125  316560 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 20:43:28.624137  316560 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:43:28.624283  316560 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/config.json ...
	I1205 20:43:28.624523  316560 start.go:360] acquireMachinesLock for ha-689539: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 20:43:28.624578  316560 start.go:364] duration metric: took 34.231µs to acquireMachinesLock for "ha-689539"
	I1205 20:43:28.624596  316560 start.go:96] Skipping create...Using existing machine configuration
	I1205 20:43:28.624601  316560 fix.go:54] fixHost starting: 
	I1205 20:43:28.624908  316560 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:43:28.624948  316560 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:43:28.640788  316560 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41183
	I1205 20:43:28.641343  316560 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:43:28.641966  316560 main.go:141] libmachine: Using API Version  1
	I1205 20:43:28.641994  316560 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:43:28.642423  316560 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:43:28.642626  316560 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:43:28.642817  316560 main.go:141] libmachine: (ha-689539) Calling .GetState
	I1205 20:43:28.644628  316560 fix.go:112] recreateIfNeeded on ha-689539: state=Running err=<nil>
	W1205 20:43:28.644651  316560 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 20:43:28.646620  316560 out.go:177] * Updating the running kvm2 "ha-689539" VM ...
	I1205 20:43:28.647805  316560 machine.go:93] provisionDockerMachine start ...
	I1205 20:43:28.647834  316560 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:43:28.648067  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:43:28.651145  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:28.651728  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:43:28.651759  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:28.651979  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:43:28.652175  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:43:28.652396  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:43:28.652508  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:43:28.652681  316560 main.go:141] libmachine: Using SSH client type: native
	I1205 20:43:28.652947  316560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:43:28.652969  316560 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 20:43:28.755823  316560 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-689539
	
	I1205 20:43:28.755865  316560 main.go:141] libmachine: (ha-689539) Calling .GetMachineName
	I1205 20:43:28.756234  316560 buildroot.go:166] provisioning hostname "ha-689539"
	I1205 20:43:28.756271  316560 main.go:141] libmachine: (ha-689539) Calling .GetMachineName
	I1205 20:43:28.756463  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:43:28.759911  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:28.760394  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:43:28.760436  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:28.760636  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:43:28.760880  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:43:28.761052  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:43:28.761180  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:43:28.761402  316560 main.go:141] libmachine: Using SSH client type: native
	I1205 20:43:28.761627  316560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:43:28.761642  316560 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-689539 && echo "ha-689539" | sudo tee /etc/hostname
	I1205 20:43:28.885267  316560 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-689539
	
	I1205 20:43:28.885308  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:43:28.888436  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:28.888840  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:43:28.888880  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:28.889024  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:43:28.889270  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:43:28.889481  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:43:28.889644  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:43:28.889818  316560 main.go:141] libmachine: Using SSH client type: native
	I1205 20:43:28.890058  316560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:43:28.890079  316560 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-689539' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-689539/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-689539' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 20:43:28.994930  316560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 20:43:28.994973  316560 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 20:43:28.995000  316560 buildroot.go:174] setting up certificates
	I1205 20:43:28.995021  316560 provision.go:84] configureAuth start
	I1205 20:43:28.995033  316560 main.go:141] libmachine: (ha-689539) Calling .GetMachineName
	I1205 20:43:28.995445  316560 main.go:141] libmachine: (ha-689539) Calling .GetIP
	I1205 20:43:28.998331  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:28.998834  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:43:28.998863  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:28.999100  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:43:29.001749  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:29.002150  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:43:29.002182  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:29.002374  316560 provision.go:143] copyHostCerts
	I1205 20:43:29.002413  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:43:29.002466  316560 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 20:43:29.002494  316560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 20:43:29.002590  316560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 20:43:29.002694  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:43:29.002722  316560 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 20:43:29.002732  316560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 20:43:29.002772  316560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 20:43:29.002838  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:43:29.002862  316560 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 20:43:29.002872  316560 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 20:43:29.002907  316560 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 20:43:29.002975  316560 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.ha-689539 san=[127.0.0.1 192.168.39.220 ha-689539 localhost minikube]
	I1205 20:43:29.264180  316560 provision.go:177] copyRemoteCerts
	I1205 20:43:29.264846  316560 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 20:43:29.264899  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:43:29.268215  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:29.268646  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:43:29.268681  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:29.268882  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:43:29.269123  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:43:29.269322  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:43:29.269457  316560 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:43:29.349425  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 20:43:29.349538  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 20:43:29.376403  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 20:43:29.376522  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I1205 20:43:29.403327  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 20:43:29.403413  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 20:43:29.433179  316560 provision.go:87] duration metric: took 438.138747ms to configureAuth
	I1205 20:43:29.433217  316560 buildroot.go:189] setting minikube options for container-runtime
	I1205 20:43:29.433483  316560 config.go:182] Loaded profile config "ha-689539": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:43:29.433572  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:43:29.436452  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:29.436816  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:43:29.436849  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:43:29.436993  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:43:29.437202  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:43:29.437406  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:43:29.437566  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:43:29.437705  316560 main.go:141] libmachine: Using SSH client type: native
	I1205 20:43:29.437876  316560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:43:29.437891  316560 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 20:45:00.170304  316560 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 20:45:00.170343  316560 machine.go:96] duration metric: took 1m31.522517617s to provisionDockerMachine
	I1205 20:45:00.170369  316560 start.go:293] postStartSetup for "ha-689539" (driver="kvm2")
	I1205 20:45:00.170385  316560 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 20:45:00.170417  316560 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:45:00.170858  316560 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 20:45:00.170892  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:45:00.174571  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.175229  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:45:00.175293  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.175460  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:45:00.175731  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:45:00.175934  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:45:00.176081  316560 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:45:00.257286  316560 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 20:45:00.261856  316560 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 20:45:00.261897  316560 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 20:45:00.262001  316560 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 20:45:00.262101  316560 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 20:45:00.262115  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /etc/ssl/certs/3007652.pem
	I1205 20:45:00.262234  316560 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 20:45:00.272307  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:45:00.297513  316560 start.go:296] duration metric: took 127.124371ms for postStartSetup
	I1205 20:45:00.297582  316560 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:45:00.297948  316560 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I1205 20:45:00.297986  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:45:00.300906  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.301353  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:45:00.301394  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.301684  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:45:00.301925  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:45:00.302092  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:45:00.302225  316560 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	W1205 20:45:00.381252  316560 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I1205 20:45:00.381290  316560 fix.go:56] duration metric: took 1m31.756689874s for fixHost
	I1205 20:45:00.381317  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:45:00.384395  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.384765  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:45:00.384793  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.385011  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:45:00.385242  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:45:00.385420  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:45:00.385626  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:45:00.385859  316560 main.go:141] libmachine: Using SSH client type: native
	I1205 20:45:00.386092  316560 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I1205 20:45:00.386104  316560 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 20:45:00.486788  316560 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733431500.450055357
	
	I1205 20:45:00.486813  316560 fix.go:216] guest clock: 1733431500.450055357
	I1205 20:45:00.486821  316560 fix.go:229] Guest: 2024-12-05 20:45:00.450055357 +0000 UTC Remote: 2024-12-05 20:45:00.381299398 +0000 UTC m=+91.902891530 (delta=68.755959ms)
	I1205 20:45:00.486871  316560 fix.go:200] guest clock delta is within tolerance: 68.755959ms
	I1205 20:45:00.486883  316560 start.go:83] releasing machines lock for "ha-689539", held for 1m31.862293868s
	I1205 20:45:00.486910  316560 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:45:00.487212  316560 main.go:141] libmachine: (ha-689539) Calling .GetIP
	I1205 20:45:00.490359  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.490827  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:45:00.490865  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.491034  316560 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:45:00.491702  316560 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:45:00.491889  316560 main.go:141] libmachine: (ha-689539) Calling .DriverName
	I1205 20:45:00.491991  316560 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 20:45:00.492045  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:45:00.492086  316560 ssh_runner.go:195] Run: cat /version.json
	I1205 20:45:00.492114  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHHostname
	I1205 20:45:00.494807  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.495090  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.495239  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:45:00.495263  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.495484  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:45:00.495553  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:45:00.495591  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:00.495663  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:45:00.495756  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHPort
	I1205 20:45:00.495842  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:45:00.495917  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHKeyPath
	I1205 20:45:00.496071  316560 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:45:00.496109  316560 main.go:141] libmachine: (ha-689539) Calling .GetSSHUsername
	I1205 20:45:00.496284  316560 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/ha-689539/id_rsa Username:docker}
	I1205 20:45:00.594274  316560 ssh_runner.go:195] Run: systemctl --version
	I1205 20:45:00.600566  316560 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 20:45:00.759515  316560 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 20:45:00.766516  316560 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 20:45:00.766611  316560 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 20:45:00.776193  316560 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 20:45:00.776225  316560 start.go:495] detecting cgroup driver to use...
	I1205 20:45:00.776320  316560 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 20:45:00.793818  316560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 20:45:00.808483  316560 docker.go:217] disabling cri-docker service (if available) ...
	I1205 20:45:00.808563  316560 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 20:45:00.823010  316560 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 20:45:00.837241  316560 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 20:45:01.016966  316560 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 20:45:01.214570  316560 docker.go:233] disabling docker service ...
	I1205 20:45:01.214758  316560 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 20:45:01.236929  316560 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 20:45:01.253682  316560 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 20:45:01.413812  316560 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 20:45:01.571419  316560 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 20:45:01.585719  316560 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 20:45:01.606428  316560 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 20:45:01.606534  316560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:45:01.617331  316560 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 20:45:01.617406  316560 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:45:01.629074  316560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:45:01.639985  316560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:45:01.651233  316560 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 20:45:01.662924  316560 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:45:01.674287  316560 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 20:45:01.687445  316560 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
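Pieced together from the sed edits above, /etc/crio/crio.conf.d/02-crio.conf should end up roughly as follows; this is a reconstruction from the commands, not the file's actual contents, and the TOML table placement is assumed from CRI-O's defaults:
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"
	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]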
	I1205 20:45:01.698500  316560 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 20:45:01.708448  316560 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 20:45:01.718686  316560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:45:01.862941  316560 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 20:45:10.134420  316560 ssh_runner.go:235] Completed: sudo systemctl restart crio: (8.271417878s)
	I1205 20:45:10.134453  316560 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 20:45:10.134515  316560 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 20:45:10.139504  316560 start.go:563] Will wait 60s for crictl version
	I1205 20:45:10.139573  316560 ssh_runner.go:195] Run: which crictl
	I1205 20:45:10.143374  316560 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 20:45:10.180433  316560 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 20:45:10.180520  316560 ssh_runner.go:195] Run: crio --version
	I1205 20:45:10.212224  316560 ssh_runner.go:195] Run: crio --version
	I1205 20:45:10.241852  316560 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 20:45:10.243112  316560 main.go:141] libmachine: (ha-689539) Calling .GetIP
	I1205 20:45:10.246166  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:10.246566  316560 main.go:141] libmachine: (ha-689539) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:92:19:fb", ip: ""} in network mk-ha-689539: {Iface:virbr1 ExpiryTime:2024-12-05 21:34:22 +0000 UTC Type:0 Mac:52:54:00:92:19:fb Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:ha-689539 Clientid:01:52:54:00:92:19:fb}
	I1205 20:45:10.246597  316560 main.go:141] libmachine: (ha-689539) DBG | domain ha-689539 has defined IP address 192.168.39.220 and MAC address 52:54:00:92:19:fb in network mk-ha-689539
	I1205 20:45:10.246869  316560 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 20:45:10.251642  316560 kubeadm.go:883] updating cluster {Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Cl
usterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.199 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-sto
rageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9
p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 20:45:10.251807  316560 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:45:10.251866  316560 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:45:10.295096  316560 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:45:10.295125  316560 crio.go:433] Images already preloaded, skipping extraction
	I1205 20:45:10.295205  316560 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 20:45:10.335284  316560 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 20:45:10.335349  316560 cache_images.go:84] Images are preloaded, skipping loading
	I1205 20:45:10.335362  316560 kubeadm.go:934] updating node { 192.168.39.220 8443 v1.31.2 crio true true} ...
	I1205 20:45:10.335502  316560 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-689539 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 20:45:10.335600  316560 ssh_runner.go:195] Run: crio config
	I1205 20:45:10.382323  316560 cni.go:84] Creating CNI manager for ""
	I1205 20:45:10.382347  316560 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I1205 20:45:10.382359  316560 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 20:45:10.382389  316560 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-689539 NodeName:ha-689539 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.220"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 20:45:10.382563  316560 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-689539"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.220"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.220"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 20:45:10.382587  316560 kube-vip.go:115] generating kube-vip config ...
	I1205 20:45:10.382645  316560 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I1205 20:45:10.393873  316560 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I1205 20:45:10.394042  316560 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.7
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
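The manifest above is copied to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, where the kubelet runs it as a static pod that holds the 192.168.39.254 control-plane VIP. A hedged way to confirm it landed, assuming the node is reachable:
	out/minikube-linux-amd64 ssh -p ha-689539 -- "ls /etc/kubernetes/manifests/kube-vip.yaml && sudo crictl ps --name kube-vip"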
	I1205 20:45:10.394114  316560 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 20:45:10.403457  316560 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 20:45:10.403564  316560 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I1205 20:45:10.413037  316560 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I1205 20:45:10.430336  316560 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 20:45:10.447705  316560 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2289 bytes)
	I1205 20:45:10.465868  316560 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I1205 20:45:10.484140  316560 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I1205 20:45:10.488304  316560 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 20:45:10.633501  316560 ssh_runner.go:195] Run: sudo systemctl start kubelet
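At this point the kubelet drop-in, kubeadm.yaml and kube-vip manifest are staged and the kubelet has been started. A minimal check that the override from the unit above took effect (a sketch; the grep pattern is only illustrative):
	out/minikube-linux-amd64 ssh -p ha-689539 -- "systemctl cat kubelet | grep -- --node-ip && systemctl is-active kubelet"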
	I1205 20:45:10.649266  316560 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539 for IP: 192.168.39.220
	I1205 20:45:10.649312  316560 certs.go:194] generating shared ca certs ...
	I1205 20:45:10.649331  316560 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:45:10.649534  316560 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 20:45:10.649598  316560 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 20:45:10.649612  316560 certs.go:256] generating profile certs ...
	I1205 20:45:10.649768  316560 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/client.key
	I1205 20:45:10.649803  316560 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.8c848dba
	I1205 20:45:10.649828  316560 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.8c848dba with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.220 192.168.39.224 192.168.39.133 192.168.39.254]
	I1205 20:45:10.763663  316560 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.8c848dba ...
	I1205 20:45:10.763706  316560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.8c848dba: {Name:mkc044097ef7c863a0e42dc9a837bbd35af8a486 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:45:10.763941  316560 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.8c848dba ...
	I1205 20:45:10.763961  316560 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.8c848dba: {Name:mk3746e26aad8b48ffbe0150e638db3a5a6e8a99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:45:10.764086  316560 certs.go:381] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt.8c848dba -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt
	I1205 20:45:10.764328  316560 certs.go:385] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key.8c848dba -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key
	I1205 20:45:10.764533  316560 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key
	I1205 20:45:10.764556  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 20:45:10.764574  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 20:45:10.764594  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 20:45:10.764613  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 20:45:10.764633  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 20:45:10.764662  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 20:45:10.764682  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 20:45:10.764701  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 20:45:10.764789  316560 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 20:45:10.764832  316560 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 20:45:10.764851  316560 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 20:45:10.764882  316560 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 20:45:10.764911  316560 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 20:45:10.764951  316560 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 20:45:10.765003  316560 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 20:45:10.765045  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:45:10.765065  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem -> /usr/share/ca-certificates/300765.pem
	I1205 20:45:10.765084  316560 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /usr/share/ca-certificates/3007652.pem
	I1205 20:45:10.765718  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 20:45:10.790740  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 20:45:10.814482  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 20:45:10.850359  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 20:45:10.874978  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1205 20:45:10.899151  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 20:45:10.923275  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 20:45:10.948572  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/ha-689539/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 20:45:10.973663  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 20:45:10.997911  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 20:45:11.022242  316560 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 20:45:11.046595  316560 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 20:45:11.063574  316560 ssh_runner.go:195] Run: openssl version
	I1205 20:45:11.069207  316560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 20:45:11.081182  316560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 20:45:11.085863  316560 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 20:45:11.085966  316560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 20:45:11.091817  316560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 20:45:11.101873  316560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 20:45:11.112985  316560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:45:11.117453  316560 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:45:11.117532  316560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 20:45:11.123549  316560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 20:45:11.132941  316560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 20:45:11.143755  316560 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 20:45:11.148246  316560 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 20:45:11.148422  316560 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 20:45:11.154150  316560 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
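The hash-named links created above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash lookup convention: the file name is the certificate's subject hash plus a .0 suffix, which is exactly what the preceding `openssl x509 -hash -noout` calls compute. Sketch, using the minikubeCA certificate from this log:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/b5213941.0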
	I1205 20:45:11.163942  316560 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 20:45:11.168662  316560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 20:45:11.174316  316560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 20:45:11.180418  316560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 20:45:11.186356  316560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 20:45:11.192525  316560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 20:45:11.198242  316560 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
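The run of openssl calls above checks that none of the control-plane certificates expire within the next day: -checkend 86400 exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now. Equivalent stand-alone sketch:
	openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	  && echo "still valid for at least 24h" || echo "expires within 24h (or already expired)"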
	I1205 20:45:11.203966  316560 kubeadm.go:392] StartCluster: {Name:ha-689539 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 Clust
erName:ha-689539 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.224 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.133 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.199 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storag
eclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p20
00.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:45:11.204118  316560 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 20:45:11.204174  316560 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 20:45:11.241430  316560 cri.go:89] found id: "1ca90d340e6c4d9ec4453e42a32b557f6511e51ac13fafb85d5926148704071b"
	I1205 20:45:11.241470  316560 cri.go:89] found id: "74e9ebfe479c06a375e9c7eda349ff0325b34ec8e2d833ce3f510d778bcd7d19"
	I1205 20:45:11.241477  316560 cri.go:89] found id: "914645f157711d840a8087de6557776db2abf8d87659cd542db3913d13f0522e"
	I1205 20:45:11.241489  316560 cri.go:89] found id: "05a6cfcd7e9ee9f891eb88ed5524553b60ff7eea407bf604b4bb647571d2d6bc"
	I1205 20:45:11.241494  316560 cri.go:89] found id: "c6007ba446b77339e212a73381a8f38a179f608f925ff7a5ea1b5d82ae33932a"
	I1205 20:45:11.241500  316560 cri.go:89] found id: "74e8c78df0a6db76a9459104d39c6ef10926413179a5c4d311a9731258ab0f02"
	I1205 20:45:11.241507  316560 cri.go:89] found id: "0809642e9449ba90a816abcee2bd42a09b6b67ed76a448e2d048ccaf59218f61"
	I1205 20:45:11.241513  316560 cri.go:89] found id: "0a16a5003f86348e862f9550da656af6282a73ec42e356d33277fd89aaf930df"
	I1205 20:45:11.241518  316560 cri.go:89] found id: "4431afbd69d99866cc27a419bb59862ef9f5cfdbb9bd9cd5bc4fc820eba9a01b"
	I1205 20:45:11.241540  316560 cri.go:89] found id: "1e9238618cdfeaa8f5cda3dadc37d1e8157c6e781e2f421e149ea764a7138e42"
	I1205 20:45:11.241550  316560 cri.go:89] found id: "2033f56968a9f0c2e15d272c7bf15a278a24d2f148a58cc7194c50096b822668"
	I1205 20:45:11.241563  316560 cri.go:89] found id: "cd2211f15ae3cd3d69be3df3528c3d849f2b50615ce67935cdb97567a8a5fe19"
	I1205 20:45:11.241576  316560 cri.go:89] found id: "4a056592a0f933a18207a4ce68694f30ffbd9c7502e8e1a5717c67850246dfb2"
	I1205 20:45:11.241587  316560 cri.go:89] found id: ""
	I1205 20:45:11.241648  316560 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-689539 -n ha-689539
helpers_test.go:261: (dbg) Run:  kubectl --context ha-689539 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopCluster (142.00s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (334.78s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-784478
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-784478
E1205 21:06:49.077130  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-784478: exit status 82 (2m1.853618684s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-784478-m03"  ...
	* Stopping node "multinode-784478-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-784478" : exit status 82
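The stop failed with GUEST_STOP_TIMEOUT (exit status 82): at least one VM was still reported "Running" when the stop timeout expired. Outside the test harness, the same situation could be reproduced and the suggested logs captured roughly like this (a sketch; the profile name and --file flag come from the output above):
	out/minikube-linux-amd64 stop -p multinode-784478 --alsologtostderr
	out/minikube-linux-amd64 logs -p multinode-784478 --file=logs.txt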
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-784478 --wait=true -v=8 --alsologtostderr
E1205 21:08:16.325637  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-784478 --wait=true -v=8 --alsologtostderr: (3m30.000216537s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-784478
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-784478 -n multinode-784478
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-784478 logs -n 25: (2.156292862s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-784478 ssh -n                                                                 | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-784478 cp multinode-784478-m02:/home/docker/cp-test.txt                       | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile478551597/001/cp-test_multinode-784478-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n                                                                 | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-784478 cp multinode-784478-m02:/home/docker/cp-test.txt                       | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478:/home/docker/cp-test_multinode-784478-m02_multinode-784478.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n                                                                 | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n multinode-784478 sudo cat                                       | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | /home/docker/cp-test_multinode-784478-m02_multinode-784478.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-784478 cp multinode-784478-m02:/home/docker/cp-test.txt                       | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m03:/home/docker/cp-test_multinode-784478-m02_multinode-784478-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n                                                                 | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n multinode-784478-m03 sudo cat                                   | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | /home/docker/cp-test_multinode-784478-m02_multinode-784478-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-784478 cp testdata/cp-test.txt                                                | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n                                                                 | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-784478 cp multinode-784478-m03:/home/docker/cp-test.txt                       | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile478551597/001/cp-test_multinode-784478-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n                                                                 | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-784478 cp multinode-784478-m03:/home/docker/cp-test.txt                       | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478:/home/docker/cp-test_multinode-784478-m03_multinode-784478.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n                                                                 | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n multinode-784478 sudo cat                                       | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | /home/docker/cp-test_multinode-784478-m03_multinode-784478.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-784478 cp multinode-784478-m03:/home/docker/cp-test.txt                       | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m02:/home/docker/cp-test_multinode-784478-m03_multinode-784478-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n                                                                 | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n multinode-784478-m02 sudo cat                                   | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | /home/docker/cp-test_multinode-784478-m03_multinode-784478-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-784478 node stop m03                                                          | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	| node    | multinode-784478 node start                                                             | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:05 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-784478                                                                | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:05 UTC |                     |
	| stop    | -p multinode-784478                                                                     | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:05 UTC |                     |
	| start   | -p multinode-784478                                                                     | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:07 UTC | 05 Dec 24 21:10 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-784478                                                                | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:10 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 21:07:20
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 21:07:20.239334  328361 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:07:20.239472  328361 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:07:20.239483  328361 out.go:358] Setting ErrFile to fd 2...
	I1205 21:07:20.239487  328361 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:07:20.239662  328361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 21:07:20.240255  328361 out.go:352] Setting JSON to false
	I1205 21:07:20.241271  328361 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":13788,"bootTime":1733419052,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 21:07:20.241344  328361 start.go:139] virtualization: kvm guest
	I1205 21:07:20.243929  328361 out.go:177] * [multinode-784478] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 21:07:20.245678  328361 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 21:07:20.245687  328361 notify.go:220] Checking for updates...
	I1205 21:07:20.248790  328361 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 21:07:20.250329  328361 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:07:20.251616  328361 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:07:20.253206  328361 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 21:07:20.254658  328361 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 21:07:20.256585  328361 config.go:182] Loaded profile config "multinode-784478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:07:20.256740  328361 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 21:07:20.257439  328361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:07:20.257532  328361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:07:20.275213  328361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43441
	I1205 21:07:20.275881  328361 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:07:20.276491  328361 main.go:141] libmachine: Using API Version  1
	I1205 21:07:20.276515  328361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:07:20.276984  328361 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:07:20.277213  328361 main.go:141] libmachine: (multinode-784478) Calling .DriverName
	I1205 21:07:20.317149  328361 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 21:07:20.318681  328361 start.go:297] selected driver: kvm2
	I1205 21:07:20.318708  328361 start.go:901] validating driver "kvm2" against &{Name:multinode-784478 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-784478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.231 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:07:20.318880  328361 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 21:07:20.319262  328361 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:07:20.319373  328361 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 21:07:20.336226  328361 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 21:07:20.337071  328361 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:07:20.337110  328361 cni.go:84] Creating CNI manager for ""
	I1205 21:07:20.337148  328361 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1205 21:07:20.337216  328361 start.go:340] cluster config:
	{Name:multinode-784478 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-784478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.231 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:07:20.337346  328361 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:07:20.339361  328361 out.go:177] * Starting "multinode-784478" primary control-plane node in "multinode-784478" cluster
	I1205 21:07:20.341173  328361 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:07:20.341231  328361 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 21:07:20.341244  328361 cache.go:56] Caching tarball of preloaded images
	I1205 21:07:20.341411  328361 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 21:07:20.341429  328361 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 21:07:20.341574  328361 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/config.json ...
	I1205 21:07:20.341825  328361 start.go:360] acquireMachinesLock for multinode-784478: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:07:20.341889  328361 start.go:364] duration metric: took 40.294µs to acquireMachinesLock for "multinode-784478"
	I1205 21:07:20.341931  328361 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:07:20.341941  328361 fix.go:54] fixHost starting: 
	I1205 21:07:20.342247  328361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:07:20.342304  328361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:07:20.358047  328361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43267
	I1205 21:07:20.358705  328361 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:07:20.359295  328361 main.go:141] libmachine: Using API Version  1
	I1205 21:07:20.359323  328361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:07:20.359699  328361 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:07:20.359898  328361 main.go:141] libmachine: (multinode-784478) Calling .DriverName
	I1205 21:07:20.360044  328361 main.go:141] libmachine: (multinode-784478) Calling .GetState
	I1205 21:07:20.361794  328361 fix.go:112] recreateIfNeeded on multinode-784478: state=Running err=<nil>
	W1205 21:07:20.361821  328361 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:07:20.364072  328361 out.go:177] * Updating the running kvm2 "multinode-784478" VM ...
	I1205 21:07:20.365667  328361 machine.go:93] provisionDockerMachine start ...
	I1205 21:07:20.365690  328361 main.go:141] libmachine: (multinode-784478) Calling .DriverName
	I1205 21:07:20.365962  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHHostname
	I1205 21:07:20.368832  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.369315  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:07:20.369357  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.369483  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHPort
	I1205 21:07:20.369707  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:07:20.369866  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:07:20.370000  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHUsername
	I1205 21:07:20.370166  328361 main.go:141] libmachine: Using SSH client type: native
	I1205 21:07:20.370428  328361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1205 21:07:20.370443  328361 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:07:20.474439  328361 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-784478
	
	I1205 21:07:20.474472  328361 main.go:141] libmachine: (multinode-784478) Calling .GetMachineName
	I1205 21:07:20.474750  328361 buildroot.go:166] provisioning hostname "multinode-784478"
	I1205 21:07:20.474777  328361 main.go:141] libmachine: (multinode-784478) Calling .GetMachineName
	I1205 21:07:20.474971  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHHostname
	I1205 21:07:20.478126  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.478549  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:07:20.478648  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.478906  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHPort
	I1205 21:07:20.479141  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:07:20.479320  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:07:20.479485  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHUsername
	I1205 21:07:20.479682  328361 main.go:141] libmachine: Using SSH client type: native
	I1205 21:07:20.479887  328361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1205 21:07:20.479904  328361 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-784478 && echo "multinode-784478" | sudo tee /etc/hostname
	I1205 21:07:20.597040  328361 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-784478
	
	I1205 21:07:20.597070  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHHostname
	I1205 21:07:20.600395  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.600683  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:07:20.600710  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.600960  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHPort
	I1205 21:07:20.601235  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:07:20.601541  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:07:20.601762  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHUsername
	I1205 21:07:20.601974  328361 main.go:141] libmachine: Using SSH client type: native
	I1205 21:07:20.602188  328361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1205 21:07:20.602212  328361 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-784478' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-784478/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-784478' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:07:20.703175  328361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:07:20.703218  328361 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:07:20.703267  328361 buildroot.go:174] setting up certificates
	I1205 21:07:20.703286  328361 provision.go:84] configureAuth start
	I1205 21:07:20.703302  328361 main.go:141] libmachine: (multinode-784478) Calling .GetMachineName
	I1205 21:07:20.703688  328361 main.go:141] libmachine: (multinode-784478) Calling .GetIP
	I1205 21:07:20.707150  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.707577  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:07:20.707609  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.707788  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHHostname
	I1205 21:07:20.710730  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.711165  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:07:20.711208  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.711355  328361 provision.go:143] copyHostCerts
	I1205 21:07:20.711390  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:07:20.711423  328361 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:07:20.711441  328361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:07:20.711512  328361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:07:20.712072  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:07:20.712198  328361 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:07:20.712217  328361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:07:20.712323  328361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:07:20.712441  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:07:20.712493  328361 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:07:20.712510  328361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:07:20.712571  328361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:07:20.712683  328361 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.multinode-784478 san=[127.0.0.1 192.168.39.221 localhost minikube multinode-784478]
	I1205 21:07:20.874659  328361 provision.go:177] copyRemoteCerts
	I1205 21:07:20.874730  328361 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:07:20.874776  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHHostname
	I1205 21:07:20.877768  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.878191  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:07:20.878230  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.878441  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHPort
	I1205 21:07:20.878683  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:07:20.878831  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHUsername
	I1205 21:07:20.879030  328361 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/multinode-784478/id_rsa Username:docker}
	I1205 21:07:20.960728  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 21:07:20.960827  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:07:20.986382  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 21:07:20.986475  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1205 21:07:21.010950  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 21:07:21.011041  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 21:07:21.040118  328361 provision.go:87] duration metric: took 336.813011ms to configureAuth
	I1205 21:07:21.040154  328361 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:07:21.040380  328361 config.go:182] Loaded profile config "multinode-784478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:07:21.040463  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHHostname
	I1205 21:07:21.043437  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:21.043830  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:07:21.043867  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:21.044007  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHPort
	I1205 21:07:21.044240  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:07:21.044442  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:07:21.044638  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHUsername
	I1205 21:07:21.044823  328361 main.go:141] libmachine: Using SSH client type: native
	I1205 21:07:21.045052  328361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1205 21:07:21.045084  328361 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:08:51.750999  328361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:08:51.751038  328361 machine.go:96] duration metric: took 1m31.385355738s to provisionDockerMachine
	I1205 21:08:51.751057  328361 start.go:293] postStartSetup for "multinode-784478" (driver="kvm2")
	I1205 21:08:51.751082  328361 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:08:51.751115  328361 main.go:141] libmachine: (multinode-784478) Calling .DriverName
	I1205 21:08:51.751536  328361 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:08:51.751567  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHHostname
	I1205 21:08:51.755471  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:08:51.755922  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:08:51.755945  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:08:51.756171  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHPort
	I1205 21:08:51.756424  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:08:51.756621  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHUsername
	I1205 21:08:51.756793  328361 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/multinode-784478/id_rsa Username:docker}
	I1205 21:08:51.837314  328361 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:08:51.842037  328361 command_runner.go:130] > NAME=Buildroot
	I1205 21:08:51.842074  328361 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1205 21:08:51.842081  328361 command_runner.go:130] > ID=buildroot
	I1205 21:08:51.842087  328361 command_runner.go:130] > VERSION_ID=2023.02.9
	I1205 21:08:51.842092  328361 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1205 21:08:51.842368  328361 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:08:51.842407  328361 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:08:51.842494  328361 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:08:51.842587  328361 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:08:51.842599  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /etc/ssl/certs/3007652.pem
	I1205 21:08:51.842713  328361 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:08:51.852894  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:08:51.878715  328361 start.go:296] duration metric: took 127.625908ms for postStartSetup
	I1205 21:08:51.878785  328361 fix.go:56] duration metric: took 1m31.536844462s for fixHost
	I1205 21:08:51.878826  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHHostname
	I1205 21:08:51.881995  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:08:51.882389  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:08:51.882426  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:08:51.882655  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHPort
	I1205 21:08:51.882940  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:08:51.883147  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:08:51.883386  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHUsername
	I1205 21:08:51.883560  328361 main.go:141] libmachine: Using SSH client type: native
	I1205 21:08:51.883788  328361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1205 21:08:51.883802  328361 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:08:51.983007  328361 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733432931.960646715
	
	I1205 21:08:51.983034  328361 fix.go:216] guest clock: 1733432931.960646715
	I1205 21:08:51.983045  328361 fix.go:229] Guest: 2024-12-05 21:08:51.960646715 +0000 UTC Remote: 2024-12-05 21:08:51.878792101 +0000 UTC m=+91.682814268 (delta=81.854614ms)
	I1205 21:08:51.983085  328361 fix.go:200] guest clock delta is within tolerance: 81.854614ms
	I1205 21:08:51.983092  328361 start.go:83] releasing machines lock for "multinode-784478", held for 1m31.641172949s
	I1205 21:08:51.983115  328361 main.go:141] libmachine: (multinode-784478) Calling .DriverName
	I1205 21:08:51.983448  328361 main.go:141] libmachine: (multinode-784478) Calling .GetIP
	I1205 21:08:51.986634  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:08:51.987008  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:08:51.987036  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:08:51.987278  328361 main.go:141] libmachine: (multinode-784478) Calling .DriverName
	I1205 21:08:51.987880  328361 main.go:141] libmachine: (multinode-784478) Calling .DriverName
	I1205 21:08:51.988118  328361 main.go:141] libmachine: (multinode-784478) Calling .DriverName
	I1205 21:08:51.988209  328361 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:08:51.988272  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHHostname
	I1205 21:08:51.988421  328361 ssh_runner.go:195] Run: cat /version.json
	I1205 21:08:51.988445  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHHostname
	I1205 21:08:51.991155  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:08:51.991405  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:08:51.991556  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:08:51.991594  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:08:51.991743  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHPort
	I1205 21:08:51.991846  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:08:51.991871  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:08:51.991930  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:08:51.992016  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHPort
	I1205 21:08:51.992103  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHUsername
	I1205 21:08:51.992160  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:08:51.992225  328361 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/multinode-784478/id_rsa Username:docker}
	I1205 21:08:51.992257  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHUsername
	I1205 21:08:51.992387  328361 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/multinode-784478/id_rsa Username:docker}
	I1205 21:08:52.092962  328361 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 21:08:52.093565  328361 command_runner.go:130] > {"iso_version": "v1.34.0-1730913550-19917", "kicbase_version": "v0.0.45-1730888964-19917", "minikube_version": "v1.34.0", "commit": "72f43dde5d92c8ae490d0727dad53fb3ed6aa41e"}
	I1205 21:08:52.093755  328361 ssh_runner.go:195] Run: systemctl --version
	I1205 21:08:52.099794  328361 command_runner.go:130] > systemd 252 (252)
	I1205 21:08:52.099857  328361 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1205 21:08:52.100010  328361 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:08:52.250629  328361 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 21:08:52.259587  328361 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 21:08:52.259653  328361 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:08:52.259716  328361 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:08:52.270297  328361 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 21:08:52.270330  328361 start.go:495] detecting cgroup driver to use...
	I1205 21:08:52.270409  328361 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:08:52.287589  328361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:08:52.302176  328361 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:08:52.302261  328361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:08:52.316296  328361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:08:52.330708  328361 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:08:52.483558  328361 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:08:52.689242  328361 docker.go:233] disabling docker service ...
	I1205 21:08:52.689343  328361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:08:52.714808  328361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:08:52.736730  328361 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:08:52.938849  328361 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:08:53.097860  328361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:08:53.112496  328361 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:08:53.131756  328361 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1205 21:08:53.131822  328361 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:08:53.131881  328361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:08:53.142932  328361 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:08:53.143029  328361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:08:53.154022  328361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:08:53.164767  328361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:08:53.175656  328361 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:08:53.186854  328361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:08:53.197394  328361 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:08:53.208344  328361 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:08:53.218921  328361 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:08:53.229322  328361 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 21:08:53.229413  328361 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:08:53.239421  328361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:08:53.379606  328361 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:09:03.130509  328361 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.750854387s)
	I1205 21:09:03.130559  328361 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:09:03.130624  328361 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:09:03.135644  328361 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1205 21:09:03.135682  328361 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 21:09:03.135692  328361 command_runner.go:130] > Device: 0,22	Inode: 1351        Links: 1
	I1205 21:09:03.135701  328361 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 21:09:03.135706  328361 command_runner.go:130] > Access: 2024-12-05 21:09:02.992248644 +0000
	I1205 21:09:03.135722  328361 command_runner.go:130] > Modify: 2024-12-05 21:09:02.953246299 +0000
	I1205 21:09:03.135734  328361 command_runner.go:130] > Change: 2024-12-05 21:09:02.953246299 +0000
	I1205 21:09:03.135741  328361 command_runner.go:130] >  Birth: -
	I1205 21:09:03.135795  328361 start.go:563] Will wait 60s for crictl version
	I1205 21:09:03.135859  328361 ssh_runner.go:195] Run: which crictl
	I1205 21:09:03.139750  328361 command_runner.go:130] > /usr/bin/crictl
	I1205 21:09:03.139860  328361 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:09:03.174786  328361 command_runner.go:130] > Version:  0.1.0
	I1205 21:09:03.174817  328361 command_runner.go:130] > RuntimeName:  cri-o
	I1205 21:09:03.174823  328361 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1205 21:09:03.174831  328361 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 21:09:03.175864  328361 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:09:03.175976  328361 ssh_runner.go:195] Run: crio --version
	I1205 21:09:03.203090  328361 command_runner.go:130] > crio version 1.29.1
	I1205 21:09:03.203119  328361 command_runner.go:130] > Version:        1.29.1
	I1205 21:09:03.203127  328361 command_runner.go:130] > GitCommit:      unknown
	I1205 21:09:03.203134  328361 command_runner.go:130] > GitCommitDate:  unknown
	I1205 21:09:03.203140  328361 command_runner.go:130] > GitTreeState:   clean
	I1205 21:09:03.203148  328361 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1205 21:09:03.203154  328361 command_runner.go:130] > GoVersion:      go1.21.6
	I1205 21:09:03.203169  328361 command_runner.go:130] > Compiler:       gc
	I1205 21:09:03.203175  328361 command_runner.go:130] > Platform:       linux/amd64
	I1205 21:09:03.203180  328361 command_runner.go:130] > Linkmode:       dynamic
	I1205 21:09:03.203187  328361 command_runner.go:130] > BuildTags:      
	I1205 21:09:03.203193  328361 command_runner.go:130] >   containers_image_ostree_stub
	I1205 21:09:03.203200  328361 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1205 21:09:03.203206  328361 command_runner.go:130] >   btrfs_noversion
	I1205 21:09:03.203214  328361 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1205 21:09:03.203225  328361 command_runner.go:130] >   libdm_no_deferred_remove
	I1205 21:09:03.203234  328361 command_runner.go:130] >   seccomp
	I1205 21:09:03.203241  328361 command_runner.go:130] > LDFlags:          unknown
	I1205 21:09:03.203248  328361 command_runner.go:130] > SeccompEnabled:   true
	I1205 21:09:03.203255  328361 command_runner.go:130] > AppArmorEnabled:  false
	I1205 21:09:03.204418  328361 ssh_runner.go:195] Run: crio --version
	I1205 21:09:03.233671  328361 command_runner.go:130] > crio version 1.29.1
	I1205 21:09:03.233701  328361 command_runner.go:130] > Version:        1.29.1
	I1205 21:09:03.233707  328361 command_runner.go:130] > GitCommit:      unknown
	I1205 21:09:03.233711  328361 command_runner.go:130] > GitCommitDate:  unknown
	I1205 21:09:03.233716  328361 command_runner.go:130] > GitTreeState:   clean
	I1205 21:09:03.233722  328361 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1205 21:09:03.233726  328361 command_runner.go:130] > GoVersion:      go1.21.6
	I1205 21:09:03.233731  328361 command_runner.go:130] > Compiler:       gc
	I1205 21:09:03.233739  328361 command_runner.go:130] > Platform:       linux/amd64
	I1205 21:09:03.233744  328361 command_runner.go:130] > Linkmode:       dynamic
	I1205 21:09:03.233751  328361 command_runner.go:130] > BuildTags:      
	I1205 21:09:03.233758  328361 command_runner.go:130] >   containers_image_ostree_stub
	I1205 21:09:03.233765  328361 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1205 21:09:03.233772  328361 command_runner.go:130] >   btrfs_noversion
	I1205 21:09:03.233779  328361 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1205 21:09:03.233790  328361 command_runner.go:130] >   libdm_no_deferred_remove
	I1205 21:09:03.233796  328361 command_runner.go:130] >   seccomp
	I1205 21:09:03.233803  328361 command_runner.go:130] > LDFlags:          unknown
	I1205 21:09:03.233810  328361 command_runner.go:130] > SeccompEnabled:   true
	I1205 21:09:03.233820  328361 command_runner.go:130] > AppArmorEnabled:  false
	I1205 21:09:03.236923  328361 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 21:09:03.238479  328361 main.go:141] libmachine: (multinode-784478) Calling .GetIP
	I1205 21:09:03.241924  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:09:03.242193  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:09:03.242226  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:09:03.242523  328361 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 21:09:03.246805  328361 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1205 21:09:03.246924  328361 kubeadm.go:883] updating cluster {Name:multinode-784478 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-784478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.231 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
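Note: the cluster settings dumped in the log line above are also persisted on the CI host as the profile's config.json, which is often easier to inspect than the wrapped log line. A minimal sketch for cross-checking them, assuming the default MINIKUBE_HOME location (the exact path is not confirmed by this log):

	cat ~/.minikube/profiles/multinode-784478/config.json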
	I1205 21:09:03.247074  328361 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:09:03.247116  328361 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:09:03.290202  328361 command_runner.go:130] > {
	I1205 21:09:03.290238  328361 command_runner.go:130] >   "images": [
	I1205 21:09:03.290244  328361 command_runner.go:130] >     {
	I1205 21:09:03.290256  328361 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1205 21:09:03.290264  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.290274  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1205 21:09:03.290280  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290287  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.290314  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1205 21:09:03.290331  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1205 21:09:03.290336  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290342  328361 command_runner.go:130] >       "size": "94965812",
	I1205 21:09:03.290346  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.290350  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.290363  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.290375  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.290383  328361 command_runner.go:130] >     },
	I1205 21:09:03.290392  328361 command_runner.go:130] >     {
	I1205 21:09:03.290403  328361 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1205 21:09:03.290412  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.290421  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1205 21:09:03.290428  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290434  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.290448  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1205 21:09:03.290462  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1205 21:09:03.290468  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290472  328361 command_runner.go:130] >       "size": "94958644",
	I1205 21:09:03.290477  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.290486  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.290490  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.290495  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.290498  328361 command_runner.go:130] >     },
	I1205 21:09:03.290501  328361 command_runner.go:130] >     {
	I1205 21:09:03.290508  328361 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1205 21:09:03.290515  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.290521  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1205 21:09:03.290539  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290546  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.290553  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1205 21:09:03.290561  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1205 21:09:03.290565  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290570  328361 command_runner.go:130] >       "size": "1363676",
	I1205 21:09:03.290577  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.290581  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.290585  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.290592  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.290595  328361 command_runner.go:130] >     },
	I1205 21:09:03.290599  328361 command_runner.go:130] >     {
	I1205 21:09:03.290607  328361 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1205 21:09:03.290611  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.290618  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 21:09:03.290624  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290628  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.290637  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1205 21:09:03.290649  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1205 21:09:03.290656  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290661  328361 command_runner.go:130] >       "size": "31470524",
	I1205 21:09:03.290667  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.290670  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.290674  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.290680  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.290684  328361 command_runner.go:130] >     },
	I1205 21:09:03.290689  328361 command_runner.go:130] >     {
	I1205 21:09:03.290695  328361 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1205 21:09:03.290702  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.290707  328361 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1205 21:09:03.290710  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290714  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.290721  328361 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1205 21:09:03.290730  328361 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1205 21:09:03.290733  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290737  328361 command_runner.go:130] >       "size": "63273227",
	I1205 21:09:03.290741  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.290746  328361 command_runner.go:130] >       "username": "nonroot",
	I1205 21:09:03.290750  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.290753  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.290757  328361 command_runner.go:130] >     },
	I1205 21:09:03.290762  328361 command_runner.go:130] >     {
	I1205 21:09:03.290768  328361 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1205 21:09:03.290774  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.290779  328361 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1205 21:09:03.290782  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290788  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.290795  328361 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1205 21:09:03.290801  328361 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1205 21:09:03.290807  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290811  328361 command_runner.go:130] >       "size": "149009664",
	I1205 21:09:03.290817  328361 command_runner.go:130] >       "uid": {
	I1205 21:09:03.290822  328361 command_runner.go:130] >         "value": "0"
	I1205 21:09:03.290830  328361 command_runner.go:130] >       },
	I1205 21:09:03.290833  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.290837  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.290841  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.290845  328361 command_runner.go:130] >     },
	I1205 21:09:03.290848  328361 command_runner.go:130] >     {
	I1205 21:09:03.290853  328361 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1205 21:09:03.290860  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.290865  328361 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1205 21:09:03.290870  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290874  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.290882  328361 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1205 21:09:03.290891  328361 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1205 21:09:03.290895  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290899  328361 command_runner.go:130] >       "size": "95274464",
	I1205 21:09:03.290903  328361 command_runner.go:130] >       "uid": {
	I1205 21:09:03.290907  328361 command_runner.go:130] >         "value": "0"
	I1205 21:09:03.290910  328361 command_runner.go:130] >       },
	I1205 21:09:03.290914  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.290918  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.290924  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.290927  328361 command_runner.go:130] >     },
	I1205 21:09:03.290931  328361 command_runner.go:130] >     {
	I1205 21:09:03.290936  328361 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1205 21:09:03.290942  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.290947  328361 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1205 21:09:03.290950  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290957  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.290972  328361 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1205 21:09:03.290982  328361 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1205 21:09:03.290986  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290990  328361 command_runner.go:130] >       "size": "89474374",
	I1205 21:09:03.290994  328361 command_runner.go:130] >       "uid": {
	I1205 21:09:03.290999  328361 command_runner.go:130] >         "value": "0"
	I1205 21:09:03.291002  328361 command_runner.go:130] >       },
	I1205 21:09:03.291006  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.291009  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.291013  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.291016  328361 command_runner.go:130] >     },
	I1205 21:09:03.291019  328361 command_runner.go:130] >     {
	I1205 21:09:03.291025  328361 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1205 21:09:03.291029  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.291033  328361 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1205 21:09:03.291036  328361 command_runner.go:130] >       ],
	I1205 21:09:03.291044  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.291051  328361 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1205 21:09:03.291058  328361 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1205 21:09:03.291061  328361 command_runner.go:130] >       ],
	I1205 21:09:03.291065  328361 command_runner.go:130] >       "size": "92783513",
	I1205 21:09:03.291068  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.291072  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.291075  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.291079  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.291083  328361 command_runner.go:130] >     },
	I1205 21:09:03.291088  328361 command_runner.go:130] >     {
	I1205 21:09:03.291109  328361 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1205 21:09:03.291119  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.291124  328361 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1205 21:09:03.291128  328361 command_runner.go:130] >       ],
	I1205 21:09:03.291132  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.291138  328361 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1205 21:09:03.291145  328361 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1205 21:09:03.291151  328361 command_runner.go:130] >       ],
	I1205 21:09:03.291155  328361 command_runner.go:130] >       "size": "68457798",
	I1205 21:09:03.291159  328361 command_runner.go:130] >       "uid": {
	I1205 21:09:03.291163  328361 command_runner.go:130] >         "value": "0"
	I1205 21:09:03.291168  328361 command_runner.go:130] >       },
	I1205 21:09:03.291172  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.291176  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.291182  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.291185  328361 command_runner.go:130] >     },
	I1205 21:09:03.291188  328361 command_runner.go:130] >     {
	I1205 21:09:03.291194  328361 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1205 21:09:03.291239  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.291279  328361 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1205 21:09:03.291288  328361 command_runner.go:130] >       ],
	I1205 21:09:03.291295  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.291310  328361 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1205 21:09:03.291322  328361 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1205 21:09:03.291331  328361 command_runner.go:130] >       ],
	I1205 21:09:03.291337  328361 command_runner.go:130] >       "size": "742080",
	I1205 21:09:03.291347  328361 command_runner.go:130] >       "uid": {
	I1205 21:09:03.291367  328361 command_runner.go:130] >         "value": "65535"
	I1205 21:09:03.291377  328361 command_runner.go:130] >       },
	I1205 21:09:03.291383  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.291391  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.291398  328361 command_runner.go:130] >       "pinned": true
	I1205 21:09:03.291407  328361 command_runner.go:130] >     }
	I1205 21:09:03.291412  328361 command_runner.go:130] >   ]
	I1205 21:09:03.291419  328361 command_runner.go:130] > }
	I1205 21:09:03.291643  328361 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 21:09:03.291655  328361 crio.go:433] Images already preloaded, skipping extraction
	I1205 21:09:03.291712  328361 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:09:03.324462  328361 command_runner.go:130] > {
	I1205 21:09:03.324497  328361 command_runner.go:130] >   "images": [
	I1205 21:09:03.324504  328361 command_runner.go:130] >     {
	I1205 21:09:03.324515  328361 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1205 21:09:03.324522  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.324546  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1205 21:09:03.324553  328361 command_runner.go:130] >       ],
	I1205 21:09:03.324559  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.324572  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1205 21:09:03.324583  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1205 21:09:03.324589  328361 command_runner.go:130] >       ],
	I1205 21:09:03.324597  328361 command_runner.go:130] >       "size": "94965812",
	I1205 21:09:03.324607  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.324613  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.324627  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.324631  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.324635  328361 command_runner.go:130] >     },
	I1205 21:09:03.324638  328361 command_runner.go:130] >     {
	I1205 21:09:03.324644  328361 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1205 21:09:03.324651  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.324655  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1205 21:09:03.324659  328361 command_runner.go:130] >       ],
	I1205 21:09:03.324666  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.324673  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1205 21:09:03.324680  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1205 21:09:03.324686  328361 command_runner.go:130] >       ],
	I1205 21:09:03.324691  328361 command_runner.go:130] >       "size": "94958644",
	I1205 21:09:03.324694  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.324700  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.324706  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.324710  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.324713  328361 command_runner.go:130] >     },
	I1205 21:09:03.324719  328361 command_runner.go:130] >     {
	I1205 21:09:03.324727  328361 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1205 21:09:03.324733  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.324738  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1205 21:09:03.324742  328361 command_runner.go:130] >       ],
	I1205 21:09:03.324748  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.324755  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1205 21:09:03.324762  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1205 21:09:03.324769  328361 command_runner.go:130] >       ],
	I1205 21:09:03.324773  328361 command_runner.go:130] >       "size": "1363676",
	I1205 21:09:03.324777  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.324781  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.324796  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.324802  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.324805  328361 command_runner.go:130] >     },
	I1205 21:09:03.324808  328361 command_runner.go:130] >     {
	I1205 21:09:03.324814  328361 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1205 21:09:03.324819  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.324825  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 21:09:03.324828  328361 command_runner.go:130] >       ],
	I1205 21:09:03.324832  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.324840  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1205 21:09:03.324853  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1205 21:09:03.324860  328361 command_runner.go:130] >       ],
	I1205 21:09:03.324864  328361 command_runner.go:130] >       "size": "31470524",
	I1205 21:09:03.324868  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.324872  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.324876  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.324882  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.324885  328361 command_runner.go:130] >     },
	I1205 21:09:03.324889  328361 command_runner.go:130] >     {
	I1205 21:09:03.324894  328361 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1205 21:09:03.324900  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.324905  328361 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1205 21:09:03.324909  328361 command_runner.go:130] >       ],
	I1205 21:09:03.324914  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.324921  328361 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1205 21:09:03.324930  328361 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1205 21:09:03.324935  328361 command_runner.go:130] >       ],
	I1205 21:09:03.324941  328361 command_runner.go:130] >       "size": "63273227",
	I1205 21:09:03.324945  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.324950  328361 command_runner.go:130] >       "username": "nonroot",
	I1205 21:09:03.324954  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.324958  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.324961  328361 command_runner.go:130] >     },
	I1205 21:09:03.324967  328361 command_runner.go:130] >     {
	I1205 21:09:03.324973  328361 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1205 21:09:03.324979  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.324983  328361 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1205 21:09:03.324987  328361 command_runner.go:130] >       ],
	I1205 21:09:03.324991  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.324998  328361 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1205 21:09:03.325006  328361 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1205 21:09:03.325010  328361 command_runner.go:130] >       ],
	I1205 21:09:03.325017  328361 command_runner.go:130] >       "size": "149009664",
	I1205 21:09:03.325020  328361 command_runner.go:130] >       "uid": {
	I1205 21:09:03.325024  328361 command_runner.go:130] >         "value": "0"
	I1205 21:09:03.325030  328361 command_runner.go:130] >       },
	I1205 21:09:03.325038  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.325041  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.325045  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.325051  328361 command_runner.go:130] >     },
	I1205 21:09:03.325054  328361 command_runner.go:130] >     {
	I1205 21:09:03.325060  328361 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1205 21:09:03.325065  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.325070  328361 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1205 21:09:03.325074  328361 command_runner.go:130] >       ],
	I1205 21:09:03.325078  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.325086  328361 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1205 21:09:03.325093  328361 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1205 21:09:03.325096  328361 command_runner.go:130] >       ],
	I1205 21:09:03.325100  328361 command_runner.go:130] >       "size": "95274464",
	I1205 21:09:03.325104  328361 command_runner.go:130] >       "uid": {
	I1205 21:09:03.325108  328361 command_runner.go:130] >         "value": "0"
	I1205 21:09:03.325111  328361 command_runner.go:130] >       },
	I1205 21:09:03.325115  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.325121  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.325125  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.325129  328361 command_runner.go:130] >     },
	I1205 21:09:03.325131  328361 command_runner.go:130] >     {
	I1205 21:09:03.325137  328361 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1205 21:09:03.325143  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.325149  328361 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1205 21:09:03.325153  328361 command_runner.go:130] >       ],
	I1205 21:09:03.325159  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.325185  328361 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1205 21:09:03.325202  328361 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1205 21:09:03.325207  328361 command_runner.go:130] >       ],
	I1205 21:09:03.325215  328361 command_runner.go:130] >       "size": "89474374",
	I1205 21:09:03.325224  328361 command_runner.go:130] >       "uid": {
	I1205 21:09:03.325230  328361 command_runner.go:130] >         "value": "0"
	I1205 21:09:03.325237  328361 command_runner.go:130] >       },
	I1205 21:09:03.325244  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.325250  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.325260  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.325266  328361 command_runner.go:130] >     },
	I1205 21:09:03.325274  328361 command_runner.go:130] >     {
	I1205 21:09:03.325284  328361 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1205 21:09:03.325293  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.325301  328361 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1205 21:09:03.325310  328361 command_runner.go:130] >       ],
	I1205 21:09:03.325318  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.325325  328361 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1205 21:09:03.325337  328361 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1205 21:09:03.325343  328361 command_runner.go:130] >       ],
	I1205 21:09:03.325350  328361 command_runner.go:130] >       "size": "92783513",
	I1205 21:09:03.325359  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.325366  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.325376  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.325382  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.325391  328361 command_runner.go:130] >     },
	I1205 21:09:03.325397  328361 command_runner.go:130] >     {
	I1205 21:09:03.325411  328361 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1205 21:09:03.325418  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.325426  328361 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1205 21:09:03.325434  328361 command_runner.go:130] >       ],
	I1205 21:09:03.325441  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.325455  328361 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1205 21:09:03.325465  328361 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1205 21:09:03.325469  328361 command_runner.go:130] >       ],
	I1205 21:09:03.325473  328361 command_runner.go:130] >       "size": "68457798",
	I1205 21:09:03.325477  328361 command_runner.go:130] >       "uid": {
	I1205 21:09:03.325481  328361 command_runner.go:130] >         "value": "0"
	I1205 21:09:03.325484  328361 command_runner.go:130] >       },
	I1205 21:09:03.325488  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.325492  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.325498  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.325507  328361 command_runner.go:130] >     },
	I1205 21:09:03.325513  328361 command_runner.go:130] >     {
	I1205 21:09:03.325526  328361 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1205 21:09:03.325543  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.325554  328361 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1205 21:09:03.325561  328361 command_runner.go:130] >       ],
	I1205 21:09:03.325570  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.325581  328361 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1205 21:09:03.325590  328361 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1205 21:09:03.325594  328361 command_runner.go:130] >       ],
	I1205 21:09:03.325627  328361 command_runner.go:130] >       "size": "742080",
	I1205 21:09:03.325650  328361 command_runner.go:130] >       "uid": {
	I1205 21:09:03.325654  328361 command_runner.go:130] >         "value": "65535"
	I1205 21:09:03.325657  328361 command_runner.go:130] >       },
	I1205 21:09:03.325661  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.325665  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.325669  328361 command_runner.go:130] >       "pinned": true
	I1205 21:09:03.325673  328361 command_runner.go:130] >     }
	I1205 21:09:03.325676  328361 command_runner.go:130] >   ]
	I1205 21:09:03.325679  328361 command_runner.go:130] > }
	I1205 21:09:03.325811  328361 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 21:09:03.325823  328361 cache_images.go:84] Images are preloaded, skipping loading
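Note: the preload check above amounts to listing the images CRI-O already has on the node and comparing them against the expected set for v1.31.2. To reproduce it from the host, something along these lines should work (the profile name is taken from the log; reaching the node through minikube ssh is an assumption about the setup):

	minikube -p multinode-784478 ssh -- sudo crictl images --output json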
	I1205 21:09:03.325831  328361 kubeadm.go:934] updating node { 192.168.39.221 8443 v1.31.2 crio true true} ...
	I1205 21:09:03.325962  328361 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-784478 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-784478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
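Note: the kubelet unit snippet rendered above is what minikube templates onto the node. To confirm which flags are actually in effect (including --node-ip and --hostname-override), the full unit plus drop-ins can be dumped as sketched below; kubeadm-style installs usually keep the drop-in under /etc/systemd/system/kubelet.service.d/, but that path is an assumption rather than something this log shows:

	minikube -p multinode-784478 ssh -- sudo systemctl cat kubelet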
	I1205 21:09:03.326038  328361 ssh_runner.go:195] Run: crio config
	I1205 21:09:03.366903  328361 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1205 21:09:03.366950  328361 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1205 21:09:03.366961  328361 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1205 21:09:03.366967  328361 command_runner.go:130] > #
	I1205 21:09:03.366980  328361 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1205 21:09:03.366990  328361 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1205 21:09:03.367001  328361 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1205 21:09:03.367012  328361 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1205 21:09:03.367019  328361 command_runner.go:130] > # reload'.
	I1205 21:09:03.367028  328361 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1205 21:09:03.367043  328361 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1205 21:09:03.367057  328361 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1205 21:09:03.367068  328361 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1205 21:09:03.367077  328361 command_runner.go:130] > [crio]
	I1205 21:09:03.367089  328361 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1205 21:09:03.367115  328361 command_runner.go:130] > # containers images, in this directory.
	I1205 21:09:03.367223  328361 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1205 21:09:03.367254  328361 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1205 21:09:03.367262  328361 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1205 21:09:03.367274  328361 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1205 21:09:03.367282  328361 command_runner.go:130] > # imagestore = ""
	I1205 21:09:03.367293  328361 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1205 21:09:03.367310  328361 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1205 21:09:03.367318  328361 command_runner.go:130] > storage_driver = "overlay"
	I1205 21:09:03.367328  328361 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1205 21:09:03.367340  328361 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1205 21:09:03.367347  328361 command_runner.go:130] > storage_option = [
	I1205 21:09:03.367658  328361 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1205 21:09:03.367669  328361 command_runner.go:130] > ]
	I1205 21:09:03.367675  328361 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1205 21:09:03.367692  328361 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1205 21:09:03.367701  328361 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1205 21:09:03.367708  328361 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1205 21:09:03.367718  328361 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1205 21:09:03.367726  328361 command_runner.go:130] > # always happen on a node reboot
	I1205 21:09:03.367736  328361 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1205 21:09:03.367757  328361 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1205 21:09:03.367766  328361 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1205 21:09:03.367771  328361 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1205 21:09:03.367776  328361 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1205 21:09:03.367783  328361 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1205 21:09:03.367793  328361 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1205 21:09:03.367802  328361 command_runner.go:130] > # internal_wipe = true
	I1205 21:09:03.367817  328361 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1205 21:09:03.367829  328361 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1205 21:09:03.367839  328361 command_runner.go:130] > # internal_repair = false
	I1205 21:09:03.367848  328361 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1205 21:09:03.367860  328361 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1205 21:09:03.367872  328361 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1205 21:09:03.367879  328361 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1205 21:09:03.367888  328361 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1205 21:09:03.367898  328361 command_runner.go:130] > [crio.api]
	I1205 21:09:03.367907  328361 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1205 21:09:03.367921  328361 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1205 21:09:03.367932  328361 command_runner.go:130] > # IP address on which the stream server will listen.
	I1205 21:09:03.367938  328361 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1205 21:09:03.367951  328361 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1205 21:09:03.367963  328361 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1205 21:09:03.367970  328361 command_runner.go:130] > # stream_port = "0"
	I1205 21:09:03.367982  328361 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1205 21:09:03.367997  328361 command_runner.go:130] > # stream_enable_tls = false
	I1205 21:09:03.368007  328361 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1205 21:09:03.368020  328361 command_runner.go:130] > # stream_idle_timeout = ""
	I1205 21:09:03.368030  328361 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1205 21:09:03.368043  328361 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1205 21:09:03.368052  328361 command_runner.go:130] > # minutes.
	I1205 21:09:03.368059  328361 command_runner.go:130] > # stream_tls_cert = ""
	I1205 21:09:03.368076  328361 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1205 21:09:03.368090  328361 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1205 21:09:03.368097  328361 command_runner.go:130] > # stream_tls_key = ""
	I1205 21:09:03.368110  328361 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1205 21:09:03.368122  328361 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1205 21:09:03.368139  328361 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1205 21:09:03.368146  328361 command_runner.go:130] > # stream_tls_ca = ""
	I1205 21:09:03.368156  328361 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1205 21:09:03.368166  328361 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1205 21:09:03.368178  328361 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1205 21:09:03.368189  328361 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I1205 21:09:03.368198  328361 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1205 21:09:03.368210  328361 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1205 21:09:03.368218  328361 command_runner.go:130] > [crio.runtime]
	I1205 21:09:03.368232  328361 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1205 21:09:03.368244  328361 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1205 21:09:03.368254  328361 command_runner.go:130] > # "nofile=1024:2048"
	I1205 21:09:03.368264  328361 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1205 21:09:03.368273  328361 command_runner.go:130] > # default_ulimits = [
	I1205 21:09:03.368279  328361 command_runner.go:130] > # ]
	I1205 21:09:03.368292  328361 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1205 21:09:03.368301  328361 command_runner.go:130] > # no_pivot = false
	I1205 21:09:03.368310  328361 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1205 21:09:03.368320  328361 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1205 21:09:03.368328  328361 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1205 21:09:03.368342  328361 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1205 21:09:03.368353  328361 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1205 21:09:03.368365  328361 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 21:09:03.368376  328361 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1205 21:09:03.368383  328361 command_runner.go:130] > # Cgroup setting for conmon
	I1205 21:09:03.368398  328361 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1205 21:09:03.368409  328361 command_runner.go:130] > conmon_cgroup = "pod"
	I1205 21:09:03.368423  328361 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1205 21:09:03.368435  328361 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1205 21:09:03.368445  328361 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 21:09:03.368454  328361 command_runner.go:130] > conmon_env = [
	I1205 21:09:03.368463  328361 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1205 21:09:03.368477  328361 command_runner.go:130] > ]
	I1205 21:09:03.368489  328361 command_runner.go:130] > # Additional environment variables to set for all the
	I1205 21:09:03.368499  328361 command_runner.go:130] > # containers. These are overridden if set in the
	I1205 21:09:03.368524  328361 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1205 21:09:03.368533  328361 command_runner.go:130] > # default_env = [
	I1205 21:09:03.368539  328361 command_runner.go:130] > # ]
	I1205 21:09:03.368551  328361 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1205 21:09:03.368561  328361 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I1205 21:09:03.368568  328361 command_runner.go:130] > # selinux = false
	I1205 21:09:03.368574  328361 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1205 21:09:03.368580  328361 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1205 21:09:03.368587  328361 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1205 21:09:03.368593  328361 command_runner.go:130] > # seccomp_profile = ""
	I1205 21:09:03.368605  328361 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1205 21:09:03.368622  328361 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1205 21:09:03.368635  328361 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1205 21:09:03.368647  328361 command_runner.go:130] > # which might increase security.
	I1205 21:09:03.368655  328361 command_runner.go:130] > # This option is currently deprecated,
	I1205 21:09:03.368667  328361 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1205 21:09:03.368675  328361 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1205 21:09:03.368688  328361 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1205 21:09:03.368702  328361 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1205 21:09:03.368715  328361 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1205 21:09:03.368729  328361 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1205 21:09:03.368740  328361 command_runner.go:130] > # This option supports live configuration reload.
	I1205 21:09:03.368751  328361 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1205 21:09:03.368765  328361 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1205 21:09:03.368773  328361 command_runner.go:130] > # the cgroup blockio controller.
	I1205 21:09:03.368782  328361 command_runner.go:130] > # blockio_config_file = ""
	I1205 21:09:03.368792  328361 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1205 21:09:03.368800  328361 command_runner.go:130] > # blockio parameters.
	I1205 21:09:03.368807  328361 command_runner.go:130] > # blockio_reload = false
	I1205 21:09:03.368819  328361 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1205 21:09:03.368828  328361 command_runner.go:130] > # irqbalance daemon.
	I1205 21:09:03.368837  328361 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1205 21:09:03.368850  328361 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1205 21:09:03.368866  328361 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1205 21:09:03.368877  328361 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1205 21:09:03.368898  328361 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1205 21:09:03.368912  328361 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1205 21:09:03.368926  328361 command_runner.go:130] > # This option supports live configuration reload.
	I1205 21:09:03.368936  328361 command_runner.go:130] > # rdt_config_file = ""
	I1205 21:09:03.368945  328361 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1205 21:09:03.368960  328361 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1205 21:09:03.368982  328361 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1205 21:09:03.368993  328361 command_runner.go:130] > # separate_pull_cgroup = ""
	I1205 21:09:03.369004  328361 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1205 21:09:03.369017  328361 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1205 21:09:03.369027  328361 command_runner.go:130] > # will be added.
	I1205 21:09:03.369034  328361 command_runner.go:130] > # default_capabilities = [
	I1205 21:09:03.369043  328361 command_runner.go:130] > # 	"CHOWN",
	I1205 21:09:03.369049  328361 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1205 21:09:03.369060  328361 command_runner.go:130] > # 	"FSETID",
	I1205 21:09:03.369066  328361 command_runner.go:130] > # 	"FOWNER",
	I1205 21:09:03.369074  328361 command_runner.go:130] > # 	"SETGID",
	I1205 21:09:03.369080  328361 command_runner.go:130] > # 	"SETUID",
	I1205 21:09:03.369089  328361 command_runner.go:130] > # 	"SETPCAP",
	I1205 21:09:03.369100  328361 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1205 21:09:03.369109  328361 command_runner.go:130] > # 	"KILL",
	I1205 21:09:03.369117  328361 command_runner.go:130] > # ]
	I1205 21:09:03.369132  328361 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1205 21:09:03.369147  328361 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1205 21:09:03.369158  328361 command_runner.go:130] > # add_inheritable_capabilities = false
	I1205 21:09:03.369170  328361 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1205 21:09:03.369183  328361 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 21:09:03.369193  328361 command_runner.go:130] > default_sysctls = [
	I1205 21:09:03.369200  328361 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1205 21:09:03.369208  328361 command_runner.go:130] > ]
	I1205 21:09:03.369216  328361 command_runner.go:130] > # List of devices on the host that a
	I1205 21:09:03.369229  328361 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1205 21:09:03.369237  328361 command_runner.go:130] > # allowed_devices = [
	I1205 21:09:03.369241  328361 command_runner.go:130] > # 	"/dev/fuse",
	I1205 21:09:03.369244  328361 command_runner.go:130] > # ]
	I1205 21:09:03.369249  328361 command_runner.go:130] > # List of additional devices. specified as
	I1205 21:09:03.369261  328361 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1205 21:09:03.369272  328361 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1205 21:09:03.369281  328361 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 21:09:03.369290  328361 command_runner.go:130] > # additional_devices = [
	I1205 21:09:03.369296  328361 command_runner.go:130] > # ]
	I1205 21:09:03.369308  328361 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1205 21:09:03.369322  328361 command_runner.go:130] > # cdi_spec_dirs = [
	I1205 21:09:03.369328  328361 command_runner.go:130] > # 	"/etc/cdi",
	I1205 21:09:03.369337  328361 command_runner.go:130] > # 	"/var/run/cdi",
	I1205 21:09:03.369342  328361 command_runner.go:130] > # ]
	I1205 21:09:03.369352  328361 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1205 21:09:03.369365  328361 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1205 21:09:03.369375  328361 command_runner.go:130] > # Defaults to false.
	I1205 21:09:03.369384  328361 command_runner.go:130] > # device_ownership_from_security_context = false
	I1205 21:09:03.369396  328361 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1205 21:09:03.369409  328361 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1205 21:09:03.369413  328361 command_runner.go:130] > # hooks_dir = [
	I1205 21:09:03.369419  328361 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1205 21:09:03.369424  328361 command_runner.go:130] > # ]
	I1205 21:09:03.369432  328361 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1205 21:09:03.369442  328361 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1205 21:09:03.369454  328361 command_runner.go:130] > # its default mounts from the following two files:
	I1205 21:09:03.369459  328361 command_runner.go:130] > #
	I1205 21:09:03.369469  328361 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1205 21:09:03.369482  328361 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1205 21:09:03.369494  328361 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1205 21:09:03.369503  328361 command_runner.go:130] > #
	I1205 21:09:03.369519  328361 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1205 21:09:03.369532  328361 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1205 21:09:03.369546  328361 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1205 21:09:03.369557  328361 command_runner.go:130] > #      only add mounts it finds in this file.
	I1205 21:09:03.369562  328361 command_runner.go:130] > #
	I1205 21:09:03.369573  328361 command_runner.go:130] > # default_mounts_file = ""
	I1205 21:09:03.369585  328361 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1205 21:09:03.369599  328361 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1205 21:09:03.369608  328361 command_runner.go:130] > pids_limit = 1024
	I1205 21:09:03.369619  328361 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1205 21:09:03.369632  328361 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1205 21:09:03.369643  328361 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1205 21:09:03.369659  328361 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1205 21:09:03.369668  328361 command_runner.go:130] > # log_size_max = -1
	I1205 21:09:03.369679  328361 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1205 21:09:03.369689  328361 command_runner.go:130] > # log_to_journald = false
	I1205 21:09:03.369699  328361 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1205 21:09:03.369710  328361 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1205 21:09:03.369724  328361 command_runner.go:130] > # Path to directory for container attach sockets.
	I1205 21:09:03.369735  328361 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1205 21:09:03.369744  328361 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1205 21:09:03.369753  328361 command_runner.go:130] > # bind_mount_prefix = ""
	I1205 21:09:03.369762  328361 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1205 21:09:03.369770  328361 command_runner.go:130] > # read_only = false
	I1205 21:09:03.369783  328361 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1205 21:09:03.369796  328361 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1205 21:09:03.369806  328361 command_runner.go:130] > # live configuration reload.
	I1205 21:09:03.369813  328361 command_runner.go:130] > # log_level = "info"
	I1205 21:09:03.369826  328361 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1205 21:09:03.369835  328361 command_runner.go:130] > # This option supports live configuration reload.
	I1205 21:09:03.369843  328361 command_runner.go:130] > # log_filter = ""
	I1205 21:09:03.369849  328361 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1205 21:09:03.369855  328361 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1205 21:09:03.369859  328361 command_runner.go:130] > # separated by comma.
	I1205 21:09:03.369866  328361 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 21:09:03.369873  328361 command_runner.go:130] > # uid_mappings = ""
	I1205 21:09:03.369879  328361 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1205 21:09:03.369885  328361 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1205 21:09:03.369888  328361 command_runner.go:130] > # separated by comma.
	I1205 21:09:03.369895  328361 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 21:09:03.369921  328361 command_runner.go:130] > # gid_mappings = ""
	I1205 21:09:03.369934  328361 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1205 21:09:03.369946  328361 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 21:09:03.369959  328361 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 21:09:03.369974  328361 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 21:09:03.369984  328361 command_runner.go:130] > # minimum_mappable_uid = -1
	I1205 21:09:03.369994  328361 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1205 21:09:03.370007  328361 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 21:09:03.370020  328361 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 21:09:03.370032  328361 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 21:09:03.370042  328361 command_runner.go:130] > # minimum_mappable_gid = -1
	I1205 21:09:03.370051  328361 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1205 21:09:03.370063  328361 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1205 21:09:03.370078  328361 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1205 21:09:03.370091  328361 command_runner.go:130] > # ctr_stop_timeout = 30
	I1205 21:09:03.370103  328361 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1205 21:09:03.370115  328361 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1205 21:09:03.370129  328361 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1205 21:09:03.370137  328361 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1205 21:09:03.370146  328361 command_runner.go:130] > drop_infra_ctr = false
	I1205 21:09:03.370157  328361 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1205 21:09:03.370169  328361 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1205 21:09:03.370182  328361 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1205 21:09:03.370186  328361 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1205 21:09:03.370197  328361 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1205 21:09:03.370211  328361 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1205 21:09:03.370224  328361 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1205 21:09:03.370235  328361 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1205 21:09:03.370245  328361 command_runner.go:130] > # shared_cpuset = ""
	I1205 21:09:03.370256  328361 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1205 21:09:03.370268  328361 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1205 21:09:03.370277  328361 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1205 21:09:03.370288  328361 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1205 21:09:03.370296  328361 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1205 21:09:03.370306  328361 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1205 21:09:03.370321  328361 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1205 21:09:03.370328  328361 command_runner.go:130] > # enable_criu_support = false
	I1205 21:09:03.370336  328361 command_runner.go:130] > # Enable/disable the generation of the container,
	I1205 21:09:03.370345  328361 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1205 21:09:03.370356  328361 command_runner.go:130] > # enable_pod_events = false
	I1205 21:09:03.370367  328361 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 21:09:03.370380  328361 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 21:09:03.370392  328361 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1205 21:09:03.370399  328361 command_runner.go:130] > # default_runtime = "runc"
	I1205 21:09:03.370410  328361 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1205 21:09:03.370424  328361 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1205 21:09:03.370438  328361 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1205 21:09:03.370449  328361 command_runner.go:130] > # creation as a file is not desired either.
	I1205 21:09:03.370462  328361 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1205 21:09:03.370479  328361 command_runner.go:130] > # the hostname is being managed dynamically.
	I1205 21:09:03.370491  328361 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1205 21:09:03.370500  328361 command_runner.go:130] > # ]
	I1205 21:09:03.370515  328361 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1205 21:09:03.370529  328361 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1205 21:09:03.370542  328361 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1205 21:09:03.370554  328361 command_runner.go:130] > # Each entry in the table should follow the format:
	I1205 21:09:03.370562  328361 command_runner.go:130] > #
	I1205 21:09:03.370573  328361 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1205 21:09:03.370584  328361 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1205 21:09:03.370609  328361 command_runner.go:130] > # runtime_type = "oci"
	I1205 21:09:03.370616  328361 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1205 21:09:03.370624  328361 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1205 21:09:03.370631  328361 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1205 21:09:03.370643  328361 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1205 21:09:03.370649  328361 command_runner.go:130] > # monitor_env = []
	I1205 21:09:03.370660  328361 command_runner.go:130] > # privileged_without_host_devices = false
	I1205 21:09:03.370667  328361 command_runner.go:130] > # allowed_annotations = []
	I1205 21:09:03.370676  328361 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1205 21:09:03.370682  328361 command_runner.go:130] > # Where:
	I1205 21:09:03.370690  328361 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1205 21:09:03.370700  328361 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1205 21:09:03.370709  328361 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1205 21:09:03.370722  328361 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1205 21:09:03.370732  328361 command_runner.go:130] > #   in $PATH.
	I1205 21:09:03.370742  328361 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1205 21:09:03.370753  328361 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1205 21:09:03.370763  328361 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1205 21:09:03.370771  328361 command_runner.go:130] > #   state.
	I1205 21:09:03.370781  328361 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1205 21:09:03.370789  328361 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I1205 21:09:03.370799  328361 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1205 21:09:03.370812  328361 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1205 21:09:03.370825  328361 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1205 21:09:03.370840  328361 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1205 21:09:03.370850  328361 command_runner.go:130] > #   The currently recognized values are:
	I1205 21:09:03.370860  328361 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1205 21:09:03.370875  328361 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1205 21:09:03.370892  328361 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1205 21:09:03.370905  328361 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1205 21:09:03.370921  328361 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1205 21:09:03.370934  328361 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1205 21:09:03.370947  328361 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1205 21:09:03.370957  328361 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1205 21:09:03.370969  328361 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1205 21:09:03.370982  328361 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1205 21:09:03.370993  328361 command_runner.go:130] > #   deprecated option "conmon".
	I1205 21:09:03.371008  328361 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1205 21:09:03.371019  328361 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1205 21:09:03.371032  328361 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1205 21:09:03.371043  328361 command_runner.go:130] > #   should be moved to the container's cgroup
	I1205 21:09:03.371058  328361 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1205 21:09:03.371070  328361 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1205 21:09:03.371083  328361 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1205 21:09:03.371095  328361 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1205 21:09:03.371104  328361 command_runner.go:130] > #
	I1205 21:09:03.371112  328361 command_runner.go:130] > # Using the seccomp notifier feature:
	I1205 21:09:03.371120  328361 command_runner.go:130] > #
	I1205 21:09:03.371129  328361 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1205 21:09:03.371142  328361 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1205 21:09:03.371151  328361 command_runner.go:130] > #
	I1205 21:09:03.371161  328361 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1205 21:09:03.371174  328361 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1205 21:09:03.371183  328361 command_runner.go:130] > #
	I1205 21:09:03.371193  328361 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1205 21:09:03.371203  328361 command_runner.go:130] > # feature.
	I1205 21:09:03.371208  328361 command_runner.go:130] > #
	I1205 21:09:03.371223  328361 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I1205 21:09:03.371235  328361 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1205 21:09:03.371249  328361 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1205 21:09:03.371259  328361 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1205 21:09:03.371269  328361 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1205 21:09:03.371278  328361 command_runner.go:130] > #
	I1205 21:09:03.371292  328361 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1205 21:09:03.371306  328361 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1205 21:09:03.371312  328361 command_runner.go:130] > #
	I1205 21:09:03.371326  328361 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1205 21:09:03.371337  328361 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1205 21:09:03.371346  328361 command_runner.go:130] > #
	I1205 21:09:03.371359  328361 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1205 21:09:03.371371  328361 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1205 21:09:03.371377  328361 command_runner.go:130] > # limitation.
	I1205 21:09:03.371382  328361 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1205 21:09:03.371392  328361 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1205 21:09:03.371398  328361 command_runner.go:130] > runtime_type = "oci"
	I1205 21:09:03.371408  328361 command_runner.go:130] > runtime_root = "/run/runc"
	I1205 21:09:03.371414  328361 command_runner.go:130] > runtime_config_path = ""
	I1205 21:09:03.371422  328361 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1205 21:09:03.371433  328361 command_runner.go:130] > monitor_cgroup = "pod"
	I1205 21:09:03.371440  328361 command_runner.go:130] > monitor_exec_cgroup = ""
	I1205 21:09:03.371449  328361 command_runner.go:130] > monitor_env = [
	I1205 21:09:03.371459  328361 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1205 21:09:03.371468  328361 command_runner.go:130] > ]
	I1205 21:09:03.371475  328361 command_runner.go:130] > privileged_without_host_devices = false
	I1205 21:09:03.371488  328361 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1205 21:09:03.371500  328361 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1205 21:09:03.371516  328361 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1205 21:09:03.371529  328361 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1205 21:09:03.371545  328361 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1205 21:09:03.371558  328361 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1205 21:09:03.371579  328361 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1205 21:09:03.371595  328361 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1205 21:09:03.371609  328361 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1205 21:09:03.371621  328361 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1205 21:09:03.371627  328361 command_runner.go:130] > # Example:
	I1205 21:09:03.371634  328361 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1205 21:09:03.371642  328361 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1205 21:09:03.371650  328361 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1205 21:09:03.371659  328361 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1205 21:09:03.371665  328361 command_runner.go:130] > # cpuset = 0
	I1205 21:09:03.371671  328361 command_runner.go:130] > # cpushares = "0-1"
	I1205 21:09:03.371677  328361 command_runner.go:130] > # Where:
	I1205 21:09:03.371686  328361 command_runner.go:130] > # The workload name is workload-type.
	I1205 21:09:03.371693  328361 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1205 21:09:03.371703  328361 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1205 21:09:03.371713  328361 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1205 21:09:03.371725  328361 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1205 21:09:03.371734  328361 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1205 21:09:03.371742  328361 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1205 21:09:03.371753  328361 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1205 21:09:03.371759  328361 command_runner.go:130] > # Default value is set to true
	I1205 21:09:03.371766  328361 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1205 21:09:03.371773  328361 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1205 21:09:03.371777  328361 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1205 21:09:03.371784  328361 command_runner.go:130] > # Default value is set to 'false'
	I1205 21:09:03.371791  328361 command_runner.go:130] > # disable_hostport_mapping = false
	I1205 21:09:03.371802  328361 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1205 21:09:03.371807  328361 command_runner.go:130] > #
	I1205 21:09:03.371819  328361 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1205 21:09:03.371828  328361 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1205 21:09:03.371842  328361 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1205 21:09:03.371855  328361 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1205 21:09:03.371863  328361 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1205 21:09:03.371872  328361 command_runner.go:130] > [crio.image]
	I1205 21:09:03.371886  328361 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1205 21:09:03.371896  328361 command_runner.go:130] > # default_transport = "docker://"
	I1205 21:09:03.371909  328361 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1205 21:09:03.371921  328361 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1205 21:09:03.371930  328361 command_runner.go:130] > # global_auth_file = ""
	I1205 21:09:03.371941  328361 command_runner.go:130] > # The image used to instantiate infra containers.
	I1205 21:09:03.371952  328361 command_runner.go:130] > # This option supports live configuration reload.
	I1205 21:09:03.371962  328361 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1205 21:09:03.371977  328361 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1205 21:09:03.371991  328361 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1205 21:09:03.372002  328361 command_runner.go:130] > # This option supports live configuration reload.
	I1205 21:09:03.372012  328361 command_runner.go:130] > # pause_image_auth_file = ""
	I1205 21:09:03.372116  328361 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1205 21:09:03.372146  328361 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1205 21:09:03.372168  328361 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1205 21:09:03.372180  328361 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1205 21:09:03.372272  328361 command_runner.go:130] > # pause_command = "/pause"
	I1205 21:09:03.372297  328361 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1205 21:09:03.372312  328361 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1205 21:09:03.372326  328361 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1205 21:09:03.372341  328361 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1205 21:09:03.372355  328361 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1205 21:09:03.372367  328361 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1205 21:09:03.372379  328361 command_runner.go:130] > # pinned_images = [
	I1205 21:09:03.372390  328361 command_runner.go:130] > # ]
	I1205 21:09:03.372404  328361 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1205 21:09:03.372420  328361 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1205 21:09:03.372435  328361 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1205 21:09:03.372446  328361 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1205 21:09:03.372454  328361 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1205 21:09:03.372459  328361 command_runner.go:130] > # signature_policy = ""
	I1205 21:09:03.372470  328361 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1205 21:09:03.372495  328361 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1205 21:09:03.372510  328361 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1205 21:09:03.372524  328361 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1205 21:09:03.372538  328361 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1205 21:09:03.372549  328361 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1205 21:09:03.372565  328361 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1205 21:09:03.372581  328361 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1205 21:09:03.372593  328361 command_runner.go:130] > # changing them here.
	I1205 21:09:03.372605  328361 command_runner.go:130] > # insecure_registries = [
	I1205 21:09:03.372615  328361 command_runner.go:130] > # ]
	I1205 21:09:03.372629  328361 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1205 21:09:03.372642  328361 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1205 21:09:03.372649  328361 command_runner.go:130] > # image_volumes = "mkdir"
	I1205 21:09:03.372656  328361 command_runner.go:130] > # Temporary directory to use for storing big files
	I1205 21:09:03.372664  328361 command_runner.go:130] > # big_files_temporary_dir = ""
	I1205 21:09:03.372675  328361 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1205 21:09:03.372683  328361 command_runner.go:130] > # CNI plugins.
	I1205 21:09:03.372689  328361 command_runner.go:130] > [crio.network]
	I1205 21:09:03.372700  328361 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1205 21:09:03.372719  328361 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I1205 21:09:03.372732  328361 command_runner.go:130] > # cni_default_network = ""
	I1205 21:09:03.372746  328361 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1205 21:09:03.372758  328361 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1205 21:09:03.372770  328361 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1205 21:09:03.372779  328361 command_runner.go:130] > # plugin_dirs = [
	I1205 21:09:03.372786  328361 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1205 21:09:03.372796  328361 command_runner.go:130] > # ]
	I1205 21:09:03.372807  328361 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1205 21:09:03.372817  328361 command_runner.go:130] > [crio.metrics]
	I1205 21:09:03.372828  328361 command_runner.go:130] > # Globally enable or disable metrics support.
	I1205 21:09:03.372835  328361 command_runner.go:130] > enable_metrics = true
	I1205 21:09:03.372847  328361 command_runner.go:130] > # Specify enabled metrics collectors.
	I1205 21:09:03.372859  328361 command_runner.go:130] > # Per default all metrics are enabled.
	I1205 21:09:03.372878  328361 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I1205 21:09:03.372891  328361 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1205 21:09:03.372905  328361 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1205 21:09:03.372915  328361 command_runner.go:130] > # metrics_collectors = [
	I1205 21:09:03.372923  328361 command_runner.go:130] > # 	"operations",
	I1205 21:09:03.372943  328361 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1205 21:09:03.372952  328361 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1205 21:09:03.372960  328361 command_runner.go:130] > # 	"operations_errors",
	I1205 21:09:03.372969  328361 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1205 21:09:03.372978  328361 command_runner.go:130] > # 	"image_pulls_by_name",
	I1205 21:09:03.372990  328361 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1205 21:09:03.372998  328361 command_runner.go:130] > # 	"image_pulls_failures",
	I1205 21:09:03.373008  328361 command_runner.go:130] > # 	"image_pulls_successes",
	I1205 21:09:03.373013  328361 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1205 21:09:03.373020  328361 command_runner.go:130] > # 	"image_layer_reuse",
	I1205 21:09:03.373025  328361 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1205 21:09:03.373030  328361 command_runner.go:130] > # 	"containers_oom_total",
	I1205 21:09:03.373035  328361 command_runner.go:130] > # 	"containers_oom",
	I1205 21:09:03.373040  328361 command_runner.go:130] > # 	"processes_defunct",
	I1205 21:09:03.373044  328361 command_runner.go:130] > # 	"operations_total",
	I1205 21:09:03.373049  328361 command_runner.go:130] > # 	"operations_latency_seconds",
	I1205 21:09:03.373054  328361 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1205 21:09:03.373060  328361 command_runner.go:130] > # 	"operations_errors_total",
	I1205 21:09:03.373065  328361 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1205 21:09:03.373072  328361 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1205 21:09:03.373077  328361 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1205 21:09:03.373082  328361 command_runner.go:130] > # 	"image_pulls_success_total",
	I1205 21:09:03.373087  328361 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1205 21:09:03.373092  328361 command_runner.go:130] > # 	"containers_oom_count_total",
	I1205 21:09:03.373098  328361 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1205 21:09:03.373102  328361 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1205 21:09:03.373115  328361 command_runner.go:130] > # ]
	I1205 21:09:03.373124  328361 command_runner.go:130] > # The port on which the metrics server will listen.
	I1205 21:09:03.373130  328361 command_runner.go:130] > # metrics_port = 9090
	I1205 21:09:03.373138  328361 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1205 21:09:03.373144  328361 command_runner.go:130] > # metrics_socket = ""
	I1205 21:09:03.373149  328361 command_runner.go:130] > # The certificate for the secure metrics server.
	I1205 21:09:03.373158  328361 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1205 21:09:03.373167  328361 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1205 21:09:03.373175  328361 command_runner.go:130] > # certificate on any modification event.
	I1205 21:09:03.373179  328361 command_runner.go:130] > # metrics_cert = ""
	I1205 21:09:03.373187  328361 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1205 21:09:03.373192  328361 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1205 21:09:03.373196  328361 command_runner.go:130] > # metrics_key = ""
	I1205 21:09:03.373203  328361 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1205 21:09:03.373207  328361 command_runner.go:130] > [crio.tracing]
	I1205 21:09:03.373213  328361 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1205 21:09:03.373219  328361 command_runner.go:130] > # enable_tracing = false
	I1205 21:09:03.373224  328361 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1205 21:09:03.373229  328361 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1205 21:09:03.373241  328361 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1205 21:09:03.373250  328361 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1205 21:09:03.373254  328361 command_runner.go:130] > # CRI-O NRI configuration.
	I1205 21:09:03.373260  328361 command_runner.go:130] > [crio.nri]
	I1205 21:09:03.373264  328361 command_runner.go:130] > # Globally enable or disable NRI.
	I1205 21:09:03.373268  328361 command_runner.go:130] > # enable_nri = false
	I1205 21:09:03.373272  328361 command_runner.go:130] > # NRI socket to listen on.
	I1205 21:09:03.373276  328361 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1205 21:09:03.373283  328361 command_runner.go:130] > # NRI plugin directory to use.
	I1205 21:09:03.373288  328361 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1205 21:09:03.373292  328361 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1205 21:09:03.373300  328361 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1205 21:09:03.373305  328361 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1205 21:09:03.373312  328361 command_runner.go:130] > # nri_disable_connections = false
	I1205 21:09:03.373317  328361 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1205 21:09:03.373322  328361 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1205 21:09:03.373331  328361 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1205 21:09:03.373338  328361 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1205 21:09:03.373345  328361 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1205 21:09:03.373351  328361 command_runner.go:130] > [crio.stats]
	I1205 21:09:03.373357  328361 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1205 21:09:03.373365  328361 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1205 21:09:03.373370  328361 command_runner.go:130] > # stats_collection_period = 0
	I1205 21:09:03.373394  328361 command_runner.go:130] ! time="2024-12-05 21:09:03.336802496Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1205 21:09:03.373413  328361 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
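	The block above is the effective CRI-O configuration dumped while provisioning the node; the uncommented keys visible in this section (pids_limit, drop_infra_ctr, pinns_path, pause_image, the [crio.runtime.runtimes.runc] table, enable_metrics) are the values actually in force. As a rough illustration of reading such overrides back out, here is a minimal Go sketch; it does a simplified line scan of a crio.conf-style file rather than real TOML parsing, and the /etc/crio/crio.conf path is an assumption, not necessarily where this run stores its config:

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// crioValue does a simplified scan of a crio.conf-style file for the first
	// uncommented "key = value" line and returns the value with quotes stripped.
	// It is not a real TOML parser and ignores table scoping.
	func crioValue(path, key string) (string, bool) {
		f, err := os.Open(path)
		if err != nil {
			return "", false
		}
		defer f.Close()
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if line == "" || strings.HasPrefix(line, "#") {
				continue
			}
			k, v, ok := strings.Cut(line, "=")
			if !ok || strings.TrimSpace(k) != key {
				continue
			}
			return strings.Trim(strings.TrimSpace(v), `"`), true
		}
		return "", false
	}

	func main() {
		for _, key := range []string{"pids_limit", "pause_image", "pinns_path"} {
			if v, ok := crioValue("/etc/crio/crio.conf", key); ok { // path is an assumption
				fmt.Printf("%s = %s\n", key, v)
			}
		}
	}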
	I1205 21:09:03.373495  328361 cni.go:84] Creating CNI manager for ""
	I1205 21:09:03.373506  328361 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1205 21:09:03.373527  328361 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:09:03.373556  328361 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.221 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-784478 NodeName:multinode-784478 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:09:03.373686  328361 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.221
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-784478"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.221"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.221"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
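	The kubeadm options line above feeds a Go template that produces the InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration documents shown here before they are copied to /var/tmp/minikube/kubeadm.yaml.new. The sketch below renders just the InitConfiguration fragment with Go's text/template; the template text and struct fields are illustrative stand-ins, not minikube's actual bootstrapper template:

	package main

	import (
		"os"
		"text/template"
	)

	// initCfg is an illustrative subset of the parameters a bootstrapper could
	// feed into a kubeadm template; the field names are made up for this sketch.
	type initCfg struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		CRISocket        string
	}

	const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	  taints: []
	`

	func main() {
		t := template.Must(template.New("init").Parse(initTmpl))
		cfg := initCfg{
			AdvertiseAddress: "192.168.39.221",
			BindPort:         8443,
			NodeName:         "multinode-784478",
			CRISocket:        "unix:///var/run/crio/crio.sock",
		}
		if err := t.Execute(os.Stdout, cfg); err != nil {
			panic(err)
		}
	}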
	I1205 21:09:03.373769  328361 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:09:03.385007  328361 command_runner.go:130] > kubeadm
	I1205 21:09:03.385037  328361 command_runner.go:130] > kubectl
	I1205 21:09:03.385042  328361 command_runner.go:130] > kubelet
	I1205 21:09:03.385070  328361 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:09:03.385124  328361 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:09:03.395115  328361 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1205 21:09:03.413601  328361 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:09:03.431025  328361 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I1205 21:09:03.448659  328361 ssh_runner.go:195] Run: grep 192.168.39.221	control-plane.minikube.internal$ /etc/hosts
	I1205 21:09:03.452751  328361 command_runner.go:130] > 192.168.39.221	control-plane.minikube.internal
	I1205 21:09:03.452856  328361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:09:03.602655  328361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:09:03.617016  328361 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478 for IP: 192.168.39.221
	I1205 21:09:03.617056  328361 certs.go:194] generating shared ca certs ...
	I1205 21:09:03.617090  328361 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:09:03.617338  328361 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:09:03.617420  328361 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:09:03.617440  328361 certs.go:256] generating profile certs ...
	I1205 21:09:03.617591  328361 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/client.key
	I1205 21:09:03.617720  328361 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/apiserver.key.cf7f1278
	I1205 21:09:03.617775  328361 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/proxy-client.key
	I1205 21:09:03.617789  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 21:09:03.617804  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 21:09:03.617820  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 21:09:03.617834  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 21:09:03.617850  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 21:09:03.617866  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 21:09:03.617881  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 21:09:03.617928  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 21:09:03.617995  328361 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:09:03.618032  328361 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:09:03.618045  328361 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:09:03.618070  328361 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:09:03.618097  328361 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:09:03.618125  328361 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:09:03.618168  328361 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:09:03.618203  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem -> /usr/share/ca-certificates/300765.pem
	I1205 21:09:03.618227  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /usr/share/ca-certificates/3007652.pem
	I1205 21:09:03.618249  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:09:03.618925  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:09:03.643833  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:09:03.667221  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:09:03.690858  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:09:03.716413  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 21:09:03.741888  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 21:09:03.766125  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:09:03.790422  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 21:09:03.814715  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:09:03.840125  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:09:03.866425  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:09:03.890707  328361 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:09:03.908638  328361 ssh_runner.go:195] Run: openssl version
	I1205 21:09:03.914789  328361 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1205 21:09:03.914872  328361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:09:03.925739  328361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:09:03.930764  328361 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:09:03.930898  328361 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:09:03.930964  328361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:09:03.936453  328361 command_runner.go:130] > b5213941
	I1205 21:09:03.936680  328361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:09:03.946269  328361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:09:03.957115  328361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:09:03.961809  328361 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:09:03.961850  328361 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:09:03.961925  328361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:09:03.967354  328361 command_runner.go:130] > 51391683
	I1205 21:09:03.967555  328361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:09:03.977618  328361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:09:03.989286  328361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:09:03.993976  328361 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:09:03.994014  328361 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:09:03.994066  328361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:09:03.999672  328361 command_runner.go:130] > 3ec20f2e
	I1205 21:09:03.999835  328361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:09:04.009807  328361 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:09:04.014820  328361 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:09:04.014844  328361 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1205 21:09:04.014850  328361 command_runner.go:130] > Device: 253,1	Inode: 8385582     Links: 1
	I1205 21:09:04.014856  328361 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 21:09:04.014862  328361 command_runner.go:130] > Access: 2024-12-05 21:02:12.156328037 +0000
	I1205 21:09:04.014867  328361 command_runner.go:130] > Modify: 2024-12-05 21:02:12.156328037 +0000
	I1205 21:09:04.014871  328361 command_runner.go:130] > Change: 2024-12-05 21:02:12.156328037 +0000
	I1205 21:09:04.014876  328361 command_runner.go:130] >  Birth: 2024-12-05 21:02:12.156328037 +0000
	I1205 21:09:04.015021  328361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:09:04.021043  328361 command_runner.go:130] > Certificate will not expire
	I1205 21:09:04.021139  328361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:09:04.027104  328361 command_runner.go:130] > Certificate will not expire
	I1205 21:09:04.027184  328361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:09:04.033341  328361 command_runner.go:130] > Certificate will not expire
	I1205 21:09:04.033429  328361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:09:04.039115  328361 command_runner.go:130] > Certificate will not expire
	I1205 21:09:04.039193  328361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:09:04.044953  328361 command_runner.go:130] > Certificate will not expire
	I1205 21:09:04.045063  328361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 21:09:04.050706  328361 command_runner.go:130] > Certificate will not expire
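	Each of the openssl x509 -noout -checkend 86400 runs above asks whether the given certificate expires within the next 86400 seconds (24 hours); "Certificate will not expire" means it remains valid for at least that long. A rough Go equivalent of that check using crypto/x509 (the certificate path below is just an example):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in a PEM file
	// expires within the given duration (an analogue of openssl -checkend).
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("Certificate will expire")
		} else {
			fmt.Println("Certificate will not expire")
		}
	}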
	I1205 21:09:04.050808  328361 kubeadm.go:392] StartCluster: {Name:multinode-784478 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-784478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.231 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:09:04.050962  328361 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:09:04.051041  328361 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:09:04.088733  328361 command_runner.go:130] > 7ae6c8c63b666abcd70534dc44b983d5a2ac7068a9c0d2735c8f333171e6704a
	I1205 21:09:04.088778  328361 command_runner.go:130] > 23954e6fc7030a61b309420f9c1ee92c19c3163be22fe724e2693f329bc258fa
	I1205 21:09:04.088789  328361 command_runner.go:130] > c4ae8603237b7363d337191b759dff81f3803834d7a78b887283bfec8f374c22
	I1205 21:09:04.088801  328361 command_runner.go:130] > c7439ccea25b3c22c01349b0281f7d919942c651f6cdd50c7420ac9900fdcf97
	I1205 21:09:04.088811  328361 command_runner.go:130] > a035a4c35a3d37f88dda7585de31856477aa11367b734a800adf1e640ee184b8
	I1205 21:09:04.088821  328361 command_runner.go:130] > d54a20ed49c2cb44f1ccfd184701fefdbac3bdd2f0552340c9b9e05fd665b99d
	I1205 21:09:04.088830  328361 command_runner.go:130] > 835f877ded47ea873773ee1bec77d608d253f45badc6fac50e78b4979967c1f3
	I1205 21:09:04.088842  328361 command_runner.go:130] > 34aa39e4fed308ba370200aec168d9a4ac4d311778e26424db4ffdfc05ab9516
	I1205 21:09:04.088851  328361 command_runner.go:130] > f28e9f0aedb62867545b31452c8208ffc66fc1d6d01dd71719292a8e8ed9d2f1
	I1205 21:09:04.088886  328361 cri.go:89] found id: "7ae6c8c63b666abcd70534dc44b983d5a2ac7068a9c0d2735c8f333171e6704a"
	I1205 21:09:04.088899  328361 cri.go:89] found id: "23954e6fc7030a61b309420f9c1ee92c19c3163be22fe724e2693f329bc258fa"
	I1205 21:09:04.088906  328361 cri.go:89] found id: "c4ae8603237b7363d337191b759dff81f3803834d7a78b887283bfec8f374c22"
	I1205 21:09:04.088910  328361 cri.go:89] found id: "c7439ccea25b3c22c01349b0281f7d919942c651f6cdd50c7420ac9900fdcf97"
	I1205 21:09:04.088918  328361 cri.go:89] found id: "a035a4c35a3d37f88dda7585de31856477aa11367b734a800adf1e640ee184b8"
	I1205 21:09:04.088921  328361 cri.go:89] found id: "d54a20ed49c2cb44f1ccfd184701fefdbac3bdd2f0552340c9b9e05fd665b99d"
	I1205 21:09:04.088925  328361 cri.go:89] found id: "835f877ded47ea873773ee1bec77d608d253f45badc6fac50e78b4979967c1f3"
	I1205 21:09:04.088928  328361 cri.go:89] found id: "34aa39e4fed308ba370200aec168d9a4ac4d311778e26424db4ffdfc05ab9516"
	I1205 21:09:04.088931  328361 cri.go:89] found id: "f28e9f0aedb62867545b31452c8208ffc66fc1d6d01dd71719292a8e8ed9d2f1"
	I1205 21:09:04.088937  328361 cri.go:89] found id: ""
	I1205 21:09:04.088993  328361 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
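
The container IDs in the truncated log above come from "crictl ps -a --quiet" with a label filter on the kube-system namespace. A rough stand-alone Go equivalent, using the same flags as the command shown but run directly rather than over SSH:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command the log runs via ssh_runner; requires crictl on the host.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		for _, id := range strings.Fields(string(out)) {
			fmt.Println("found id:", id)
		}
	}
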
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-784478 -n multinode-784478
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-784478 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (334.78s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (145.29s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 stop
E1205 21:11:19.391877  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:11:49.076243  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-784478 stop: exit status 82 (2m0.499413854s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-784478-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-784478 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 status
multinode_test.go:351: (dbg) Done: out/minikube-linux-amd64 -p multinode-784478 status: (18.71455798s)
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 status --alsologtostderr
E1205 21:13:16.319251  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:358: (dbg) Done: out/minikube-linux-amd64 -p multinode-784478 status --alsologtostderr: (3.391262027s)
multinode_test.go:364: incorrect number of stopped hosts: args "out/minikube-linux-amd64 -p multinode-784478 status --alsologtostderr": 
multinode_test.go:368: incorrect number of stopped kubelets: args "out/minikube-linux-amd64 -p multinode-784478 status --alsologtostderr": 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-784478 -n multinode-784478
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-784478 logs -n 25: (2.04060125s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-784478 ssh -n                                                                 | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-784478 cp multinode-784478-m02:/home/docker/cp-test.txt                       | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478:/home/docker/cp-test_multinode-784478-m02_multinode-784478.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n                                                                 | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n multinode-784478 sudo cat                                       | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | /home/docker/cp-test_multinode-784478-m02_multinode-784478.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-784478 cp multinode-784478-m02:/home/docker/cp-test.txt                       | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m03:/home/docker/cp-test_multinode-784478-m02_multinode-784478-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n                                                                 | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n multinode-784478-m03 sudo cat                                   | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | /home/docker/cp-test_multinode-784478-m02_multinode-784478-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-784478 cp testdata/cp-test.txt                                                | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n                                                                 | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-784478 cp multinode-784478-m03:/home/docker/cp-test.txt                       | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile478551597/001/cp-test_multinode-784478-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n                                                                 | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-784478 cp multinode-784478-m03:/home/docker/cp-test.txt                       | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478:/home/docker/cp-test_multinode-784478-m03_multinode-784478.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n                                                                 | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n multinode-784478 sudo cat                                       | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | /home/docker/cp-test_multinode-784478-m03_multinode-784478.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-784478 cp multinode-784478-m03:/home/docker/cp-test.txt                       | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m02:/home/docker/cp-test_multinode-784478-m03_multinode-784478-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n                                                                 | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n multinode-784478-m02 sudo cat                                   | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | /home/docker/cp-test_multinode-784478-m03_multinode-784478-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-784478 node stop m03                                                          | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	| node    | multinode-784478 node start                                                             | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:05 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-784478                                                                | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:05 UTC |                     |
	| stop    | -p multinode-784478                                                                     | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:05 UTC |                     |
	| start   | -p multinode-784478                                                                     | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:07 UTC | 05 Dec 24 21:10 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-784478                                                                | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:10 UTC |                     |
	| node    | multinode-784478 node delete                                                            | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:10 UTC | 05 Dec 24 21:10 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-784478 stop                                                                   | multinode-784478 | jenkins | v1.34.0 | 05 Dec 24 21:10 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 21:07:20
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
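	
Every line below follows that klog-style header. For scraping these logs, a small Go sketch that pulls the fields apart; the capture groups and their names are mine, not anything minikube defines.

	package main

	import (
		"fmt"
		"regexp"
	)

	// severity, MMDD date, time, thread id, file:line, message
	var klogLine = regexp.MustCompile(
		`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)

	func main() {
		line := "I1205 21:07:20.239334  328361 out.go:345] Setting OutFile to fd 1 ..."
		m := klogLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("no match")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6])
	}
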
	I1205 21:07:20.239334  328361 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:07:20.239472  328361 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:07:20.239483  328361 out.go:358] Setting ErrFile to fd 2...
	I1205 21:07:20.239487  328361 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:07:20.239662  328361 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 21:07:20.240255  328361 out.go:352] Setting JSON to false
	I1205 21:07:20.241271  328361 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":13788,"bootTime":1733419052,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 21:07:20.241344  328361 start.go:139] virtualization: kvm guest
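
The hostinfo blob two lines up is plain JSON, so it unmarshals into a small struct if you need it programmatically. The field set below is trimmed to a few of the keys shown in the log, purely for illustration:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// Only a subset of the keys shown in the hostinfo log line above.
	type hostInfo struct {
		Hostname             string `json:"hostname"`
		Uptime               uint64 `json:"uptime"`
		OS                   string `json:"os"`
		Platform             string `json:"platform"`
		PlatformVersion      string `json:"platformVersion"`
		KernelVersion        string `json:"kernelVersion"`
		VirtualizationSystem string `json:"virtualizationSystem"`
		VirtualizationRole   string `json:"virtualizationRole"`
	}

	func main() {
		raw := `{"hostname":"ubuntu-20-agent-2","uptime":13788,"os":"linux","platform":"ubuntu","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","virtualizationSystem":"kvm","virtualizationRole":"guest"}`
		var hi hostInfo
		if err := json.Unmarshal([]byte(raw), &hi); err != nil {
			fmt.Println("unmarshal failed:", err)
			return
		}
		fmt.Printf("%+v\n", hi)
	}
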
	I1205 21:07:20.243929  328361 out.go:177] * [multinode-784478] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 21:07:20.245678  328361 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 21:07:20.245687  328361 notify.go:220] Checking for updates...
	I1205 21:07:20.248790  328361 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 21:07:20.250329  328361 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:07:20.251616  328361 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:07:20.253206  328361 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 21:07:20.254658  328361 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 21:07:20.256585  328361 config.go:182] Loaded profile config "multinode-784478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:07:20.256740  328361 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 21:07:20.257439  328361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:07:20.257532  328361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:07:20.275213  328361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43441
	I1205 21:07:20.275881  328361 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:07:20.276491  328361 main.go:141] libmachine: Using API Version  1
	I1205 21:07:20.276515  328361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:07:20.276984  328361 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:07:20.277213  328361 main.go:141] libmachine: (multinode-784478) Calling .DriverName
	I1205 21:07:20.317149  328361 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 21:07:20.318681  328361 start.go:297] selected driver: kvm2
	I1205 21:07:20.318708  328361 start.go:901] validating driver "kvm2" against &{Name:multinode-784478 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.31.2 ClusterName:multinode-784478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.231 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress
:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:dock
er BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:07:20.318880  328361 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 21:07:20.319262  328361 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:07:20.319373  328361 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 21:07:20.336226  328361 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 21:07:20.337071  328361 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:07:20.337110  328361 cni.go:84] Creating CNI manager for ""
	I1205 21:07:20.337148  328361 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1205 21:07:20.337216  328361 start.go:340] cluster config:
	{Name:multinode-784478 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-784478 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.231 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisio
ner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQem
uFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:07:20.337346  328361 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:07:20.339361  328361 out.go:177] * Starting "multinode-784478" primary control-plane node in "multinode-784478" cluster
	I1205 21:07:20.341173  328361 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:07:20.341231  328361 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 21:07:20.341244  328361 cache.go:56] Caching tarball of preloaded images
	I1205 21:07:20.341411  328361 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 21:07:20.341429  328361 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 21:07:20.341574  328361 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/config.json ...
	I1205 21:07:20.341825  328361 start.go:360] acquireMachinesLock for multinode-784478: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:07:20.341889  328361 start.go:364] duration metric: took 40.294µs to acquireMachinesLock for "multinode-784478"
	I1205 21:07:20.341931  328361 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:07:20.341941  328361 fix.go:54] fixHost starting: 
	I1205 21:07:20.342247  328361 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:07:20.342304  328361 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:07:20.358047  328361 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43267
	I1205 21:07:20.358705  328361 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:07:20.359295  328361 main.go:141] libmachine: Using API Version  1
	I1205 21:07:20.359323  328361 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:07:20.359699  328361 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:07:20.359898  328361 main.go:141] libmachine: (multinode-784478) Calling .DriverName
	I1205 21:07:20.360044  328361 main.go:141] libmachine: (multinode-784478) Calling .GetState
	I1205 21:07:20.361794  328361 fix.go:112] recreateIfNeeded on multinode-784478: state=Running err=<nil>
	W1205 21:07:20.361821  328361 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:07:20.364072  328361 out.go:177] * Updating the running kvm2 "multinode-784478" VM ...
	I1205 21:07:20.365667  328361 machine.go:93] provisionDockerMachine start ...
	I1205 21:07:20.365690  328361 main.go:141] libmachine: (multinode-784478) Calling .DriverName
	I1205 21:07:20.365962  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHHostname
	I1205 21:07:20.368832  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.369315  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:07:20.369357  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.369483  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHPort
	I1205 21:07:20.369707  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:07:20.369866  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:07:20.370000  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHUsername
	I1205 21:07:20.370166  328361 main.go:141] libmachine: Using SSH client type: native
	I1205 21:07:20.370428  328361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1205 21:07:20.370443  328361 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:07:20.474439  328361 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-784478
	
	I1205 21:07:20.474472  328361 main.go:141] libmachine: (multinode-784478) Calling .GetMachineName
	I1205 21:07:20.474750  328361 buildroot.go:166] provisioning hostname "multinode-784478"
	I1205 21:07:20.474777  328361 main.go:141] libmachine: (multinode-784478) Calling .GetMachineName
	I1205 21:07:20.474971  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHHostname
	I1205 21:07:20.478126  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.478549  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:07:20.478648  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.478906  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHPort
	I1205 21:07:20.479141  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:07:20.479320  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:07:20.479485  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHUsername
	I1205 21:07:20.479682  328361 main.go:141] libmachine: Using SSH client type: native
	I1205 21:07:20.479887  328361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1205 21:07:20.479904  328361 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-784478 && echo "multinode-784478" | sudo tee /etc/hostname
	I1205 21:07:20.597040  328361 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-784478
	
	I1205 21:07:20.597070  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHHostname
	I1205 21:07:20.600395  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.600683  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:07:20.600710  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.600960  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHPort
	I1205 21:07:20.601235  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:07:20.601541  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:07:20.601762  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHUsername
	I1205 21:07:20.601974  328361 main.go:141] libmachine: Using SSH client type: native
	I1205 21:07:20.602188  328361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1205 21:07:20.602212  328361 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-784478' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-784478/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-784478' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:07:20.703175  328361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
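
The shell snippet above is an idempotent /etc/hosts fix-up: do nothing if a line already ends with the hostname, otherwise rewrite an existing 127.0.1.1 entry or append one. A local Go sketch of the same logic; it operates on whatever path it is given, so point it at a scratch copy rather than the real /etc/hosts.

	package main

	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)

	// ensureHostsEntry mirrors the grep/sed/tee logic from the SSH command above.
	func ensureHostsEntry(path, hostname string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		text := string(data)
		present := regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`)
		if present.MatchString(text) {
			return nil // hostname already mapped
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(text) {
			text = loopback.ReplaceAllString(text, "127.0.1.1 "+hostname)
		} else {
			text = strings.TrimRight(text, "\n") + "\n127.0.1.1 " + hostname + "\n"
		}
		return os.WriteFile(path, []byte(text), 0644)
	}

	func main() {
		// "hosts.copy" is an illustrative scratch file, not a minikube path.
		if err := ensureHostsEntry("hosts.copy", "multinode-784478"); err != nil {
			fmt.Println("error:", err)
		}
	}
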
	I1205 21:07:20.703218  328361 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:07:20.703267  328361 buildroot.go:174] setting up certificates
	I1205 21:07:20.703286  328361 provision.go:84] configureAuth start
	I1205 21:07:20.703302  328361 main.go:141] libmachine: (multinode-784478) Calling .GetMachineName
	I1205 21:07:20.703688  328361 main.go:141] libmachine: (multinode-784478) Calling .GetIP
	I1205 21:07:20.707150  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.707577  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:07:20.707609  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.707788  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHHostname
	I1205 21:07:20.710730  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.711165  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:07:20.711208  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.711355  328361 provision.go:143] copyHostCerts
	I1205 21:07:20.711390  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:07:20.711423  328361 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:07:20.711441  328361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:07:20.711512  328361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:07:20.712072  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:07:20.712198  328361 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:07:20.712217  328361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:07:20.712323  328361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:07:20.712441  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:07:20.712493  328361 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:07:20.712510  328361 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:07:20.712571  328361 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:07:20.712683  328361 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.multinode-784478 san=[127.0.0.1 192.168.39.221 localhost minikube multinode-784478]
	I1205 21:07:20.874659  328361 provision.go:177] copyRemoteCerts
	I1205 21:07:20.874730  328361 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:07:20.874776  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHHostname
	I1205 21:07:20.877768  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.878191  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:07:20.878230  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:20.878441  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHPort
	I1205 21:07:20.878683  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:07:20.878831  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHUsername
	I1205 21:07:20.879030  328361 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/multinode-784478/id_rsa Username:docker}
	I1205 21:07:20.960728  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I1205 21:07:20.960827  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:07:20.986382  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem -> /etc/docker/server.pem
	I1205 21:07:20.986475  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I1205 21:07:21.010950  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I1205 21:07:21.011041  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 21:07:21.040118  328361 provision.go:87] duration metric: took 336.813011ms to configureAuth
	I1205 21:07:21.040154  328361 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:07:21.040380  328361 config.go:182] Loaded profile config "multinode-784478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:07:21.040463  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHHostname
	I1205 21:07:21.043437  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:21.043830  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:07:21.043867  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:07:21.044007  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHPort
	I1205 21:07:21.044240  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:07:21.044442  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:07:21.044638  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHUsername
	I1205 21:07:21.044823  328361 main.go:141] libmachine: Using SSH client type: native
	I1205 21:07:21.045052  328361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1205 21:07:21.045084  328361 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:08:51.750999  328361 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:08:51.751038  328361 machine.go:96] duration metric: took 1m31.385355738s to provisionDockerMachine
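
Most of that 1m31s sits inside the single SSH command issued at 21:07:21 and returning at 21:08:51, which writes the crio.minikube drop-in and then restarts CRI-O, suggesting the service restart is the slow part. A sketch of that step as one local shell invocation; the drop-in path and option string are copied from the log, and this only makes sense inside a disposable minikube guest.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same shell the log sends over SSH; run only inside a throwaway VM.
		script := "sudo mkdir -p /etc/sysconfig && printf %s \"\n" +
			"CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" +
			"\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
		out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
		fmt.Println(string(out), err)
	}
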
	I1205 21:08:51.751057  328361 start.go:293] postStartSetup for "multinode-784478" (driver="kvm2")
	I1205 21:08:51.751082  328361 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:08:51.751115  328361 main.go:141] libmachine: (multinode-784478) Calling .DriverName
	I1205 21:08:51.751536  328361 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:08:51.751567  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHHostname
	I1205 21:08:51.755471  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:08:51.755922  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:08:51.755945  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:08:51.756171  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHPort
	I1205 21:08:51.756424  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:08:51.756621  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHUsername
	I1205 21:08:51.756793  328361 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/multinode-784478/id_rsa Username:docker}
	I1205 21:08:51.837314  328361 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:08:51.842037  328361 command_runner.go:130] > NAME=Buildroot
	I1205 21:08:51.842074  328361 command_runner.go:130] > VERSION=2023.02.9-dirty
	I1205 21:08:51.842081  328361 command_runner.go:130] > ID=buildroot
	I1205 21:08:51.842087  328361 command_runner.go:130] > VERSION_ID=2023.02.9
	I1205 21:08:51.842092  328361 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I1205 21:08:51.842368  328361 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:08:51.842407  328361 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:08:51.842494  328361 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:08:51.842587  328361 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:08:51.842599  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /etc/ssl/certs/3007652.pem
	I1205 21:08:51.842713  328361 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:08:51.852894  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:08:51.878715  328361 start.go:296] duration metric: took 127.625908ms for postStartSetup
	I1205 21:08:51.878785  328361 fix.go:56] duration metric: took 1m31.536844462s for fixHost
	I1205 21:08:51.878826  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHHostname
	I1205 21:08:51.881995  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:08:51.882389  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:08:51.882426  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:08:51.882655  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHPort
	I1205 21:08:51.882940  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:08:51.883147  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:08:51.883386  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHUsername
	I1205 21:08:51.883560  328361 main.go:141] libmachine: Using SSH client type: native
	I1205 21:08:51.883788  328361 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.221 22 <nil> <nil>}
	I1205 21:08:51.883802  328361 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:08:51.983007  328361 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733432931.960646715
	
	I1205 21:08:51.983034  328361 fix.go:216] guest clock: 1733432931.960646715
	I1205 21:08:51.983045  328361 fix.go:229] Guest: 2024-12-05 21:08:51.960646715 +0000 UTC Remote: 2024-12-05 21:08:51.878792101 +0000 UTC m=+91.682814268 (delta=81.854614ms)
	I1205 21:08:51.983085  328361 fix.go:200] guest clock delta is within tolerance: 81.854614ms
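
The clock check above runs "date +%s.%N" on the guest, parses the result, and compares it with the host-side timestamp taken around the SSH call (21:08:51.960646715 guest vs 21:08:51.878792101 host, the logged delta of 81.854614ms). A local sketch of the same arithmetic; the one-second tolerance below is an illustrative value, not necessarily the one minikube uses.

	package main

	import (
		"fmt"
		"os/exec"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		// Run `date +%s.%N` (locally here, over SSH in the log) and parse it.
		out, err := exec.Command("date", "+%s.%N").Output()
		if err != nil {
			fmt.Println("date failed:", err)
			return
		}
		secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
		if err != nil {
			fmt.Println("parse failed:", err)
			return
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := guest.Sub(time.Now())
		within := delta < time.Second && delta > -time.Second
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, within)
	}
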
	I1205 21:08:51.983092  328361 start.go:83] releasing machines lock for "multinode-784478", held for 1m31.641172949s
	I1205 21:08:51.983115  328361 main.go:141] libmachine: (multinode-784478) Calling .DriverName
	I1205 21:08:51.983448  328361 main.go:141] libmachine: (multinode-784478) Calling .GetIP
	I1205 21:08:51.986634  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:08:51.987008  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:08:51.987036  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:08:51.987278  328361 main.go:141] libmachine: (multinode-784478) Calling .DriverName
	I1205 21:08:51.987880  328361 main.go:141] libmachine: (multinode-784478) Calling .DriverName
	I1205 21:08:51.988118  328361 main.go:141] libmachine: (multinode-784478) Calling .DriverName
	I1205 21:08:51.988209  328361 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:08:51.988272  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHHostname
	I1205 21:08:51.988421  328361 ssh_runner.go:195] Run: cat /version.json
	I1205 21:08:51.988445  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHHostname
	I1205 21:08:51.991155  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:08:51.991405  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:08:51.991556  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:08:51.991594  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:08:51.991743  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHPort
	I1205 21:08:51.991846  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:08:51.991871  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:08:51.991930  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:08:51.992016  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHPort
	I1205 21:08:51.992103  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHUsername
	I1205 21:08:51.992160  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:08:51.992225  328361 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/multinode-784478/id_rsa Username:docker}
	I1205 21:08:51.992257  328361 main.go:141] libmachine: (multinode-784478) Calling .GetSSHUsername
	I1205 21:08:51.992387  328361 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/multinode-784478/id_rsa Username:docker}
	I1205 21:08:52.092962  328361 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I1205 21:08:52.093565  328361 command_runner.go:130] > {"iso_version": "v1.34.0-1730913550-19917", "kicbase_version": "v0.0.45-1730888964-19917", "minikube_version": "v1.34.0", "commit": "72f43dde5d92c8ae490d0727dad53fb3ed6aa41e"}
	I1205 21:08:52.093755  328361 ssh_runner.go:195] Run: systemctl --version
	I1205 21:08:52.099794  328361 command_runner.go:130] > systemd 252 (252)
	I1205 21:08:52.099857  328361 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I1205 21:08:52.100010  328361 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:08:52.250629  328361 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1205 21:08:52.259587  328361 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1205 21:08:52.259653  328361 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:08:52.259716  328361 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:08:52.270297  328361 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 21:08:52.270330  328361 start.go:495] detecting cgroup driver to use...
	I1205 21:08:52.270409  328361 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:08:52.287589  328361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:08:52.302176  328361 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:08:52.302261  328361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:08:52.316296  328361 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:08:52.330708  328361 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:08:52.483558  328361 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:08:52.689242  328361 docker.go:233] disabling docker service ...
	I1205 21:08:52.689343  328361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:08:52.714808  328361 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:08:52.736730  328361 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:08:52.938849  328361 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:08:53.097860  328361 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:08:53.112496  328361 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:08:53.131756  328361 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I1205 21:08:53.131822  328361 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:08:53.131881  328361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:08:53.142932  328361 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:08:53.143029  328361 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:08:53.154022  328361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:08:53.164767  328361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:08:53.175656  328361 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:08:53.186854  328361 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:08:53.197394  328361 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:08:53.208344  328361 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:08:53.218921  328361 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:08:53.229322  328361 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1205 21:08:53.229413  328361 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:08:53.239421  328361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:08:53.379606  328361 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:09:03.130509  328361 ssh_runner.go:235] Completed: sudo systemctl restart crio: (9.750854387s)
	I1205 21:09:03.130559  328361 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:09:03.130624  328361 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:09:03.135644  328361 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I1205 21:09:03.135682  328361 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1205 21:09:03.135692  328361 command_runner.go:130] > Device: 0,22	Inode: 1351        Links: 1
	I1205 21:09:03.135701  328361 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 21:09:03.135706  328361 command_runner.go:130] > Access: 2024-12-05 21:09:02.992248644 +0000
	I1205 21:09:03.135722  328361 command_runner.go:130] > Modify: 2024-12-05 21:09:02.953246299 +0000
	I1205 21:09:03.135734  328361 command_runner.go:130] > Change: 2024-12-05 21:09:02.953246299 +0000
	I1205 21:09:03.135741  328361 command_runner.go:130] >  Birth: -
	I1205 21:09:03.135795  328361 start.go:563] Will wait 60s for crictl version
	I1205 21:09:03.135859  328361 ssh_runner.go:195] Run: which crictl
	I1205 21:09:03.139750  328361 command_runner.go:130] > /usr/bin/crictl
	I1205 21:09:03.139860  328361 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:09:03.174786  328361 command_runner.go:130] > Version:  0.1.0
	I1205 21:09:03.174817  328361 command_runner.go:130] > RuntimeName:  cri-o
	I1205 21:09:03.174823  328361 command_runner.go:130] > RuntimeVersion:  1.29.1
	I1205 21:09:03.174831  328361 command_runner.go:130] > RuntimeApiVersion:  v1
	I1205 21:09:03.175864  328361 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:09:03.175976  328361 ssh_runner.go:195] Run: crio --version
	I1205 21:09:03.203090  328361 command_runner.go:130] > crio version 1.29.1
	I1205 21:09:03.203119  328361 command_runner.go:130] > Version:        1.29.1
	I1205 21:09:03.203127  328361 command_runner.go:130] > GitCommit:      unknown
	I1205 21:09:03.203134  328361 command_runner.go:130] > GitCommitDate:  unknown
	I1205 21:09:03.203140  328361 command_runner.go:130] > GitTreeState:   clean
	I1205 21:09:03.203148  328361 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1205 21:09:03.203154  328361 command_runner.go:130] > GoVersion:      go1.21.6
	I1205 21:09:03.203169  328361 command_runner.go:130] > Compiler:       gc
	I1205 21:09:03.203175  328361 command_runner.go:130] > Platform:       linux/amd64
	I1205 21:09:03.203180  328361 command_runner.go:130] > Linkmode:       dynamic
	I1205 21:09:03.203187  328361 command_runner.go:130] > BuildTags:      
	I1205 21:09:03.203193  328361 command_runner.go:130] >   containers_image_ostree_stub
	I1205 21:09:03.203200  328361 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1205 21:09:03.203206  328361 command_runner.go:130] >   btrfs_noversion
	I1205 21:09:03.203214  328361 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1205 21:09:03.203225  328361 command_runner.go:130] >   libdm_no_deferred_remove
	I1205 21:09:03.203234  328361 command_runner.go:130] >   seccomp
	I1205 21:09:03.203241  328361 command_runner.go:130] > LDFlags:          unknown
	I1205 21:09:03.203248  328361 command_runner.go:130] > SeccompEnabled:   true
	I1205 21:09:03.203255  328361 command_runner.go:130] > AppArmorEnabled:  false
	I1205 21:09:03.204418  328361 ssh_runner.go:195] Run: crio --version
	I1205 21:09:03.233671  328361 command_runner.go:130] > crio version 1.29.1
	I1205 21:09:03.233701  328361 command_runner.go:130] > Version:        1.29.1
	I1205 21:09:03.233707  328361 command_runner.go:130] > GitCommit:      unknown
	I1205 21:09:03.233711  328361 command_runner.go:130] > GitCommitDate:  unknown
	I1205 21:09:03.233716  328361 command_runner.go:130] > GitTreeState:   clean
	I1205 21:09:03.233722  328361 command_runner.go:130] > BuildDate:      2024-11-06T23:09:37Z
	I1205 21:09:03.233726  328361 command_runner.go:130] > GoVersion:      go1.21.6
	I1205 21:09:03.233731  328361 command_runner.go:130] > Compiler:       gc
	I1205 21:09:03.233739  328361 command_runner.go:130] > Platform:       linux/amd64
	I1205 21:09:03.233744  328361 command_runner.go:130] > Linkmode:       dynamic
	I1205 21:09:03.233751  328361 command_runner.go:130] > BuildTags:      
	I1205 21:09:03.233758  328361 command_runner.go:130] >   containers_image_ostree_stub
	I1205 21:09:03.233765  328361 command_runner.go:130] >   exclude_graphdriver_btrfs
	I1205 21:09:03.233772  328361 command_runner.go:130] >   btrfs_noversion
	I1205 21:09:03.233779  328361 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I1205 21:09:03.233790  328361 command_runner.go:130] >   libdm_no_deferred_remove
	I1205 21:09:03.233796  328361 command_runner.go:130] >   seccomp
	I1205 21:09:03.233803  328361 command_runner.go:130] > LDFlags:          unknown
	I1205 21:09:03.233810  328361 command_runner.go:130] > SeccompEnabled:   true
	I1205 21:09:03.233820  328361 command_runner.go:130] > AppArmorEnabled:  false
	I1205 21:09:03.236923  328361 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 21:09:03.238479  328361 main.go:141] libmachine: (multinode-784478) Calling .GetIP
	I1205 21:09:03.241924  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:09:03.242193  328361 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:09:03.242226  328361 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:09:03.242523  328361 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 21:09:03.246805  328361 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I1205 21:09:03.246924  328361 kubeadm.go:883] updating cluster {Name:multinode-784478 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-784478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.231 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:09:03.247074  328361 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:09:03.247116  328361 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:09:03.290202  328361 command_runner.go:130] > {
	I1205 21:09:03.290238  328361 command_runner.go:130] >   "images": [
	I1205 21:09:03.290244  328361 command_runner.go:130] >     {
	I1205 21:09:03.290256  328361 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1205 21:09:03.290264  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.290274  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1205 21:09:03.290280  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290287  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.290314  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1205 21:09:03.290331  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1205 21:09:03.290336  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290342  328361 command_runner.go:130] >       "size": "94965812",
	I1205 21:09:03.290346  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.290350  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.290363  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.290375  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.290383  328361 command_runner.go:130] >     },
	I1205 21:09:03.290392  328361 command_runner.go:130] >     {
	I1205 21:09:03.290403  328361 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1205 21:09:03.290412  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.290421  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1205 21:09:03.290428  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290434  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.290448  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1205 21:09:03.290462  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1205 21:09:03.290468  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290472  328361 command_runner.go:130] >       "size": "94958644",
	I1205 21:09:03.290477  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.290486  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.290490  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.290495  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.290498  328361 command_runner.go:130] >     },
	I1205 21:09:03.290501  328361 command_runner.go:130] >     {
	I1205 21:09:03.290508  328361 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1205 21:09:03.290515  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.290521  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1205 21:09:03.290539  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290546  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.290553  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1205 21:09:03.290561  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1205 21:09:03.290565  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290570  328361 command_runner.go:130] >       "size": "1363676",
	I1205 21:09:03.290577  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.290581  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.290585  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.290592  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.290595  328361 command_runner.go:130] >     },
	I1205 21:09:03.290599  328361 command_runner.go:130] >     {
	I1205 21:09:03.290607  328361 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1205 21:09:03.290611  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.290618  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 21:09:03.290624  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290628  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.290637  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1205 21:09:03.290649  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1205 21:09:03.290656  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290661  328361 command_runner.go:130] >       "size": "31470524",
	I1205 21:09:03.290667  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.290670  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.290674  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.290680  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.290684  328361 command_runner.go:130] >     },
	I1205 21:09:03.290689  328361 command_runner.go:130] >     {
	I1205 21:09:03.290695  328361 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1205 21:09:03.290702  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.290707  328361 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1205 21:09:03.290710  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290714  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.290721  328361 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1205 21:09:03.290730  328361 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1205 21:09:03.290733  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290737  328361 command_runner.go:130] >       "size": "63273227",
	I1205 21:09:03.290741  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.290746  328361 command_runner.go:130] >       "username": "nonroot",
	I1205 21:09:03.290750  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.290753  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.290757  328361 command_runner.go:130] >     },
	I1205 21:09:03.290762  328361 command_runner.go:130] >     {
	I1205 21:09:03.290768  328361 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1205 21:09:03.290774  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.290779  328361 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1205 21:09:03.290782  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290788  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.290795  328361 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1205 21:09:03.290801  328361 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1205 21:09:03.290807  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290811  328361 command_runner.go:130] >       "size": "149009664",
	I1205 21:09:03.290817  328361 command_runner.go:130] >       "uid": {
	I1205 21:09:03.290822  328361 command_runner.go:130] >         "value": "0"
	I1205 21:09:03.290830  328361 command_runner.go:130] >       },
	I1205 21:09:03.290833  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.290837  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.290841  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.290845  328361 command_runner.go:130] >     },
	I1205 21:09:03.290848  328361 command_runner.go:130] >     {
	I1205 21:09:03.290853  328361 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1205 21:09:03.290860  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.290865  328361 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1205 21:09:03.290870  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290874  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.290882  328361 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1205 21:09:03.290891  328361 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1205 21:09:03.290895  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290899  328361 command_runner.go:130] >       "size": "95274464",
	I1205 21:09:03.290903  328361 command_runner.go:130] >       "uid": {
	I1205 21:09:03.290907  328361 command_runner.go:130] >         "value": "0"
	I1205 21:09:03.290910  328361 command_runner.go:130] >       },
	I1205 21:09:03.290914  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.290918  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.290924  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.290927  328361 command_runner.go:130] >     },
	I1205 21:09:03.290931  328361 command_runner.go:130] >     {
	I1205 21:09:03.290936  328361 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1205 21:09:03.290942  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.290947  328361 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1205 21:09:03.290950  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290957  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.290972  328361 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1205 21:09:03.290982  328361 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1205 21:09:03.290986  328361 command_runner.go:130] >       ],
	I1205 21:09:03.290990  328361 command_runner.go:130] >       "size": "89474374",
	I1205 21:09:03.290994  328361 command_runner.go:130] >       "uid": {
	I1205 21:09:03.290999  328361 command_runner.go:130] >         "value": "0"
	I1205 21:09:03.291002  328361 command_runner.go:130] >       },
	I1205 21:09:03.291006  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.291009  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.291013  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.291016  328361 command_runner.go:130] >     },
	I1205 21:09:03.291019  328361 command_runner.go:130] >     {
	I1205 21:09:03.291025  328361 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1205 21:09:03.291029  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.291033  328361 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1205 21:09:03.291036  328361 command_runner.go:130] >       ],
	I1205 21:09:03.291044  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.291051  328361 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1205 21:09:03.291058  328361 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1205 21:09:03.291061  328361 command_runner.go:130] >       ],
	I1205 21:09:03.291065  328361 command_runner.go:130] >       "size": "92783513",
	I1205 21:09:03.291068  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.291072  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.291075  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.291079  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.291083  328361 command_runner.go:130] >     },
	I1205 21:09:03.291088  328361 command_runner.go:130] >     {
	I1205 21:09:03.291109  328361 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1205 21:09:03.291119  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.291124  328361 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1205 21:09:03.291128  328361 command_runner.go:130] >       ],
	I1205 21:09:03.291132  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.291138  328361 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1205 21:09:03.291145  328361 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1205 21:09:03.291151  328361 command_runner.go:130] >       ],
	I1205 21:09:03.291155  328361 command_runner.go:130] >       "size": "68457798",
	I1205 21:09:03.291159  328361 command_runner.go:130] >       "uid": {
	I1205 21:09:03.291163  328361 command_runner.go:130] >         "value": "0"
	I1205 21:09:03.291168  328361 command_runner.go:130] >       },
	I1205 21:09:03.291172  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.291176  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.291182  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.291185  328361 command_runner.go:130] >     },
	I1205 21:09:03.291188  328361 command_runner.go:130] >     {
	I1205 21:09:03.291194  328361 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1205 21:09:03.291239  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.291279  328361 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1205 21:09:03.291288  328361 command_runner.go:130] >       ],
	I1205 21:09:03.291295  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.291310  328361 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1205 21:09:03.291322  328361 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1205 21:09:03.291331  328361 command_runner.go:130] >       ],
	I1205 21:09:03.291337  328361 command_runner.go:130] >       "size": "742080",
	I1205 21:09:03.291347  328361 command_runner.go:130] >       "uid": {
	I1205 21:09:03.291367  328361 command_runner.go:130] >         "value": "65535"
	I1205 21:09:03.291377  328361 command_runner.go:130] >       },
	I1205 21:09:03.291383  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.291391  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.291398  328361 command_runner.go:130] >       "pinned": true
	I1205 21:09:03.291407  328361 command_runner.go:130] >     }
	I1205 21:09:03.291412  328361 command_runner.go:130] >   ]
	I1205 21:09:03.291419  328361 command_runner.go:130] > }
	I1205 21:09:03.291643  328361 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 21:09:03.291655  328361 crio.go:433] Images already preloaded, skipping extraction
	I1205 21:09:03.291712  328361 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:09:03.324462  328361 command_runner.go:130] > {
	I1205 21:09:03.324497  328361 command_runner.go:130] >   "images": [
	I1205 21:09:03.324504  328361 command_runner.go:130] >     {
	I1205 21:09:03.324515  328361 command_runner.go:130] >       "id": "3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52",
	I1205 21:09:03.324522  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.324546  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241007-36f62932"
	I1205 21:09:03.324553  328361 command_runner.go:130] >       ],
	I1205 21:09:03.324559  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.324572  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387",
	I1205 21:09:03.324583  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"
	I1205 21:09:03.324589  328361 command_runner.go:130] >       ],
	I1205 21:09:03.324597  328361 command_runner.go:130] >       "size": "94965812",
	I1205 21:09:03.324607  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.324613  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.324627  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.324631  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.324635  328361 command_runner.go:130] >     },
	I1205 21:09:03.324638  328361 command_runner.go:130] >     {
	I1205 21:09:03.324644  328361 command_runner.go:130] >       "id": "9ca7e41918271bb074bb20850743fd9455129b071204789f09fa2b7304d7fad5",
	I1205 21:09:03.324651  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.324655  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20241023-a345ebe4"
	I1205 21:09:03.324659  328361 command_runner.go:130] >       ],
	I1205 21:09:03.324666  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.324673  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:cddd34f7d74bf898f14080ed61e322a492689043dae46e93106c013373d68d16",
	I1205 21:09:03.324680  328361 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:e39a44bd13d0b4532d0436a1c2fafdd1a8c57fb327770004098162f0bb96132d"
	I1205 21:09:03.324686  328361 command_runner.go:130] >       ],
	I1205 21:09:03.324691  328361 command_runner.go:130] >       "size": "94958644",
	I1205 21:09:03.324694  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.324700  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.324706  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.324710  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.324713  328361 command_runner.go:130] >     },
	I1205 21:09:03.324719  328361 command_runner.go:130] >     {
	I1205 21:09:03.324727  328361 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I1205 21:09:03.324733  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.324738  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I1205 21:09:03.324742  328361 command_runner.go:130] >       ],
	I1205 21:09:03.324748  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.324755  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I1205 21:09:03.324762  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I1205 21:09:03.324769  328361 command_runner.go:130] >       ],
	I1205 21:09:03.324773  328361 command_runner.go:130] >       "size": "1363676",
	I1205 21:09:03.324777  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.324781  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.324796  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.324802  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.324805  328361 command_runner.go:130] >     },
	I1205 21:09:03.324808  328361 command_runner.go:130] >     {
	I1205 21:09:03.324814  328361 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I1205 21:09:03.324819  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.324825  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I1205 21:09:03.324828  328361 command_runner.go:130] >       ],
	I1205 21:09:03.324832  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.324840  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I1205 21:09:03.324853  328361 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I1205 21:09:03.324860  328361 command_runner.go:130] >       ],
	I1205 21:09:03.324864  328361 command_runner.go:130] >       "size": "31470524",
	I1205 21:09:03.324868  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.324872  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.324876  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.324882  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.324885  328361 command_runner.go:130] >     },
	I1205 21:09:03.324889  328361 command_runner.go:130] >     {
	I1205 21:09:03.324894  328361 command_runner.go:130] >       "id": "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6",
	I1205 21:09:03.324900  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.324905  328361 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.11.3"
	I1205 21:09:03.324909  328361 command_runner.go:130] >       ],
	I1205 21:09:03.324914  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.324921  328361 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e",
	I1205 21:09:03.324930  328361 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"
	I1205 21:09:03.324935  328361 command_runner.go:130] >       ],
	I1205 21:09:03.324941  328361 command_runner.go:130] >       "size": "63273227",
	I1205 21:09:03.324945  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.324950  328361 command_runner.go:130] >       "username": "nonroot",
	I1205 21:09:03.324954  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.324958  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.324961  328361 command_runner.go:130] >     },
	I1205 21:09:03.324967  328361 command_runner.go:130] >     {
	I1205 21:09:03.324973  328361 command_runner.go:130] >       "id": "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4",
	I1205 21:09:03.324979  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.324983  328361 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.15-0"
	I1205 21:09:03.324987  328361 command_runner.go:130] >       ],
	I1205 21:09:03.324991  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.324998  328361 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d",
	I1205 21:09:03.325006  328361 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"
	I1205 21:09:03.325010  328361 command_runner.go:130] >       ],
	I1205 21:09:03.325017  328361 command_runner.go:130] >       "size": "149009664",
	I1205 21:09:03.325020  328361 command_runner.go:130] >       "uid": {
	I1205 21:09:03.325024  328361 command_runner.go:130] >         "value": "0"
	I1205 21:09:03.325030  328361 command_runner.go:130] >       },
	I1205 21:09:03.325038  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.325041  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.325045  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.325051  328361 command_runner.go:130] >     },
	I1205 21:09:03.325054  328361 command_runner.go:130] >     {
	I1205 21:09:03.325060  328361 command_runner.go:130] >       "id": "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173",
	I1205 21:09:03.325065  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.325070  328361 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.31.2"
	I1205 21:09:03.325074  328361 command_runner.go:130] >       ],
	I1205 21:09:03.325078  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.325086  328361 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0",
	I1205 21:09:03.325093  328361 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"
	I1205 21:09:03.325096  328361 command_runner.go:130] >       ],
	I1205 21:09:03.325100  328361 command_runner.go:130] >       "size": "95274464",
	I1205 21:09:03.325104  328361 command_runner.go:130] >       "uid": {
	I1205 21:09:03.325108  328361 command_runner.go:130] >         "value": "0"
	I1205 21:09:03.325111  328361 command_runner.go:130] >       },
	I1205 21:09:03.325115  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.325121  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.325125  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.325129  328361 command_runner.go:130] >     },
	I1205 21:09:03.325131  328361 command_runner.go:130] >     {
	I1205 21:09:03.325137  328361 command_runner.go:130] >       "id": "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503",
	I1205 21:09:03.325143  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.325149  328361 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.31.2"
	I1205 21:09:03.325153  328361 command_runner.go:130] >       ],
	I1205 21:09:03.325159  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.325185  328361 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c",
	I1205 21:09:03.325202  328361 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"
	I1205 21:09:03.325207  328361 command_runner.go:130] >       ],
	I1205 21:09:03.325215  328361 command_runner.go:130] >       "size": "89474374",
	I1205 21:09:03.325224  328361 command_runner.go:130] >       "uid": {
	I1205 21:09:03.325230  328361 command_runner.go:130] >         "value": "0"
	I1205 21:09:03.325237  328361 command_runner.go:130] >       },
	I1205 21:09:03.325244  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.325250  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.325260  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.325266  328361 command_runner.go:130] >     },
	I1205 21:09:03.325274  328361 command_runner.go:130] >     {
	I1205 21:09:03.325284  328361 command_runner.go:130] >       "id": "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38",
	I1205 21:09:03.325293  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.325301  328361 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.31.2"
	I1205 21:09:03.325310  328361 command_runner.go:130] >       ],
	I1205 21:09:03.325318  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.325325  328361 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b",
	I1205 21:09:03.325337  328361 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"
	I1205 21:09:03.325343  328361 command_runner.go:130] >       ],
	I1205 21:09:03.325350  328361 command_runner.go:130] >       "size": "92783513",
	I1205 21:09:03.325359  328361 command_runner.go:130] >       "uid": null,
	I1205 21:09:03.325366  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.325376  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.325382  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.325391  328361 command_runner.go:130] >     },
	I1205 21:09:03.325397  328361 command_runner.go:130] >     {
	I1205 21:09:03.325411  328361 command_runner.go:130] >       "id": "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856",
	I1205 21:09:03.325418  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.325426  328361 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.31.2"
	I1205 21:09:03.325434  328361 command_runner.go:130] >       ],
	I1205 21:09:03.325441  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.325455  328361 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282",
	I1205 21:09:03.325465  328361 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"
	I1205 21:09:03.325469  328361 command_runner.go:130] >       ],
	I1205 21:09:03.325473  328361 command_runner.go:130] >       "size": "68457798",
	I1205 21:09:03.325477  328361 command_runner.go:130] >       "uid": {
	I1205 21:09:03.325481  328361 command_runner.go:130] >         "value": "0"
	I1205 21:09:03.325484  328361 command_runner.go:130] >       },
	I1205 21:09:03.325488  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.325492  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.325498  328361 command_runner.go:130] >       "pinned": false
	I1205 21:09:03.325507  328361 command_runner.go:130] >     },
	I1205 21:09:03.325513  328361 command_runner.go:130] >     {
	I1205 21:09:03.325526  328361 command_runner.go:130] >       "id": "873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136",
	I1205 21:09:03.325543  328361 command_runner.go:130] >       "repoTags": [
	I1205 21:09:03.325554  328361 command_runner.go:130] >         "registry.k8s.io/pause:3.10"
	I1205 21:09:03.325561  328361 command_runner.go:130] >       ],
	I1205 21:09:03.325570  328361 command_runner.go:130] >       "repoDigests": [
	I1205 21:09:03.325581  328361 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a",
	I1205 21:09:03.325590  328361 command_runner.go:130] >         "registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"
	I1205 21:09:03.325594  328361 command_runner.go:130] >       ],
	I1205 21:09:03.325627  328361 command_runner.go:130] >       "size": "742080",
	I1205 21:09:03.325650  328361 command_runner.go:130] >       "uid": {
	I1205 21:09:03.325654  328361 command_runner.go:130] >         "value": "65535"
	I1205 21:09:03.325657  328361 command_runner.go:130] >       },
	I1205 21:09:03.325661  328361 command_runner.go:130] >       "username": "",
	I1205 21:09:03.325665  328361 command_runner.go:130] >       "spec": null,
	I1205 21:09:03.325669  328361 command_runner.go:130] >       "pinned": true
	I1205 21:09:03.325673  328361 command_runner.go:130] >     }
	I1205 21:09:03.325676  328361 command_runner.go:130] >   ]
	I1205 21:09:03.325679  328361 command_runner.go:130] > }
	I1205 21:09:03.325811  328361 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 21:09:03.325823  328361 cache_images.go:84] Images are preloaded, skipping loading
	I1205 21:09:03.325831  328361 kubeadm.go:934] updating node { 192.168.39.221 8443 v1.31.2 crio true true} ...
	I1205 21:09:03.325962  328361 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-784478 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.221
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:multinode-784478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:09:03.326038  328361 ssh_runner.go:195] Run: crio config
	I1205 21:09:03.366903  328361 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I1205 21:09:03.366950  328361 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I1205 21:09:03.366961  328361 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I1205 21:09:03.366967  328361 command_runner.go:130] > #
	I1205 21:09:03.366980  328361 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I1205 21:09:03.366990  328361 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I1205 21:09:03.367001  328361 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I1205 21:09:03.367012  328361 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I1205 21:09:03.367019  328361 command_runner.go:130] > # reload'.
	I1205 21:09:03.367028  328361 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I1205 21:09:03.367043  328361 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I1205 21:09:03.367057  328361 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I1205 21:09:03.367068  328361 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I1205 21:09:03.367077  328361 command_runner.go:130] > [crio]
	I1205 21:09:03.367089  328361 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I1205 21:09:03.367115  328361 command_runner.go:130] > # containers images, in this directory.
	I1205 21:09:03.367223  328361 command_runner.go:130] > root = "/var/lib/containers/storage"
	I1205 21:09:03.367254  328361 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I1205 21:09:03.367262  328361 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I1205 21:09:03.367274  328361 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I1205 21:09:03.367282  328361 command_runner.go:130] > # imagestore = ""
	I1205 21:09:03.367293  328361 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I1205 21:09:03.367310  328361 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I1205 21:09:03.367318  328361 command_runner.go:130] > storage_driver = "overlay"
	I1205 21:09:03.367328  328361 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I1205 21:09:03.367340  328361 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I1205 21:09:03.367347  328361 command_runner.go:130] > storage_option = [
	I1205 21:09:03.367658  328361 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I1205 21:09:03.367669  328361 command_runner.go:130] > ]
	I1205 21:09:03.367675  328361 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I1205 21:09:03.367692  328361 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I1205 21:09:03.367701  328361 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I1205 21:09:03.367708  328361 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I1205 21:09:03.367718  328361 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I1205 21:09:03.367726  328361 command_runner.go:130] > # always happen on a node reboot
	I1205 21:09:03.367736  328361 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I1205 21:09:03.367757  328361 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I1205 21:09:03.367766  328361 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I1205 21:09:03.367771  328361 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I1205 21:09:03.367776  328361 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I1205 21:09:03.367783  328361 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I1205 21:09:03.367793  328361 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I1205 21:09:03.367802  328361 command_runner.go:130] > # internal_wipe = true
	I1205 21:09:03.367817  328361 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I1205 21:09:03.367829  328361 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I1205 21:09:03.367839  328361 command_runner.go:130] > # internal_repair = false
	I1205 21:09:03.367848  328361 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I1205 21:09:03.367860  328361 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I1205 21:09:03.367872  328361 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I1205 21:09:03.367879  328361 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I1205 21:09:03.367888  328361 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I1205 21:09:03.367898  328361 command_runner.go:130] > [crio.api]
	I1205 21:09:03.367907  328361 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I1205 21:09:03.367921  328361 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I1205 21:09:03.367932  328361 command_runner.go:130] > # IP address on which the stream server will listen.
	I1205 21:09:03.367938  328361 command_runner.go:130] > # stream_address = "127.0.0.1"
	I1205 21:09:03.367951  328361 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I1205 21:09:03.367963  328361 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I1205 21:09:03.367970  328361 command_runner.go:130] > # stream_port = "0"
	I1205 21:09:03.367982  328361 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I1205 21:09:03.367997  328361 command_runner.go:130] > # stream_enable_tls = false
	I1205 21:09:03.368007  328361 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I1205 21:09:03.368020  328361 command_runner.go:130] > # stream_idle_timeout = ""
	I1205 21:09:03.368030  328361 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I1205 21:09:03.368043  328361 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I1205 21:09:03.368052  328361 command_runner.go:130] > # minutes.
	I1205 21:09:03.368059  328361 command_runner.go:130] > # stream_tls_cert = ""
	I1205 21:09:03.368076  328361 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I1205 21:09:03.368090  328361 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I1205 21:09:03.368097  328361 command_runner.go:130] > # stream_tls_key = ""
	I1205 21:09:03.368110  328361 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I1205 21:09:03.368122  328361 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I1205 21:09:03.368139  328361 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I1205 21:09:03.368146  328361 command_runner.go:130] > # stream_tls_ca = ""
	I1205 21:09:03.368156  328361 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I1205 21:09:03.368166  328361 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I1205 21:09:03.368178  328361 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I1205 21:09:03.368189  328361 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
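	For reference, 16777216 bytes is 16 × 1024 × 1024 (16 MiB), so this configuration lowers both gRPC message-size limits well below the 80 MiB default noted in the comments above.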
	I1205 21:09:03.368198  328361 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I1205 21:09:03.368210  328361 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I1205 21:09:03.368218  328361 command_runner.go:130] > [crio.runtime]
	I1205 21:09:03.368232  328361 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I1205 21:09:03.368244  328361 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I1205 21:09:03.368254  328361 command_runner.go:130] > # "nofile=1024:2048"
	I1205 21:09:03.368264  328361 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I1205 21:09:03.368273  328361 command_runner.go:130] > # default_ulimits = [
	I1205 21:09:03.368279  328361 command_runner.go:130] > # ]
	I1205 21:09:03.368292  328361 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I1205 21:09:03.368301  328361 command_runner.go:130] > # no_pivot = false
	I1205 21:09:03.368310  328361 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I1205 21:09:03.368320  328361 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I1205 21:09:03.368328  328361 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I1205 21:09:03.368342  328361 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I1205 21:09:03.368353  328361 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I1205 21:09:03.368365  328361 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 21:09:03.368376  328361 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I1205 21:09:03.368383  328361 command_runner.go:130] > # Cgroup setting for conmon
	I1205 21:09:03.368398  328361 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I1205 21:09:03.368409  328361 command_runner.go:130] > conmon_cgroup = "pod"
	I1205 21:09:03.368423  328361 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I1205 21:09:03.368435  328361 command_runner.go:130] > # environment variables to conmon or the runtime.
	I1205 21:09:03.368445  328361 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I1205 21:09:03.368454  328361 command_runner.go:130] > conmon_env = [
	I1205 21:09:03.368463  328361 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1205 21:09:03.368477  328361 command_runner.go:130] > ]
	I1205 21:09:03.368489  328361 command_runner.go:130] > # Additional environment variables to set for all the
	I1205 21:09:03.368499  328361 command_runner.go:130] > # containers. These are overridden if set in the
	I1205 21:09:03.368524  328361 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I1205 21:09:03.368533  328361 command_runner.go:130] > # default_env = [
	I1205 21:09:03.368539  328361 command_runner.go:130] > # ]
	I1205 21:09:03.368551  328361 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I1205 21:09:03.368561  328361 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I1205 21:09:03.368568  328361 command_runner.go:130] > # selinux = false
	I1205 21:09:03.368574  328361 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I1205 21:09:03.368580  328361 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I1205 21:09:03.368587  328361 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I1205 21:09:03.368593  328361 command_runner.go:130] > # seccomp_profile = ""
	I1205 21:09:03.368605  328361 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I1205 21:09:03.368622  328361 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I1205 21:09:03.368635  328361 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I1205 21:09:03.368647  328361 command_runner.go:130] > # which might increase security.
	I1205 21:09:03.368655  328361 command_runner.go:130] > # This option is currently deprecated,
	I1205 21:09:03.368667  328361 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I1205 21:09:03.368675  328361 command_runner.go:130] > seccomp_use_default_when_empty = false
	I1205 21:09:03.368688  328361 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I1205 21:09:03.368702  328361 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I1205 21:09:03.368715  328361 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I1205 21:09:03.368729  328361 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I1205 21:09:03.368740  328361 command_runner.go:130] > # This option supports live configuration reload.
	I1205 21:09:03.368751  328361 command_runner.go:130] > # apparmor_profile = "crio-default"
	I1205 21:09:03.368765  328361 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I1205 21:09:03.368773  328361 command_runner.go:130] > # the cgroup blockio controller.
	I1205 21:09:03.368782  328361 command_runner.go:130] > # blockio_config_file = ""
	I1205 21:09:03.368792  328361 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I1205 21:09:03.368800  328361 command_runner.go:130] > # blockio parameters.
	I1205 21:09:03.368807  328361 command_runner.go:130] > # blockio_reload = false
	I1205 21:09:03.368819  328361 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I1205 21:09:03.368828  328361 command_runner.go:130] > # irqbalance daemon.
	I1205 21:09:03.368837  328361 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I1205 21:09:03.368850  328361 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I1205 21:09:03.368866  328361 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I1205 21:09:03.368877  328361 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I1205 21:09:03.368898  328361 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I1205 21:09:03.368912  328361 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I1205 21:09:03.368926  328361 command_runner.go:130] > # This option supports live configuration reload.
	I1205 21:09:03.368936  328361 command_runner.go:130] > # rdt_config_file = ""
	I1205 21:09:03.368945  328361 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I1205 21:09:03.368960  328361 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I1205 21:09:03.368982  328361 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I1205 21:09:03.368993  328361 command_runner.go:130] > # separate_pull_cgroup = ""
	I1205 21:09:03.369004  328361 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I1205 21:09:03.369017  328361 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I1205 21:09:03.369027  328361 command_runner.go:130] > # will be added.
	I1205 21:09:03.369034  328361 command_runner.go:130] > # default_capabilities = [
	I1205 21:09:03.369043  328361 command_runner.go:130] > # 	"CHOWN",
	I1205 21:09:03.369049  328361 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I1205 21:09:03.369060  328361 command_runner.go:130] > # 	"FSETID",
	I1205 21:09:03.369066  328361 command_runner.go:130] > # 	"FOWNER",
	I1205 21:09:03.369074  328361 command_runner.go:130] > # 	"SETGID",
	I1205 21:09:03.369080  328361 command_runner.go:130] > # 	"SETUID",
	I1205 21:09:03.369089  328361 command_runner.go:130] > # 	"SETPCAP",
	I1205 21:09:03.369100  328361 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I1205 21:09:03.369109  328361 command_runner.go:130] > # 	"KILL",
	I1205 21:09:03.369117  328361 command_runner.go:130] > # ]
	I1205 21:09:03.369132  328361 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I1205 21:09:03.369147  328361 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I1205 21:09:03.369158  328361 command_runner.go:130] > # add_inheritable_capabilities = false
	I1205 21:09:03.369170  328361 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I1205 21:09:03.369183  328361 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 21:09:03.369193  328361 command_runner.go:130] > default_sysctls = [
	I1205 21:09:03.369200  328361 command_runner.go:130] > 	"net.ipv4.ip_unprivileged_port_start=0",
	I1205 21:09:03.369208  328361 command_runner.go:130] > ]
	I1205 21:09:03.369216  328361 command_runner.go:130] > # List of devices on the host that a
	I1205 21:09:03.369229  328361 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I1205 21:09:03.369237  328361 command_runner.go:130] > # allowed_devices = [
	I1205 21:09:03.369241  328361 command_runner.go:130] > # 	"/dev/fuse",
	I1205 21:09:03.369244  328361 command_runner.go:130] > # ]
	I1205 21:09:03.369249  328361 command_runner.go:130] > # List of additional devices, specified as
	I1205 21:09:03.369261  328361 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I1205 21:09:03.369272  328361 command_runner.go:130] > # If it is empty or commented out, only the devices
	I1205 21:09:03.369281  328361 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I1205 21:09:03.369290  328361 command_runner.go:130] > # additional_devices = [
	I1205 21:09:03.369296  328361 command_runner.go:130] > # ]
	I1205 21:09:03.369308  328361 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I1205 21:09:03.369322  328361 command_runner.go:130] > # cdi_spec_dirs = [
	I1205 21:09:03.369328  328361 command_runner.go:130] > # 	"/etc/cdi",
	I1205 21:09:03.369337  328361 command_runner.go:130] > # 	"/var/run/cdi",
	I1205 21:09:03.369342  328361 command_runner.go:130] > # ]
	I1205 21:09:03.369352  328361 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I1205 21:09:03.369365  328361 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I1205 21:09:03.369375  328361 command_runner.go:130] > # Defaults to false.
	I1205 21:09:03.369384  328361 command_runner.go:130] > # device_ownership_from_security_context = false
	I1205 21:09:03.369396  328361 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I1205 21:09:03.369409  328361 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I1205 21:09:03.369413  328361 command_runner.go:130] > # hooks_dir = [
	I1205 21:09:03.369419  328361 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I1205 21:09:03.369424  328361 command_runner.go:130] > # ]
	I1205 21:09:03.369432  328361 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I1205 21:09:03.369442  328361 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I1205 21:09:03.369454  328361 command_runner.go:130] > # its default mounts from the following two files:
	I1205 21:09:03.369459  328361 command_runner.go:130] > #
	I1205 21:09:03.369469  328361 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I1205 21:09:03.369482  328361 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I1205 21:09:03.369494  328361 command_runner.go:130] > #      override the default mounts shipped with the package.
	I1205 21:09:03.369503  328361 command_runner.go:130] > #
	I1205 21:09:03.369519  328361 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I1205 21:09:03.369532  328361 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I1205 21:09:03.369546  328361 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I1205 21:09:03.369557  328361 command_runner.go:130] > #      only add mounts it finds in this file.
	I1205 21:09:03.369562  328361 command_runner.go:130] > #
	I1205 21:09:03.369573  328361 command_runner.go:130] > # default_mounts_file = ""
	I1205 21:09:03.369585  328361 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I1205 21:09:03.369599  328361 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I1205 21:09:03.369608  328361 command_runner.go:130] > pids_limit = 1024
	I1205 21:09:03.369619  328361 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I1205 21:09:03.369632  328361 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I1205 21:09:03.369643  328361 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I1205 21:09:03.369659  328361 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I1205 21:09:03.369668  328361 command_runner.go:130] > # log_size_max = -1
	I1205 21:09:03.369679  328361 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I1205 21:09:03.369689  328361 command_runner.go:130] > # log_to_journald = false
	I1205 21:09:03.369699  328361 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I1205 21:09:03.369710  328361 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I1205 21:09:03.369724  328361 command_runner.go:130] > # Path to directory for container attach sockets.
	I1205 21:09:03.369735  328361 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I1205 21:09:03.369744  328361 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I1205 21:09:03.369753  328361 command_runner.go:130] > # bind_mount_prefix = ""
	I1205 21:09:03.369762  328361 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I1205 21:09:03.369770  328361 command_runner.go:130] > # read_only = false
	I1205 21:09:03.369783  328361 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I1205 21:09:03.369796  328361 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I1205 21:09:03.369806  328361 command_runner.go:130] > # live configuration reload.
	I1205 21:09:03.369813  328361 command_runner.go:130] > # log_level = "info"
	I1205 21:09:03.369826  328361 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I1205 21:09:03.369835  328361 command_runner.go:130] > # This option supports live configuration reload.
	I1205 21:09:03.369843  328361 command_runner.go:130] > # log_filter = ""
	I1205 21:09:03.369849  328361 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I1205 21:09:03.369855  328361 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I1205 21:09:03.369859  328361 command_runner.go:130] > # separated by comma.
	I1205 21:09:03.369866  328361 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 21:09:03.369873  328361 command_runner.go:130] > # uid_mappings = ""
	I1205 21:09:03.369879  328361 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I1205 21:09:03.369885  328361 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I1205 21:09:03.369888  328361 command_runner.go:130] > # separated by comma.
	I1205 21:09:03.369895  328361 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 21:09:03.369921  328361 command_runner.go:130] > # gid_mappings = ""
	I1205 21:09:03.369934  328361 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I1205 21:09:03.369946  328361 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 21:09:03.369959  328361 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 21:09:03.369974  328361 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 21:09:03.369984  328361 command_runner.go:130] > # minimum_mappable_uid = -1
	I1205 21:09:03.369994  328361 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I1205 21:09:03.370007  328361 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I1205 21:09:03.370020  328361 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I1205 21:09:03.370032  328361 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I1205 21:09:03.370042  328361 command_runner.go:130] > # minimum_mappable_gid = -1
	I1205 21:09:03.370051  328361 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I1205 21:09:03.370063  328361 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I1205 21:09:03.370078  328361 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I1205 21:09:03.370091  328361 command_runner.go:130] > # ctr_stop_timeout = 30
	I1205 21:09:03.370103  328361 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I1205 21:09:03.370115  328361 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I1205 21:09:03.370129  328361 command_runner.go:130] > # a kernel separating runtime (like kata).
	I1205 21:09:03.370137  328361 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I1205 21:09:03.370146  328361 command_runner.go:130] > drop_infra_ctr = false
	I1205 21:09:03.370157  328361 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I1205 21:09:03.370169  328361 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I1205 21:09:03.370182  328361 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I1205 21:09:03.370186  328361 command_runner.go:130] > # infra_ctr_cpuset = ""
	I1205 21:09:03.370197  328361 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I1205 21:09:03.370211  328361 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I1205 21:09:03.370224  328361 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I1205 21:09:03.370235  328361 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I1205 21:09:03.370245  328361 command_runner.go:130] > # shared_cpuset = ""
	I1205 21:09:03.370256  328361 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I1205 21:09:03.370268  328361 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I1205 21:09:03.370277  328361 command_runner.go:130] > # namespaces_dir = "/var/run"
	I1205 21:09:03.370288  328361 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I1205 21:09:03.370296  328361 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I1205 21:09:03.370306  328361 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I1205 21:09:03.370321  328361 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I1205 21:09:03.370328  328361 command_runner.go:130] > # enable_criu_support = false
	I1205 21:09:03.370336  328361 command_runner.go:130] > # Enable/disable the generation of the container,
	I1205 21:09:03.370345  328361 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I1205 21:09:03.370356  328361 command_runner.go:130] > # enable_pod_events = false
	I1205 21:09:03.370367  328361 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I1205 21:09:03.370392  328361 command_runner.go:130] > # The name is matched against the runtimes map below.
	I1205 21:09:03.370399  328361 command_runner.go:130] > # default_runtime = "runc"
	I1205 21:09:03.370410  328361 command_runner.go:130] > # A list of paths that, when absent from the host,
	I1205 21:09:03.370424  328361 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I1205 21:09:03.370438  328361 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I1205 21:09:03.370449  328361 command_runner.go:130] > # creation as a file is not desired either.
	I1205 21:09:03.370462  328361 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I1205 21:09:03.370479  328361 command_runner.go:130] > # the hostname is being managed dynamically.
	I1205 21:09:03.370491  328361 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I1205 21:09:03.370500  328361 command_runner.go:130] > # ]
	I1205 21:09:03.370515  328361 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I1205 21:09:03.370529  328361 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I1205 21:09:03.370542  328361 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I1205 21:09:03.370554  328361 command_runner.go:130] > # Each entry in the table should follow the format:
	I1205 21:09:03.370562  328361 command_runner.go:130] > #
	I1205 21:09:03.370573  328361 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I1205 21:09:03.370584  328361 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I1205 21:09:03.370609  328361 command_runner.go:130] > # runtime_type = "oci"
	I1205 21:09:03.370616  328361 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I1205 21:09:03.370624  328361 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I1205 21:09:03.370631  328361 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I1205 21:09:03.370643  328361 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I1205 21:09:03.370649  328361 command_runner.go:130] > # monitor_env = []
	I1205 21:09:03.370660  328361 command_runner.go:130] > # privileged_without_host_devices = false
	I1205 21:09:03.370667  328361 command_runner.go:130] > # allowed_annotations = []
	I1205 21:09:03.370676  328361 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I1205 21:09:03.370682  328361 command_runner.go:130] > # Where:
	I1205 21:09:03.370690  328361 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I1205 21:09:03.370700  328361 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I1205 21:09:03.370709  328361 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I1205 21:09:03.370722  328361 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I1205 21:09:03.370732  328361 command_runner.go:130] > #   in $PATH.
	I1205 21:09:03.370742  328361 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I1205 21:09:03.370753  328361 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I1205 21:09:03.370763  328361 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I1205 21:09:03.370771  328361 command_runner.go:130] > #   state.
	I1205 21:09:03.370781  328361 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I1205 21:09:03.370789  328361 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I1205 21:09:03.370799  328361 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I1205 21:09:03.370812  328361 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I1205 21:09:03.370825  328361 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I1205 21:09:03.370840  328361 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I1205 21:09:03.370850  328361 command_runner.go:130] > #   The currently recognized values are:
	I1205 21:09:03.370860  328361 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I1205 21:09:03.370875  328361 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I1205 21:09:03.370892  328361 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I1205 21:09:03.370905  328361 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I1205 21:09:03.370921  328361 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I1205 21:09:03.370934  328361 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I1205 21:09:03.370947  328361 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I1205 21:09:03.370957  328361 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I1205 21:09:03.370969  328361 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I1205 21:09:03.370982  328361 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I1205 21:09:03.370993  328361 command_runner.go:130] > #   deprecated option "conmon".
	I1205 21:09:03.371008  328361 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I1205 21:09:03.371019  328361 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I1205 21:09:03.371032  328361 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I1205 21:09:03.371043  328361 command_runner.go:130] > #   should be moved to the container's cgroup
	I1205 21:09:03.371058  328361 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I1205 21:09:03.371070  328361 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I1205 21:09:03.371083  328361 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I1205 21:09:03.371095  328361 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I1205 21:09:03.371104  328361 command_runner.go:130] > #
	I1205 21:09:03.371112  328361 command_runner.go:130] > # Using the seccomp notifier feature:
	I1205 21:09:03.371120  328361 command_runner.go:130] > #
	I1205 21:09:03.371129  328361 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I1205 21:09:03.371142  328361 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I1205 21:09:03.371151  328361 command_runner.go:130] > #
	I1205 21:09:03.371161  328361 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I1205 21:09:03.371174  328361 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I1205 21:09:03.371183  328361 command_runner.go:130] > #
	I1205 21:09:03.371193  328361 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I1205 21:09:03.371203  328361 command_runner.go:130] > # feature.
	I1205 21:09:03.371208  328361 command_runner.go:130] > #
	I1205 21:09:03.371223  328361 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I1205 21:09:03.371235  328361 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I1205 21:09:03.371249  328361 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I1205 21:09:03.371259  328361 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I1205 21:09:03.371269  328361 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I1205 21:09:03.371278  328361 command_runner.go:130] > #
	I1205 21:09:03.371292  328361 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I1205 21:09:03.371306  328361 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I1205 21:09:03.371312  328361 command_runner.go:130] > #
	I1205 21:09:03.371326  328361 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I1205 21:09:03.371337  328361 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I1205 21:09:03.371346  328361 command_runner.go:130] > #
	I1205 21:09:03.371359  328361 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I1205 21:09:03.371371  328361 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I1205 21:09:03.371377  328361 command_runner.go:130] > # limitation.
	I1205 21:09:03.371382  328361 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I1205 21:09:03.371392  328361 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I1205 21:09:03.371398  328361 command_runner.go:130] > runtime_type = "oci"
	I1205 21:09:03.371408  328361 command_runner.go:130] > runtime_root = "/run/runc"
	I1205 21:09:03.371414  328361 command_runner.go:130] > runtime_config_path = ""
	I1205 21:09:03.371422  328361 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I1205 21:09:03.371433  328361 command_runner.go:130] > monitor_cgroup = "pod"
	I1205 21:09:03.371440  328361 command_runner.go:130] > monitor_exec_cgroup = ""
	I1205 21:09:03.371449  328361 command_runner.go:130] > monitor_env = [
	I1205 21:09:03.371459  328361 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I1205 21:09:03.371468  328361 command_runner.go:130] > ]
	I1205 21:09:03.371475  328361 command_runner.go:130] > privileged_without_host_devices = false
	I1205 21:09:03.371488  328361 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I1205 21:09:03.371500  328361 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I1205 21:09:03.371516  328361 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I1205 21:09:03.371529  328361 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I1205 21:09:03.371545  328361 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I1205 21:09:03.371558  328361 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I1205 21:09:03.371579  328361 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I1205 21:09:03.371595  328361 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I1205 21:09:03.371609  328361 command_runner.go:130] > # signifying for that resource type to override the default value.
	I1205 21:09:03.371621  328361 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I1205 21:09:03.371627  328361 command_runner.go:130] > # Example:
	I1205 21:09:03.371634  328361 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I1205 21:09:03.371642  328361 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I1205 21:09:03.371650  328361 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I1205 21:09:03.371659  328361 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I1205 21:09:03.371665  328361 command_runner.go:130] > # cpuset = 0
	I1205 21:09:03.371671  328361 command_runner.go:130] > # cpushares = "0-1"
	I1205 21:09:03.371677  328361 command_runner.go:130] > # Where:
	I1205 21:09:03.371686  328361 command_runner.go:130] > # The workload name is workload-type.
	I1205 21:09:03.371693  328361 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I1205 21:09:03.371703  328361 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I1205 21:09:03.371713  328361 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I1205 21:09:03.371725  328361 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I1205 21:09:03.371734  328361 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I1205 21:09:03.371742  328361 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I1205 21:09:03.371753  328361 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I1205 21:09:03.371759  328361 command_runner.go:130] > # Default value is set to true
	I1205 21:09:03.371766  328361 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I1205 21:09:03.371773  328361 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I1205 21:09:03.371777  328361 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I1205 21:09:03.371784  328361 command_runner.go:130] > # Default value is set to 'false'
	I1205 21:09:03.371791  328361 command_runner.go:130] > # disable_hostport_mapping = false
	I1205 21:09:03.371802  328361 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I1205 21:09:03.371807  328361 command_runner.go:130] > #
	I1205 21:09:03.371819  328361 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I1205 21:09:03.371828  328361 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I1205 21:09:03.371842  328361 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I1205 21:09:03.371855  328361 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I1205 21:09:03.371863  328361 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I1205 21:09:03.371872  328361 command_runner.go:130] > [crio.image]
	I1205 21:09:03.371886  328361 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I1205 21:09:03.371896  328361 command_runner.go:130] > # default_transport = "docker://"
	I1205 21:09:03.371909  328361 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I1205 21:09:03.371921  328361 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I1205 21:09:03.371930  328361 command_runner.go:130] > # global_auth_file = ""
	I1205 21:09:03.371941  328361 command_runner.go:130] > # The image used to instantiate infra containers.
	I1205 21:09:03.371952  328361 command_runner.go:130] > # This option supports live configuration reload.
	I1205 21:09:03.371962  328361 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.10"
	I1205 21:09:03.371977  328361 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I1205 21:09:03.371991  328361 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I1205 21:09:03.372002  328361 command_runner.go:130] > # This option supports live configuration reload.
	I1205 21:09:03.372012  328361 command_runner.go:130] > # pause_image_auth_file = ""
	I1205 21:09:03.372116  328361 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I1205 21:09:03.372146  328361 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I1205 21:09:03.372168  328361 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I1205 21:09:03.372180  328361 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I1205 21:09:03.372272  328361 command_runner.go:130] > # pause_command = "/pause"
	I1205 21:09:03.372297  328361 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I1205 21:09:03.372312  328361 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I1205 21:09:03.372326  328361 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I1205 21:09:03.372341  328361 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I1205 21:09:03.372355  328361 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I1205 21:09:03.372367  328361 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I1205 21:09:03.372379  328361 command_runner.go:130] > # pinned_images = [
	I1205 21:09:03.372390  328361 command_runner.go:130] > # ]
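	As an illustrative sketch of the three matching styles described above (the image names are assumptions, not values from this run), a populated list could look like:

	  pinned_images = [
	  	"registry.k8s.io/pause:3.10",   # exact match: must match the entire name
	  	"registry.k8s.io/kube-*",       # glob match: wildcard at the end
	  	"*coredns*",                    # keyword match: wildcards on both ends
	  ]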
	I1205 21:09:03.372404  328361 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I1205 21:09:03.372420  328361 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I1205 21:09:03.372435  328361 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I1205 21:09:03.372446  328361 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I1205 21:09:03.372454  328361 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I1205 21:09:03.372459  328361 command_runner.go:130] > # signature_policy = ""
	I1205 21:09:03.372470  328361 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I1205 21:09:03.372495  328361 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I1205 21:09:03.372510  328361 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I1205 21:09:03.372524  328361 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I1205 21:09:03.372538  328361 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I1205 21:09:03.372549  328361 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I1205 21:09:03.372565  328361 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I1205 21:09:03.372581  328361 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I1205 21:09:03.372593  328361 command_runner.go:130] > # changing them here.
	I1205 21:09:03.372605  328361 command_runner.go:130] > # insecure_registries = [
	I1205 21:09:03.372615  328361 command_runner.go:130] > # ]
	I1205 21:09:03.372629  328361 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I1205 21:09:03.372642  328361 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I1205 21:09:03.372649  328361 command_runner.go:130] > # image_volumes = "mkdir"
	I1205 21:09:03.372656  328361 command_runner.go:130] > # Temporary directory to use for storing big files
	I1205 21:09:03.372664  328361 command_runner.go:130] > # big_files_temporary_dir = ""
	I1205 21:09:03.372675  328361 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I1205 21:09:03.372683  328361 command_runner.go:130] > # CNI plugins.
	I1205 21:09:03.372689  328361 command_runner.go:130] > [crio.network]
	I1205 21:09:03.372700  328361 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I1205 21:09:03.372719  328361 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I1205 21:09:03.372732  328361 command_runner.go:130] > # cni_default_network = ""
	I1205 21:09:03.372746  328361 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I1205 21:09:03.372758  328361 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I1205 21:09:03.372770  328361 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I1205 21:09:03.372779  328361 command_runner.go:130] > # plugin_dirs = [
	I1205 21:09:03.372786  328361 command_runner.go:130] > # 	"/opt/cni/bin/",
	I1205 21:09:03.372796  328361 command_runner.go:130] > # ]
	I1205 21:09:03.372807  328361 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I1205 21:09:03.372817  328361 command_runner.go:130] > [crio.metrics]
	I1205 21:09:03.372828  328361 command_runner.go:130] > # Globally enable or disable metrics support.
	I1205 21:09:03.372835  328361 command_runner.go:130] > enable_metrics = true
	I1205 21:09:03.372847  328361 command_runner.go:130] > # Specify enabled metrics collectors.
	I1205 21:09:03.372859  328361 command_runner.go:130] > # Per default all metrics are enabled.
	I1205 21:09:03.372878  328361 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I1205 21:09:03.372891  328361 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I1205 21:09:03.372905  328361 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I1205 21:09:03.372915  328361 command_runner.go:130] > # metrics_collectors = [
	I1205 21:09:03.372923  328361 command_runner.go:130] > # 	"operations",
	I1205 21:09:03.372943  328361 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I1205 21:09:03.372952  328361 command_runner.go:130] > # 	"operations_latency_microseconds",
	I1205 21:09:03.372960  328361 command_runner.go:130] > # 	"operations_errors",
	I1205 21:09:03.372969  328361 command_runner.go:130] > # 	"image_pulls_by_digest",
	I1205 21:09:03.372978  328361 command_runner.go:130] > # 	"image_pulls_by_name",
	I1205 21:09:03.372990  328361 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I1205 21:09:03.372998  328361 command_runner.go:130] > # 	"image_pulls_failures",
	I1205 21:09:03.373008  328361 command_runner.go:130] > # 	"image_pulls_successes",
	I1205 21:09:03.373013  328361 command_runner.go:130] > # 	"image_pulls_layer_size",
	I1205 21:09:03.373020  328361 command_runner.go:130] > # 	"image_layer_reuse",
	I1205 21:09:03.373025  328361 command_runner.go:130] > # 	"containers_events_dropped_total",
	I1205 21:09:03.373030  328361 command_runner.go:130] > # 	"containers_oom_total",
	I1205 21:09:03.373035  328361 command_runner.go:130] > # 	"containers_oom",
	I1205 21:09:03.373040  328361 command_runner.go:130] > # 	"processes_defunct",
	I1205 21:09:03.373044  328361 command_runner.go:130] > # 	"operations_total",
	I1205 21:09:03.373049  328361 command_runner.go:130] > # 	"operations_latency_seconds",
	I1205 21:09:03.373054  328361 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I1205 21:09:03.373060  328361 command_runner.go:130] > # 	"operations_errors_total",
	I1205 21:09:03.373065  328361 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I1205 21:09:03.373072  328361 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I1205 21:09:03.373077  328361 command_runner.go:130] > # 	"image_pulls_failure_total",
	I1205 21:09:03.373082  328361 command_runner.go:130] > # 	"image_pulls_success_total",
	I1205 21:09:03.373087  328361 command_runner.go:130] > # 	"image_layer_reuse_total",
	I1205 21:09:03.373092  328361 command_runner.go:130] > # 	"containers_oom_count_total",
	I1205 21:09:03.373098  328361 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I1205 21:09:03.373102  328361 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I1205 21:09:03.373115  328361 command_runner.go:130] > # ]
	I1205 21:09:03.373124  328361 command_runner.go:130] > # The port on which the metrics server will listen.
	I1205 21:09:03.373130  328361 command_runner.go:130] > # metrics_port = 9090
	I1205 21:09:03.373138  328361 command_runner.go:130] > # Local socket path to bind the metrics server to
	I1205 21:09:03.373144  328361 command_runner.go:130] > # metrics_socket = ""
	I1205 21:09:03.373149  328361 command_runner.go:130] > # The certificate for the secure metrics server.
	I1205 21:09:03.373158  328361 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I1205 21:09:03.373167  328361 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I1205 21:09:03.373175  328361 command_runner.go:130] > # certificate on any modification event.
	I1205 21:09:03.373179  328361 command_runner.go:130] > # metrics_cert = ""
	I1205 21:09:03.373187  328361 command_runner.go:130] > # The certificate key for the secure metrics server.
	I1205 21:09:03.373192  328361 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I1205 21:09:03.373196  328361 command_runner.go:130] > # metrics_key = ""
	I1205 21:09:03.373203  328361 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I1205 21:09:03.373207  328361 command_runner.go:130] > [crio.tracing]
	I1205 21:09:03.373213  328361 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I1205 21:09:03.373219  328361 command_runner.go:130] > # enable_tracing = false
	I1205 21:09:03.373224  328361 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I1205 21:09:03.373229  328361 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I1205 21:09:03.373241  328361 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I1205 21:09:03.373250  328361 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I1205 21:09:03.373254  328361 command_runner.go:130] > # CRI-O NRI configuration.
	I1205 21:09:03.373260  328361 command_runner.go:130] > [crio.nri]
	I1205 21:09:03.373264  328361 command_runner.go:130] > # Globally enable or disable NRI.
	I1205 21:09:03.373268  328361 command_runner.go:130] > # enable_nri = false
	I1205 21:09:03.373272  328361 command_runner.go:130] > # NRI socket to listen on.
	I1205 21:09:03.373276  328361 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I1205 21:09:03.373283  328361 command_runner.go:130] > # NRI plugin directory to use.
	I1205 21:09:03.373288  328361 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I1205 21:09:03.373292  328361 command_runner.go:130] > # NRI plugin configuration directory to use.
	I1205 21:09:03.373300  328361 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I1205 21:09:03.373305  328361 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I1205 21:09:03.373312  328361 command_runner.go:130] > # nri_disable_connections = false
	I1205 21:09:03.373317  328361 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I1205 21:09:03.373322  328361 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I1205 21:09:03.373331  328361 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I1205 21:09:03.373338  328361 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I1205 21:09:03.373345  328361 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I1205 21:09:03.373351  328361 command_runner.go:130] > [crio.stats]
	I1205 21:09:03.373357  328361 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I1205 21:09:03.373365  328361 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I1205 21:09:03.373370  328361 command_runner.go:130] > # stats_collection_period = 0
	I1205 21:09:03.373394  328361 command_runner.go:130] ! time="2024-12-05 21:09:03.336802496Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I1205 21:09:03.373413  328361 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I1205 21:09:03.373495  328361 cni.go:84] Creating CNI manager for ""
	I1205 21:09:03.373506  328361 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I1205 21:09:03.373527  328361 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:09:03.373556  328361 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.221 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-784478 NodeName:multinode-784478 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.221"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.221 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:09:03.373686  328361 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.221
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-784478"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.221"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.221"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:09:03.373769  328361 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:09:03.385007  328361 command_runner.go:130] > kubeadm
	I1205 21:09:03.385037  328361 command_runner.go:130] > kubectl
	I1205 21:09:03.385042  328361 command_runner.go:130] > kubelet
	I1205 21:09:03.385070  328361 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:09:03.385124  328361 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:09:03.395115  328361 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1205 21:09:03.413601  328361 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:09:03.431025  328361 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2296 bytes)
	I1205 21:09:03.448659  328361 ssh_runner.go:195] Run: grep 192.168.39.221	control-plane.minikube.internal$ /etc/hosts
	I1205 21:09:03.452751  328361 command_runner.go:130] > 192.168.39.221	control-plane.minikube.internal
	I1205 21:09:03.452856  328361 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:09:03.602655  328361 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:09:03.617016  328361 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478 for IP: 192.168.39.221
	I1205 21:09:03.617056  328361 certs.go:194] generating shared ca certs ...
	I1205 21:09:03.617090  328361 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:09:03.617338  328361 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:09:03.617420  328361 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:09:03.617440  328361 certs.go:256] generating profile certs ...
	I1205 21:09:03.617591  328361 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/client.key
	I1205 21:09:03.617720  328361 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/apiserver.key.cf7f1278
	I1205 21:09:03.617775  328361 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/proxy-client.key
	I1205 21:09:03.617789  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I1205 21:09:03.617804  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I1205 21:09:03.617820  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1205 21:09:03.617834  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1205 21:09:03.617850  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1205 21:09:03.617866  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1205 21:09:03.617881  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1205 21:09:03.617928  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1205 21:09:03.617995  328361 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:09:03.618032  328361 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:09:03.618045  328361 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:09:03.618070  328361 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:09:03.618097  328361 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:09:03.618125  328361 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:09:03.618168  328361 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:09:03.618203  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem -> /usr/share/ca-certificates/300765.pem
	I1205 21:09:03.618227  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> /usr/share/ca-certificates/3007652.pem
	I1205 21:09:03.618249  328361 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:09:03.618925  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:09:03.643833  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:09:03.667221  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:09:03.690858  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:09:03.716413  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 21:09:03.741888  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 21:09:03.766125  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:09:03.790422  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/multinode-784478/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 21:09:03.814715  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:09:03.840125  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:09:03.866425  328361 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:09:03.890707  328361 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:09:03.908638  328361 ssh_runner.go:195] Run: openssl version
	I1205 21:09:03.914789  328361 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I1205 21:09:03.914872  328361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:09:03.925739  328361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:09:03.930764  328361 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:09:03.930898  328361 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:09:03.930964  328361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:09:03.936453  328361 command_runner.go:130] > b5213941
	I1205 21:09:03.936680  328361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:09:03.946269  328361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:09:03.957115  328361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:09:03.961809  328361 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:09:03.961850  328361 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:09:03.961925  328361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:09:03.967354  328361 command_runner.go:130] > 51391683
	I1205 21:09:03.967555  328361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:09:03.977618  328361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:09:03.989286  328361 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:09:03.993976  328361 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:09:03.994014  328361 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:09:03.994066  328361 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:09:03.999672  328361 command_runner.go:130] > 3ec20f2e
	I1205 21:09:03.999835  328361 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:09:04.009807  328361 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:09:04.014820  328361 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:09:04.014844  328361 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1205 21:09:04.014850  328361 command_runner.go:130] > Device: 253,1	Inode: 8385582     Links: 1
	I1205 21:09:04.014856  328361 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1205 21:09:04.014862  328361 command_runner.go:130] > Access: 2024-12-05 21:02:12.156328037 +0000
	I1205 21:09:04.014867  328361 command_runner.go:130] > Modify: 2024-12-05 21:02:12.156328037 +0000
	I1205 21:09:04.014871  328361 command_runner.go:130] > Change: 2024-12-05 21:02:12.156328037 +0000
	I1205 21:09:04.014876  328361 command_runner.go:130] >  Birth: 2024-12-05 21:02:12.156328037 +0000
	I1205 21:09:04.015021  328361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:09:04.021043  328361 command_runner.go:130] > Certificate will not expire
	I1205 21:09:04.021139  328361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:09:04.027104  328361 command_runner.go:130] > Certificate will not expire
	I1205 21:09:04.027184  328361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:09:04.033341  328361 command_runner.go:130] > Certificate will not expire
	I1205 21:09:04.033429  328361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:09:04.039115  328361 command_runner.go:130] > Certificate will not expire
	I1205 21:09:04.039193  328361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:09:04.044953  328361 command_runner.go:130] > Certificate will not expire
	I1205 21:09:04.045063  328361 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 21:09:04.050706  328361 command_runner.go:130] > Certificate will not expire
	I1205 21:09:04.050808  328361 kubeadm.go:392] StartCluster: {Name:multinode-784478 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:multinode-784478 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.221 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.213 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.231 Port:0 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:09:04.050962  328361 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:09:04.051041  328361 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:09:04.088733  328361 command_runner.go:130] > 7ae6c8c63b666abcd70534dc44b983d5a2ac7068a9c0d2735c8f333171e6704a
	I1205 21:09:04.088778  328361 command_runner.go:130] > 23954e6fc7030a61b309420f9c1ee92c19c3163be22fe724e2693f329bc258fa
	I1205 21:09:04.088789  328361 command_runner.go:130] > c4ae8603237b7363d337191b759dff81f3803834d7a78b887283bfec8f374c22
	I1205 21:09:04.088801  328361 command_runner.go:130] > c7439ccea25b3c22c01349b0281f7d919942c651f6cdd50c7420ac9900fdcf97
	I1205 21:09:04.088811  328361 command_runner.go:130] > a035a4c35a3d37f88dda7585de31856477aa11367b734a800adf1e640ee184b8
	I1205 21:09:04.088821  328361 command_runner.go:130] > d54a20ed49c2cb44f1ccfd184701fefdbac3bdd2f0552340c9b9e05fd665b99d
	I1205 21:09:04.088830  328361 command_runner.go:130] > 835f877ded47ea873773ee1bec77d608d253f45badc6fac50e78b4979967c1f3
	I1205 21:09:04.088842  328361 command_runner.go:130] > 34aa39e4fed308ba370200aec168d9a4ac4d311778e26424db4ffdfc05ab9516
	I1205 21:09:04.088851  328361 command_runner.go:130] > f28e9f0aedb62867545b31452c8208ffc66fc1d6d01dd71719292a8e8ed9d2f1
	I1205 21:09:04.088886  328361 cri.go:89] found id: "7ae6c8c63b666abcd70534dc44b983d5a2ac7068a9c0d2735c8f333171e6704a"
	I1205 21:09:04.088899  328361 cri.go:89] found id: "23954e6fc7030a61b309420f9c1ee92c19c3163be22fe724e2693f329bc258fa"
	I1205 21:09:04.088906  328361 cri.go:89] found id: "c4ae8603237b7363d337191b759dff81f3803834d7a78b887283bfec8f374c22"
	I1205 21:09:04.088910  328361 cri.go:89] found id: "c7439ccea25b3c22c01349b0281f7d919942c651f6cdd50c7420ac9900fdcf97"
	I1205 21:09:04.088918  328361 cri.go:89] found id: "a035a4c35a3d37f88dda7585de31856477aa11367b734a800adf1e640ee184b8"
	I1205 21:09:04.088921  328361 cri.go:89] found id: "d54a20ed49c2cb44f1ccfd184701fefdbac3bdd2f0552340c9b9e05fd665b99d"
	I1205 21:09:04.088925  328361 cri.go:89] found id: "835f877ded47ea873773ee1bec77d608d253f45badc6fac50e78b4979967c1f3"
	I1205 21:09:04.088928  328361 cri.go:89] found id: "34aa39e4fed308ba370200aec168d9a4ac4d311778e26424db4ffdfc05ab9516"
	I1205 21:09:04.088931  328361 cri.go:89] found id: "f28e9f0aedb62867545b31452c8208ffc66fc1d6d01dd71719292a8e8ed9d2f1"
	I1205 21:09:04.088937  328361 cri.go:89] found id: ""
	I1205 21:09:04.088993  328361 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-784478 -n multinode-784478
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-784478 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (145.29s)
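Note on the post-mortem above: before restarting the cluster, minikube enumerates the existing kube-system containers (the cri.go:54/89 lines and the `sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"` call), treating each non-empty output line as a container ID. The following is a minimal Go sketch of that enumeration pattern run locally; it is an illustrative approximation only, not minikube's implementation, and it assumes crictl is installed on the machine and reachable via sudo.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listKubeSystemContainers shells out to crictl and returns the IDs of all
// containers (running or exited) whose pod lives in the kube-system namespace.
// This mirrors the "crictl ps -a --quiet --label ..." call seen in the log,
// but runs locally instead of over minikube's ssh_runner.
func listKubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, fmt.Errorf("crictl ps failed: %w", err)
	}
	var ids []string
	for _, line := range strings.Split(string(out), "\n") {
		if id := strings.TrimSpace(line); id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listKubeSystemContainers()
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	for _, id := range ids {
		fmt.Println("found id:", id)
	}
}

Each non-empty output line is one container ID, which corresponds to the "found id:" entries logged in the post-mortem above.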

                                                
                                    
TestPreload (171.79s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-455032 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E1205 21:18:16.325920  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-455032 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m35.295275252s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-455032 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-455032 image pull gcr.io/k8s-minikube/busybox: (2.496660074s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-455032
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-455032: (7.305318647s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-455032 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-455032 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m3.601064788s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-455032 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
panic.go:629: *** TestPreload FAILED at 2024-12-05 21:20:03.773995847 +0000 UTC m=+3645.748588508
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-455032 -n test-preload-455032
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-455032 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-455032 logs -n 25: (1.158742439s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-784478 ssh -n                                                                 | multinode-784478     | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n multinode-784478 sudo cat                                       | multinode-784478     | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | /home/docker/cp-test_multinode-784478-m03_multinode-784478.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-784478 cp multinode-784478-m03:/home/docker/cp-test.txt                       | multinode-784478     | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m02:/home/docker/cp-test_multinode-784478-m03_multinode-784478-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n                                                                 | multinode-784478     | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | multinode-784478-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-784478 ssh -n multinode-784478-m02 sudo cat                                   | multinode-784478     | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	|         | /home/docker/cp-test_multinode-784478-m03_multinode-784478-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-784478 node stop m03                                                          | multinode-784478     | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:04 UTC |
	| node    | multinode-784478 node start                                                             | multinode-784478     | jenkins | v1.34.0 | 05 Dec 24 21:04 UTC | 05 Dec 24 21:05 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                      |         |         |                     |                     |
	| node    | list -p multinode-784478                                                                | multinode-784478     | jenkins | v1.34.0 | 05 Dec 24 21:05 UTC |                     |
	| stop    | -p multinode-784478                                                                     | multinode-784478     | jenkins | v1.34.0 | 05 Dec 24 21:05 UTC |                     |
	| start   | -p multinode-784478                                                                     | multinode-784478     | jenkins | v1.34.0 | 05 Dec 24 21:07 UTC | 05 Dec 24 21:10 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-784478                                                                | multinode-784478     | jenkins | v1.34.0 | 05 Dec 24 21:10 UTC |                     |
	| node    | multinode-784478 node delete                                                            | multinode-784478     | jenkins | v1.34.0 | 05 Dec 24 21:10 UTC | 05 Dec 24 21:10 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-784478 stop                                                                   | multinode-784478     | jenkins | v1.34.0 | 05 Dec 24 21:10 UTC |                     |
	| start   | -p multinode-784478                                                                     | multinode-784478     | jenkins | v1.34.0 | 05 Dec 24 21:13 UTC | 05 Dec 24 21:16 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-784478                                                                | multinode-784478     | jenkins | v1.34.0 | 05 Dec 24 21:16 UTC |                     |
	| start   | -p multinode-784478-m02                                                                 | multinode-784478-m02 | jenkins | v1.34.0 | 05 Dec 24 21:16 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-784478-m03                                                                 | multinode-784478-m03 | jenkins | v1.34.0 | 05 Dec 24 21:16 UTC | 05 Dec 24 21:17 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-784478                                                                 | multinode-784478     | jenkins | v1.34.0 | 05 Dec 24 21:17 UTC |                     |
	| delete  | -p multinode-784478-m03                                                                 | multinode-784478-m03 | jenkins | v1.34.0 | 05 Dec 24 21:17 UTC | 05 Dec 24 21:17 UTC |
	| delete  | -p multinode-784478                                                                     | multinode-784478     | jenkins | v1.34.0 | 05 Dec 24 21:17 UTC | 05 Dec 24 21:17 UTC |
	| start   | -p test-preload-455032                                                                  | test-preload-455032  | jenkins | v1.34.0 | 05 Dec 24 21:17 UTC | 05 Dec 24 21:18 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-455032 image pull                                                          | test-preload-455032  | jenkins | v1.34.0 | 05 Dec 24 21:18 UTC | 05 Dec 24 21:18 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-455032                                                                  | test-preload-455032  | jenkins | v1.34.0 | 05 Dec 24 21:18 UTC | 05 Dec 24 21:18 UTC |
	| start   | -p test-preload-455032                                                                  | test-preload-455032  | jenkins | v1.34.0 | 05 Dec 24 21:18 UTC | 05 Dec 24 21:20 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-455032 image list                                                          | test-preload-455032  | jenkins | v1.34.0 | 05 Dec 24 21:20 UTC | 05 Dec 24 21:20 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 21:18:59
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 21:18:59.989890  333265 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:18:59.990028  333265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:18:59.990037  333265 out.go:358] Setting ErrFile to fd 2...
	I1205 21:18:59.990041  333265 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:18:59.990233  333265 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 21:18:59.990788  333265 out.go:352] Setting JSON to false
	I1205 21:18:59.991752  333265 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":14488,"bootTime":1733419052,"procs":180,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 21:18:59.991869  333265 start.go:139] virtualization: kvm guest
	I1205 21:18:59.994287  333265 out.go:177] * [test-preload-455032] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 21:18:59.995965  333265 notify.go:220] Checking for updates...
	I1205 21:18:59.995977  333265 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 21:18:59.997404  333265 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 21:18:59.998640  333265 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:19:00.000229  333265 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:19:00.001753  333265 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 21:19:00.003109  333265 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 21:19:00.004808  333265 config.go:182] Loaded profile config "test-preload-455032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1205 21:19:00.005299  333265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:19:00.005365  333265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:19:00.020784  333265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41701
	I1205 21:19:00.021322  333265 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:19:00.021983  333265 main.go:141] libmachine: Using API Version  1
	I1205 21:19:00.022007  333265 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:19:00.022419  333265 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:19:00.022627  333265 main.go:141] libmachine: (test-preload-455032) Calling .DriverName
	I1205 21:19:00.024507  333265 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1205 21:19:00.025839  333265 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 21:19:00.026323  333265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:19:00.026389  333265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:19:00.041703  333265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45301
	I1205 21:19:00.042178  333265 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:19:00.042781  333265 main.go:141] libmachine: Using API Version  1
	I1205 21:19:00.042810  333265 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:19:00.043154  333265 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:19:00.043400  333265 main.go:141] libmachine: (test-preload-455032) Calling .DriverName
	I1205 21:19:00.081873  333265 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 21:19:00.083157  333265 start.go:297] selected driver: kvm2
	I1205 21:19:00.083185  333265 start.go:901] validating driver "kvm2" against &{Name:test-preload-455032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-455032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:19:00.083339  333265 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 21:19:00.084209  333265 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:19:00.084315  333265 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 21:19:00.100777  333265 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 21:19:00.101343  333265 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:19:00.101400  333265 cni.go:84] Creating CNI manager for ""
	I1205 21:19:00.101451  333265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:19:00.101525  333265 start.go:340] cluster config:
	{Name:test-preload-455032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-455032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:19:00.102196  333265 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:19:00.104362  333265 out.go:177] * Starting "test-preload-455032" primary control-plane node in "test-preload-455032" cluster
	I1205 21:19:00.105819  333265 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1205 21:19:00.136606  333265 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1205 21:19:00.136663  333265 cache.go:56] Caching tarball of preloaded images
	I1205 21:19:00.136838  333265 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1205 21:19:00.138694  333265 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I1205 21:19:00.140064  333265 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1205 21:19:00.171206  333265 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I1205 21:19:06.016436  333265 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1205 21:19:06.016552  333265 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I1205 21:19:06.903180  333265 cache.go:59] Finished verifying existence of preloaded tar for v1.24.4 on crio
	I1205 21:19:06.903318  333265 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/test-preload-455032/config.json ...
	I1205 21:19:06.903556  333265 start.go:360] acquireMachinesLock for test-preload-455032: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:19:06.903633  333265 start.go:364] duration metric: took 52.81µs to acquireMachinesLock for "test-preload-455032"
	I1205 21:19:06.903656  333265 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:19:06.903667  333265 fix.go:54] fixHost starting: 
	I1205 21:19:06.903955  333265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:19:06.904001  333265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:19:06.919605  333265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44481
	I1205 21:19:06.920133  333265 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:19:06.920734  333265 main.go:141] libmachine: Using API Version  1
	I1205 21:19:06.920771  333265 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:19:06.921105  333265 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:19:06.921318  333265 main.go:141] libmachine: (test-preload-455032) Calling .DriverName
	I1205 21:19:06.921475  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetState
	I1205 21:19:06.923250  333265 fix.go:112] recreateIfNeeded on test-preload-455032: state=Stopped err=<nil>
	I1205 21:19:06.923311  333265 main.go:141] libmachine: (test-preload-455032) Calling .DriverName
	W1205 21:19:06.923477  333265 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:19:06.925446  333265 out.go:177] * Restarting existing kvm2 VM for "test-preload-455032" ...
	I1205 21:19:06.926746  333265 main.go:141] libmachine: (test-preload-455032) Calling .Start
	I1205 21:19:06.926934  333265 main.go:141] libmachine: (test-preload-455032) Ensuring networks are active...
	I1205 21:19:06.927803  333265 main.go:141] libmachine: (test-preload-455032) Ensuring network default is active
	I1205 21:19:06.928138  333265 main.go:141] libmachine: (test-preload-455032) Ensuring network mk-test-preload-455032 is active
	I1205 21:19:06.928511  333265 main.go:141] libmachine: (test-preload-455032) Getting domain xml...
	I1205 21:19:06.929321  333265 main.go:141] libmachine: (test-preload-455032) Creating domain...
	I1205 21:19:08.161658  333265 main.go:141] libmachine: (test-preload-455032) Waiting to get IP...
	I1205 21:19:08.162616  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:08.163001  333265 main.go:141] libmachine: (test-preload-455032) DBG | unable to find current IP address of domain test-preload-455032 in network mk-test-preload-455032
	I1205 21:19:08.163082  333265 main.go:141] libmachine: (test-preload-455032) DBG | I1205 21:19:08.163009  333317 retry.go:31] will retry after 239.635209ms: waiting for machine to come up
	I1205 21:19:08.404927  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:08.405555  333265 main.go:141] libmachine: (test-preload-455032) DBG | unable to find current IP address of domain test-preload-455032 in network mk-test-preload-455032
	I1205 21:19:08.405584  333265 main.go:141] libmachine: (test-preload-455032) DBG | I1205 21:19:08.405513  333317 retry.go:31] will retry after 253.346477ms: waiting for machine to come up
	I1205 21:19:08.661182  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:08.661659  333265 main.go:141] libmachine: (test-preload-455032) DBG | unable to find current IP address of domain test-preload-455032 in network mk-test-preload-455032
	I1205 21:19:08.661694  333265 main.go:141] libmachine: (test-preload-455032) DBG | I1205 21:19:08.661595  333317 retry.go:31] will retry after 448.317929ms: waiting for machine to come up
	I1205 21:19:09.111349  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:09.111872  333265 main.go:141] libmachine: (test-preload-455032) DBG | unable to find current IP address of domain test-preload-455032 in network mk-test-preload-455032
	I1205 21:19:09.111896  333265 main.go:141] libmachine: (test-preload-455032) DBG | I1205 21:19:09.111816  333317 retry.go:31] will retry after 415.364016ms: waiting for machine to come up
	I1205 21:19:09.528598  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:09.529166  333265 main.go:141] libmachine: (test-preload-455032) DBG | unable to find current IP address of domain test-preload-455032 in network mk-test-preload-455032
	I1205 21:19:09.529199  333265 main.go:141] libmachine: (test-preload-455032) DBG | I1205 21:19:09.529098  333317 retry.go:31] will retry after 733.232324ms: waiting for machine to come up
	I1205 21:19:10.264162  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:10.264621  333265 main.go:141] libmachine: (test-preload-455032) DBG | unable to find current IP address of domain test-preload-455032 in network mk-test-preload-455032
	I1205 21:19:10.264652  333265 main.go:141] libmachine: (test-preload-455032) DBG | I1205 21:19:10.264547  333317 retry.go:31] will retry after 840.627442ms: waiting for machine to come up
	I1205 21:19:11.107114  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:11.107596  333265 main.go:141] libmachine: (test-preload-455032) DBG | unable to find current IP address of domain test-preload-455032 in network mk-test-preload-455032
	I1205 21:19:11.107628  333265 main.go:141] libmachine: (test-preload-455032) DBG | I1205 21:19:11.107515  333317 retry.go:31] will retry after 1.034571678s: waiting for machine to come up
	I1205 21:19:12.143357  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:12.143729  333265 main.go:141] libmachine: (test-preload-455032) DBG | unable to find current IP address of domain test-preload-455032 in network mk-test-preload-455032
	I1205 21:19:12.143759  333265 main.go:141] libmachine: (test-preload-455032) DBG | I1205 21:19:12.143689  333317 retry.go:31] will retry after 1.303627899s: waiting for machine to come up
	I1205 21:19:13.449318  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:13.449763  333265 main.go:141] libmachine: (test-preload-455032) DBG | unable to find current IP address of domain test-preload-455032 in network mk-test-preload-455032
	I1205 21:19:13.449793  333265 main.go:141] libmachine: (test-preload-455032) DBG | I1205 21:19:13.449702  333317 retry.go:31] will retry after 1.400448538s: waiting for machine to come up
	I1205 21:19:14.852100  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:14.852528  333265 main.go:141] libmachine: (test-preload-455032) DBG | unable to find current IP address of domain test-preload-455032 in network mk-test-preload-455032
	I1205 21:19:14.852576  333265 main.go:141] libmachine: (test-preload-455032) DBG | I1205 21:19:14.852452  333317 retry.go:31] will retry after 2.036064584s: waiting for machine to come up
	I1205 21:19:16.891763  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:16.892279  333265 main.go:141] libmachine: (test-preload-455032) DBG | unable to find current IP address of domain test-preload-455032 in network mk-test-preload-455032
	I1205 21:19:16.892311  333265 main.go:141] libmachine: (test-preload-455032) DBG | I1205 21:19:16.892222  333317 retry.go:31] will retry after 2.849254171s: waiting for machine to come up
	I1205 21:19:19.742898  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:19.743332  333265 main.go:141] libmachine: (test-preload-455032) DBG | unable to find current IP address of domain test-preload-455032 in network mk-test-preload-455032
	I1205 21:19:19.743363  333265 main.go:141] libmachine: (test-preload-455032) DBG | I1205 21:19:19.743277  333317 retry.go:31] will retry after 2.589769258s: waiting for machine to come up
	I1205 21:19:22.335209  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:22.335604  333265 main.go:141] libmachine: (test-preload-455032) DBG | unable to find current IP address of domain test-preload-455032 in network mk-test-preload-455032
	I1205 21:19:22.335636  333265 main.go:141] libmachine: (test-preload-455032) DBG | I1205 21:19:22.335555  333317 retry.go:31] will retry after 4.291577358s: waiting for machine to come up
	I1205 21:19:26.632186  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:26.632647  333265 main.go:141] libmachine: (test-preload-455032) Found IP for machine: 192.168.39.155
	I1205 21:19:26.632670  333265 main.go:141] libmachine: (test-preload-455032) Reserving static IP address...
	I1205 21:19:26.632682  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has current primary IP address 192.168.39.155 and MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:26.633032  333265 main.go:141] libmachine: (test-preload-455032) Reserved static IP address: 192.168.39.155
	I1205 21:19:26.633049  333265 main.go:141] libmachine: (test-preload-455032) Waiting for SSH to be available...
	I1205 21:19:26.633067  333265 main.go:141] libmachine: (test-preload-455032) DBG | found host DHCP lease matching {name: "test-preload-455032", mac: "52:54:00:6d:62:98", ip: "192.168.39.155"} in network mk-test-preload-455032: {Iface:virbr1 ExpiryTime:2024-12-05 22:19:17 +0000 UTC Type:0 Mac:52:54:00:6d:62:98 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:test-preload-455032 Clientid:01:52:54:00:6d:62:98}
	I1205 21:19:26.633087  333265 main.go:141] libmachine: (test-preload-455032) DBG | skip adding static IP to network mk-test-preload-455032 - found existing host DHCP lease matching {name: "test-preload-455032", mac: "52:54:00:6d:62:98", ip: "192.168.39.155"}
	I1205 21:19:26.633100  333265 main.go:141] libmachine: (test-preload-455032) DBG | Getting to WaitForSSH function...
	I1205 21:19:26.635504  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:26.635890  333265 main.go:141] libmachine: (test-preload-455032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:62:98", ip: ""} in network mk-test-preload-455032: {Iface:virbr1 ExpiryTime:2024-12-05 22:19:17 +0000 UTC Type:0 Mac:52:54:00:6d:62:98 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:test-preload-455032 Clientid:01:52:54:00:6d:62:98}
	I1205 21:19:26.635918  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined IP address 192.168.39.155 and MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:26.636057  333265 main.go:141] libmachine: (test-preload-455032) DBG | Using SSH client type: external
	I1205 21:19:26.636088  333265 main.go:141] libmachine: (test-preload-455032) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/test-preload-455032/id_rsa (-rw-------)
	I1205 21:19:26.636119  333265 main.go:141] libmachine: (test-preload-455032) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.155 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/test-preload-455032/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:19:26.636134  333265 main.go:141] libmachine: (test-preload-455032) DBG | About to run SSH command:
	I1205 21:19:26.636152  333265 main.go:141] libmachine: (test-preload-455032) DBG | exit 0
	I1205 21:19:26.762445  333265 main.go:141] libmachine: (test-preload-455032) DBG | SSH cmd err, output: <nil>: 
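
The retry.go:31 lines above show libmachine polling the hypervisor's DHCP leases with a growing delay until the domain reports an IP, then probing SSH with `exit 0`. The following is a minimal, self-contained sketch of that wait-with-backoff pattern in Go; lookupIP, the base delay, and the growth factor are illustrative assumptions, not minikube's actual implementation.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // lookupIP is a stand-in for querying the hypervisor's DHCP leases; it
    // returns an error until the guest has an address (hypothetical helper).
    func lookupIP() (string, error) {
    	return "", errors.New("unable to find current IP address of domain")
    }

    // waitForIP polls lookupIP, sleeping a little longer (with jitter) after
    // each failure, and gives up once the deadline passes.
    func waitForIP(timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	delay := 300 * time.Millisecond // base delay, grows each attempt
    	for attempt := 1; ; attempt++ {
    		ip, err := lookupIP()
    		if err == nil {
    			return ip, nil
    		}
    		if time.Now().After(deadline) {
    			return "", fmt.Errorf("timed out after %d attempts: %w", attempt, err)
    		}
    		// add up to 50% random jitter so concurrent waiters do not sync up
    		sleep := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("attempt %d failed, will retry after %s\n", attempt, sleep)
    		time.Sleep(sleep)
    		delay = delay * 3 / 2 // grow the backoff
    	}
    }

    func main() {
    	if ip, err := waitForIP(5*time.Second); err != nil {
    		fmt.Println("error:", err)
    	} else {
    		fmt.Println("found IP:", ip)
    	}
    }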
	I1205 21:19:26.762857  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetConfigRaw
	I1205 21:19:26.763735  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetIP
	I1205 21:19:26.766544  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:26.766953  333265 main.go:141] libmachine: (test-preload-455032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:62:98", ip: ""} in network mk-test-preload-455032: {Iface:virbr1 ExpiryTime:2024-12-05 22:19:17 +0000 UTC Type:0 Mac:52:54:00:6d:62:98 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:test-preload-455032 Clientid:01:52:54:00:6d:62:98}
	I1205 21:19:26.766989  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined IP address 192.168.39.155 and MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:26.767240  333265 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/test-preload-455032/config.json ...
	I1205 21:19:26.767466  333265 machine.go:93] provisionDockerMachine start ...
	I1205 21:19:26.767487  333265 main.go:141] libmachine: (test-preload-455032) Calling .DriverName
	I1205 21:19:26.767727  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHHostname
	I1205 21:19:26.770067  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:26.770450  333265 main.go:141] libmachine: (test-preload-455032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:62:98", ip: ""} in network mk-test-preload-455032: {Iface:virbr1 ExpiryTime:2024-12-05 22:19:17 +0000 UTC Type:0 Mac:52:54:00:6d:62:98 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:test-preload-455032 Clientid:01:52:54:00:6d:62:98}
	I1205 21:19:26.770477  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined IP address 192.168.39.155 and MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:26.770669  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHPort
	I1205 21:19:26.770867  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHKeyPath
	I1205 21:19:26.771050  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHKeyPath
	I1205 21:19:26.771204  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHUsername
	I1205 21:19:26.771446  333265 main.go:141] libmachine: Using SSH client type: native
	I1205 21:19:26.771703  333265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I1205 21:19:26.771717  333265 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:19:26.874389  333265 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:19:26.874425  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetMachineName
	I1205 21:19:26.874745  333265 buildroot.go:166] provisioning hostname "test-preload-455032"
	I1205 21:19:26.874776  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetMachineName
	I1205 21:19:26.875024  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHHostname
	I1205 21:19:26.877597  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:26.878067  333265 main.go:141] libmachine: (test-preload-455032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:62:98", ip: ""} in network mk-test-preload-455032: {Iface:virbr1 ExpiryTime:2024-12-05 22:19:17 +0000 UTC Type:0 Mac:52:54:00:6d:62:98 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:test-preload-455032 Clientid:01:52:54:00:6d:62:98}
	I1205 21:19:26.878097  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined IP address 192.168.39.155 and MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:26.878321  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHPort
	I1205 21:19:26.878565  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHKeyPath
	I1205 21:19:26.878743  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHKeyPath
	I1205 21:19:26.878998  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHUsername
	I1205 21:19:26.879234  333265 main.go:141] libmachine: Using SSH client type: native
	I1205 21:19:26.879483  333265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I1205 21:19:26.879501  333265 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-455032 && echo "test-preload-455032" | sudo tee /etc/hostname
	I1205 21:19:26.996069  333265 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-455032
	
	I1205 21:19:26.996110  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHHostname
	I1205 21:19:26.998889  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:26.999222  333265 main.go:141] libmachine: (test-preload-455032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:62:98", ip: ""} in network mk-test-preload-455032: {Iface:virbr1 ExpiryTime:2024-12-05 22:19:17 +0000 UTC Type:0 Mac:52:54:00:6d:62:98 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:test-preload-455032 Clientid:01:52:54:00:6d:62:98}
	I1205 21:19:26.999248  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined IP address 192.168.39.155 and MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:26.999513  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHPort
	I1205 21:19:26.999804  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHKeyPath
	I1205 21:19:27.000004  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHKeyPath
	I1205 21:19:27.000192  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHUsername
	I1205 21:19:27.000441  333265 main.go:141] libmachine: Using SSH client type: native
	I1205 21:19:27.000659  333265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I1205 21:19:27.000676  333265 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-455032' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-455032/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-455032' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:19:27.115154  333265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:19:27.115195  333265 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:19:27.115252  333265 buildroot.go:174] setting up certificates
	I1205 21:19:27.115298  333265 provision.go:84] configureAuth start
	I1205 21:19:27.115330  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetMachineName
	I1205 21:19:27.115661  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetIP
	I1205 21:19:27.118891  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:27.119322  333265 main.go:141] libmachine: (test-preload-455032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:62:98", ip: ""} in network mk-test-preload-455032: {Iface:virbr1 ExpiryTime:2024-12-05 22:19:17 +0000 UTC Type:0 Mac:52:54:00:6d:62:98 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:test-preload-455032 Clientid:01:52:54:00:6d:62:98}
	I1205 21:19:27.119367  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined IP address 192.168.39.155 and MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:27.119522  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHHostname
	I1205 21:19:27.121767  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:27.122059  333265 main.go:141] libmachine: (test-preload-455032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:62:98", ip: ""} in network mk-test-preload-455032: {Iface:virbr1 ExpiryTime:2024-12-05 22:19:17 +0000 UTC Type:0 Mac:52:54:00:6d:62:98 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:test-preload-455032 Clientid:01:52:54:00:6d:62:98}
	I1205 21:19:27.122090  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined IP address 192.168.39.155 and MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:27.122217  333265 provision.go:143] copyHostCerts
	I1205 21:19:27.122299  333265 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:19:27.122329  333265 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:19:27.122427  333265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:19:27.122549  333265 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:19:27.122560  333265 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:19:27.122598  333265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:19:27.122681  333265 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:19:27.122691  333265 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:19:27.122725  333265 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:19:27.122794  333265 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.test-preload-455032 san=[127.0.0.1 192.168.39.155 localhost minikube test-preload-455032]
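
provision.go:117 above generates a per-machine server certificate signed by the local CA, with the SAN list shown in the log (loopback, the guest IP, and the hostnames). Below is a compressed sketch of that kind of SAN-bearing certificate generation using Go's crypto/x509; the file paths, key size, and validity period are assumptions for illustration, not minikube's exact settings.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"log"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func check(err error) {
    	if err != nil {
    		log.Fatal(err)
    	}
    }

    func main() {
    	// Load the CA certificate and PKCS#1 private key (paths are illustrative;
    	// a real CA key may also be PKCS#8 encoded).
    	caCertPEM, err := os.ReadFile("ca.pem")
    	check(err)
    	caKeyPEM, err := os.ReadFile("ca-key.pem")
    	check(err)
    	caBlock, _ := pem.Decode(caCertPEM)
    	caCert, err := x509.ParseCertificate(caBlock.Bytes)
    	check(err)
    	keyBlock, _ := pem.Decode(caKeyPEM)
    	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
    	check(err)

    	// Fresh server key plus a template carrying the SANs seen in the log above.
    	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
    	check(err)
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-455032"}},
    		NotBefore:    time.Now().Add(-time.Hour),
    		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     []string{"localhost", "minikube", "test-preload-455032"},
    		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.155")},
    	}

    	// Sign the server certificate with the CA and write it out as PEM.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
    	check(err)
    	out, err := os.Create("server.pem")
    	check(err)
    	defer out.Close()
    	check(pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der}))
    }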
	I1205 21:19:27.253726  333265 provision.go:177] copyRemoteCerts
	I1205 21:19:27.253826  333265 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:19:27.253867  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHHostname
	I1205 21:19:27.256906  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:27.257234  333265 main.go:141] libmachine: (test-preload-455032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:62:98", ip: ""} in network mk-test-preload-455032: {Iface:virbr1 ExpiryTime:2024-12-05 22:19:17 +0000 UTC Type:0 Mac:52:54:00:6d:62:98 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:test-preload-455032 Clientid:01:52:54:00:6d:62:98}
	I1205 21:19:27.257268  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined IP address 192.168.39.155 and MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:27.257521  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHPort
	I1205 21:19:27.257800  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHKeyPath
	I1205 21:19:27.258007  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHUsername
	I1205 21:19:27.258154  333265 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/test-preload-455032/id_rsa Username:docker}
	I1205 21:19:27.339880  333265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:19:27.363933  333265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 21:19:27.387376  333265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:19:27.410693  333265 provision.go:87] duration metric: took 295.378843ms to configureAuth
	I1205 21:19:27.410725  333265 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:19:27.410898  333265 config.go:182] Loaded profile config "test-preload-455032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1205 21:19:27.410983  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHHostname
	I1205 21:19:27.413804  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:27.414152  333265 main.go:141] libmachine: (test-preload-455032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:62:98", ip: ""} in network mk-test-preload-455032: {Iface:virbr1 ExpiryTime:2024-12-05 22:19:17 +0000 UTC Type:0 Mac:52:54:00:6d:62:98 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:test-preload-455032 Clientid:01:52:54:00:6d:62:98}
	I1205 21:19:27.414179  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined IP address 192.168.39.155 and MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:27.414423  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHPort
	I1205 21:19:27.414631  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHKeyPath
	I1205 21:19:27.414789  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHKeyPath
	I1205 21:19:27.414892  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHUsername
	I1205 21:19:27.415023  333265 main.go:141] libmachine: Using SSH client type: native
	I1205 21:19:27.415186  333265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I1205 21:19:27.415200  333265 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:19:27.631176  333265 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:19:27.631203  333265 machine.go:96] duration metric: took 863.723875ms to provisionDockerMachine
	I1205 21:19:27.631216  333265 start.go:293] postStartSetup for "test-preload-455032" (driver="kvm2")
	I1205 21:19:27.631235  333265 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:19:27.631260  333265 main.go:141] libmachine: (test-preload-455032) Calling .DriverName
	I1205 21:19:27.631631  333265 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:19:27.631690  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHHostname
	I1205 21:19:27.634708  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:27.635093  333265 main.go:141] libmachine: (test-preload-455032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:62:98", ip: ""} in network mk-test-preload-455032: {Iface:virbr1 ExpiryTime:2024-12-05 22:19:17 +0000 UTC Type:0 Mac:52:54:00:6d:62:98 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:test-preload-455032 Clientid:01:52:54:00:6d:62:98}
	I1205 21:19:27.635129  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined IP address 192.168.39.155 and MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:27.635311  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHPort
	I1205 21:19:27.635533  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHKeyPath
	I1205 21:19:27.635695  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHUsername
	I1205 21:19:27.635833  333265 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/test-preload-455032/id_rsa Username:docker}
	I1205 21:19:27.716425  333265 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:19:27.720586  333265 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:19:27.720620  333265 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:19:27.720698  333265 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:19:27.720785  333265 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:19:27.720889  333265 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:19:27.730539  333265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:19:27.756059  333265 start.go:296] duration metric: took 124.815398ms for postStartSetup
	I1205 21:19:27.756108  333265 fix.go:56] duration metric: took 20.852442601s for fixHost
	I1205 21:19:27.756132  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHHostname
	I1205 21:19:27.759026  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:27.759486  333265 main.go:141] libmachine: (test-preload-455032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:62:98", ip: ""} in network mk-test-preload-455032: {Iface:virbr1 ExpiryTime:2024-12-05 22:19:17 +0000 UTC Type:0 Mac:52:54:00:6d:62:98 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:test-preload-455032 Clientid:01:52:54:00:6d:62:98}
	I1205 21:19:27.759520  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined IP address 192.168.39.155 and MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:27.759727  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHPort
	I1205 21:19:27.759972  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHKeyPath
	I1205 21:19:27.760126  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHKeyPath
	I1205 21:19:27.760276  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHUsername
	I1205 21:19:27.760447  333265 main.go:141] libmachine: Using SSH client type: native
	I1205 21:19:27.760698  333265 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.155 22 <nil> <nil>}
	I1205 21:19:27.760714  333265 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:19:27.862891  333265 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733433567.820235255
	
	I1205 21:19:27.862924  333265 fix.go:216] guest clock: 1733433567.820235255
	I1205 21:19:27.862935  333265 fix.go:229] Guest: 2024-12-05 21:19:27.820235255 +0000 UTC Remote: 2024-12-05 21:19:27.756112867 +0000 UTC m=+27.806199125 (delta=64.122388ms)
	I1205 21:19:27.862963  333265 fix.go:200] guest clock delta is within tolerance: 64.122388ms
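
The fix.go lines above compare the guest's `date +%s.%N` output against the host clock and only act when the delta exceeds a tolerance. A small sketch of that comparison follows, assuming the raw seconds.nanoseconds string has already been read from the guest and using a hypothetical one-second threshold (the float parse loses sub-microsecond precision, which is fine for a tolerance check).

    package main

    import (
    	"fmt"
    	"strconv"
    	"time"
    )

    // parseGuestClock converts the "seconds.nanoseconds" output of
    // `date +%s.%N` into a time.Time.
    func parseGuestClock(raw string) (time.Time, error) {
    	secs, err := strconv.ParseFloat(raw, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	sec := int64(secs)
    	nsec := int64((secs - float64(sec)) * 1e9)
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1733433567.820235255")
    	if err != nil {
    		panic(err)
    	}
    	host := time.Now()
    	delta := host.Sub(guest)
    	if delta < 0 {
    		delta = -delta
    	}
    	const tolerance = time.Second // illustrative threshold
    	if delta <= tolerance {
    		fmt.Printf("guest clock delta %s is within tolerance\n", delta)
    	} else {
    		fmt.Printf("guest clock delta %s exceeds tolerance, would resync\n", delta)
    	}
    }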
	I1205 21:19:27.862970  333265 start.go:83] releasing machines lock for "test-preload-455032", held for 20.959323495s
	I1205 21:19:27.863014  333265 main.go:141] libmachine: (test-preload-455032) Calling .DriverName
	I1205 21:19:27.863330  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetIP
	I1205 21:19:27.866227  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:27.866548  333265 main.go:141] libmachine: (test-preload-455032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:62:98", ip: ""} in network mk-test-preload-455032: {Iface:virbr1 ExpiryTime:2024-12-05 22:19:17 +0000 UTC Type:0 Mac:52:54:00:6d:62:98 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:test-preload-455032 Clientid:01:52:54:00:6d:62:98}
	I1205 21:19:27.866581  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined IP address 192.168.39.155 and MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:27.866754  333265 main.go:141] libmachine: (test-preload-455032) Calling .DriverName
	I1205 21:19:27.867335  333265 main.go:141] libmachine: (test-preload-455032) Calling .DriverName
	I1205 21:19:27.867528  333265 main.go:141] libmachine: (test-preload-455032) Calling .DriverName
	I1205 21:19:27.867682  333265 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:19:27.867727  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHHostname
	I1205 21:19:27.867754  333265 ssh_runner.go:195] Run: cat /version.json
	I1205 21:19:27.867784  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHHostname
	I1205 21:19:27.870447  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:27.870839  333265 main.go:141] libmachine: (test-preload-455032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:62:98", ip: ""} in network mk-test-preload-455032: {Iface:virbr1 ExpiryTime:2024-12-05 22:19:17 +0000 UTC Type:0 Mac:52:54:00:6d:62:98 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:test-preload-455032 Clientid:01:52:54:00:6d:62:98}
	I1205 21:19:27.870873  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined IP address 192.168.39.155 and MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:27.870890  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:27.871059  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHPort
	I1205 21:19:27.871261  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHKeyPath
	I1205 21:19:27.871342  333265 main.go:141] libmachine: (test-preload-455032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:62:98", ip: ""} in network mk-test-preload-455032: {Iface:virbr1 ExpiryTime:2024-12-05 22:19:17 +0000 UTC Type:0 Mac:52:54:00:6d:62:98 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:test-preload-455032 Clientid:01:52:54:00:6d:62:98}
	I1205 21:19:27.871365  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined IP address 192.168.39.155 and MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:27.871441  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHUsername
	I1205 21:19:27.871546  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHPort
	I1205 21:19:27.871630  333265 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/test-preload-455032/id_rsa Username:docker}
	I1205 21:19:27.871710  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHKeyPath
	I1205 21:19:27.871858  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHUsername
	I1205 21:19:27.871990  333265 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/test-preload-455032/id_rsa Username:docker}
	I1205 21:19:27.946856  333265 ssh_runner.go:195] Run: systemctl --version
	I1205 21:19:27.972055  333265 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:19:28.117313  333265 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:19:28.123043  333265 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:19:28.123142  333265 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:19:28.140381  333265 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:19:28.140423  333265 start.go:495] detecting cgroup driver to use...
	I1205 21:19:28.140494  333265 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:19:28.159466  333265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:19:28.174184  333265 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:19:28.174257  333265 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:19:28.187941  333265 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:19:28.202187  333265 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:19:28.313935  333265 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:19:28.464315  333265 docker.go:233] disabling docker service ...
	I1205 21:19:28.464401  333265 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:19:28.478491  333265 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:19:28.491379  333265 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:19:28.619175  333265 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:19:28.734972  333265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:19:28.748872  333265 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:19:28.767903  333265 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I1205 21:19:28.768002  333265 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:19:28.778880  333265 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:19:28.778961  333265 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:19:28.789639  333265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:19:28.800308  333265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:19:28.810797  333265 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:19:28.821478  333265 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:19:28.832128  333265 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:19:28.849192  333265 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:19:28.859760  333265 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:19:28.869276  333265 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:19:28.869350  333265 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:19:28.881828  333265 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
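
The sequence just above is a probe-then-fallback: the `sysctl net.bridge.bridge-nf-call-iptables` check fails because br_netfilter is not loaded, so the module is loaded with modprobe and IPv4 forwarding is enabled for pod networking. A rough equivalent in Go, assuming the commands run locally on the node rather than over SSH as minikube actually does:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a command and returns its error; output is discarded
    // because only success/failure matters for this probe.
    func run(name string, args ...string) error {
    	return exec.Command(name, args...).Run()
    }

    func main() {
    	// Probe: does the bridge netfilter sysctl exist yet?
    	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
    		fmt.Println("sysctl probe failed, loading br_netfilter:", err)
    		if err := run("sudo", "modprobe", "br_netfilter"); err != nil {
    			fmt.Println("modprobe failed:", err)
    			return
    		}
    	}
    	// Always make sure IPv4 forwarding is on for pod networking.
    	if err := run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
    		fmt.Println("enabling ip_forward failed:", err)
    	}
    }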
	I1205 21:19:28.891867  333265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:19:29.000729  333265 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:19:29.089639  333265 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:19:29.089747  333265 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:19:29.094589  333265 start.go:563] Will wait 60s for crictl version
	I1205 21:19:29.094671  333265 ssh_runner.go:195] Run: which crictl
	I1205 21:19:29.098290  333265 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:19:29.134836  333265 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:19:29.134922  333265 ssh_runner.go:195] Run: crio --version
	I1205 21:19:29.166048  333265 ssh_runner.go:195] Run: crio --version
	I1205 21:19:29.195831  333265 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.29.1 ...
	I1205 21:19:29.197158  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetIP
	I1205 21:19:29.200212  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:29.200577  333265 main.go:141] libmachine: (test-preload-455032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:62:98", ip: ""} in network mk-test-preload-455032: {Iface:virbr1 ExpiryTime:2024-12-05 22:19:17 +0000 UTC Type:0 Mac:52:54:00:6d:62:98 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:test-preload-455032 Clientid:01:52:54:00:6d:62:98}
	I1205 21:19:29.200612  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined IP address 192.168.39.155 and MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:29.200814  333265 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 21:19:29.205215  333265 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:19:29.218008  333265 kubeadm.go:883] updating cluster {Name:test-preload-455032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-455032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:19:29.218157  333265 preload.go:131] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I1205 21:19:29.218223  333265 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:19:29.252211  333265 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1205 21:19:29.252312  333265 ssh_runner.go:195] Run: which lz4
	I1205 21:19:29.256265  333265 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:19:29.260294  333265 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:19:29.260333  333265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I1205 21:19:30.646670  333265 crio.go:462] duration metric: took 1.390444556s to copy over tarball
	I1205 21:19:30.646759  333265 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 21:19:33.085235  333265 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.438432483s)
	I1205 21:19:33.085275  333265 crio.go:469] duration metric: took 2.438564353s to extract the tarball
	I1205 21:19:33.085287  333265 ssh_runner.go:146] rm: /preloaded.tar.lz4
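
The preload step above is a check-copy-extract-cleanup flow: stat the tarball on the guest, copy it over when missing, extract it into /var with lz4-aware tar, then delete it. Below is a condensed sketch of the same flow, with the paths taken from the log; the commands run locally for illustration, whereas minikube actually copies over SSH and runs the extraction remotely.

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    const (
    	localTarball  = "/home/jenkins/.minikube/cache/preloaded-images.tar.lz4" // illustrative path
    	remoteTarball = "/preloaded.tar.lz4"
    )

    func main() {
    	// 1. Existence check: skip the copy if the tarball is already in place.
    	if _, err := os.Stat(remoteTarball); err != nil {
    		fmt.Println("tarball missing, copying:", err)
    		if out, err := exec.Command("sudo", "cp", localTarball, remoteTarball).CombinedOutput(); err != nil {
    			fmt.Printf("copy failed: %v\n%s", err, out)
    			return
    		}
    	}
    	// 2. Extract into /var, preserving xattrs so image layers keep capabilities.
    	extract := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
    		"-I", "lz4", "-C", "/var", "-xf", remoteTarball)
    	if out, err := extract.CombinedOutput(); err != nil {
    		fmt.Printf("extract failed: %v\n%s", err, out)
    		return
    	}
    	// 3. Clean up the tarball once the layers are unpacked.
    	if out, err := exec.Command("sudo", "rm", "-f", remoteTarball).CombinedOutput(); err != nil {
    		fmt.Printf("cleanup failed: %v\n%s", err, out)
    	}
    }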
	I1205 21:19:33.128244  333265 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:19:33.176072  333265 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I1205 21:19:33.176101  333265 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 21:19:33.176212  333265 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:19:33.176219  333265 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I1205 21:19:33.176253  333265 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I1205 21:19:33.176294  333265 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 21:19:33.176303  333265 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 21:19:33.176355  333265 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I1205 21:19:33.176360  333265 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I1205 21:19:33.176270  333265 image.go:135] retrieving image: registry.k8s.io/pause:3.7
	I1205 21:19:33.177934  333265 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:19:33.177956  333265 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I1205 21:19:33.177934  333265 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I1205 21:19:33.177937  333265 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 21:19:33.177988  333265 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I1205 21:19:33.178000  333265 image.go:178] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I1205 21:19:33.178017  333265 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I1205 21:19:33.177988  333265 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 21:19:33.332706  333265 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I1205 21:19:33.332802  333265 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I1205 21:19:33.339440  333265 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I1205 21:19:33.343931  333265 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 21:19:33.348670  333265 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I1205 21:19:33.384740  333265 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I1205 21:19:33.438952  333265 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I1205 21:19:33.444027  333265 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I1205 21:19:33.444082  333265 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I1205 21:19:33.444117  333265 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I1205 21:19:33.444089  333265 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I1205 21:19:33.444171  333265 ssh_runner.go:195] Run: which crictl
	I1205 21:19:33.444185  333265 ssh_runner.go:195] Run: which crictl
	I1205 21:19:33.473474  333265 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I1205 21:19:33.473527  333265 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I1205 21:19:33.473598  333265 ssh_runner.go:195] Run: which crictl
	I1205 21:19:33.489896  333265 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I1205 21:19:33.489977  333265 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 21:19:33.490048  333265 ssh_runner.go:195] Run: which crictl
	I1205 21:19:33.490642  333265 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I1205 21:19:33.490687  333265 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I1205 21:19:33.490749  333265 ssh_runner.go:195] Run: which crictl
	I1205 21:19:33.490683  333265 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I1205 21:19:33.490841  333265 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I1205 21:19:33.490888  333265 ssh_runner.go:195] Run: which crictl
	I1205 21:19:33.515266  333265 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I1205 21:19:33.515326  333265 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I1205 21:19:33.515354  333265 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1205 21:19:33.515404  333265 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1205 21:19:33.515414  333265 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1205 21:19:33.515366  333265 ssh_runner.go:195] Run: which crictl
	I1205 21:19:33.515484  333265 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1205 21:19:33.515452  333265 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 21:19:33.515558  333265 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1205 21:19:33.644887  333265 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1205 21:19:33.644955  333265 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 21:19:33.644985  333265 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1205 21:19:33.645006  333265 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1205 21:19:33.645146  333265 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1205 21:19:33.645186  333265 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1205 21:19:33.645151  333265 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1205 21:19:33.796996  333265 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1205 21:19:33.797025  333265 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I1205 21:19:33.797124  333265 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I1205 21:19:33.797124  333265 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I1205 21:19:33.797199  333265 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I1205 21:19:33.797319  333265 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I1205 21:19:33.797373  333265 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I1205 21:19:33.949620  333265 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I1205 21:19:33.949741  333265 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I1205 21:19:33.949750  333265 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I1205 21:19:33.949796  333265 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I1205 21:19:33.949820  333265 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I1205 21:19:33.949886  333265 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I1205 21:19:33.949893  333265 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1205 21:19:33.949919  333265 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I1205 21:19:33.949985  333265 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I1205 21:19:33.950008  333265 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1205 21:19:33.950015  333265 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I1205 21:19:33.950056  333265 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I1205 21:19:33.950077  333265 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I1205 21:19:33.964925  333265 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I1205 21:19:33.964969  333265 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1205 21:19:33.965035  333265 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I1205 21:19:33.968780  333265 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I1205 21:19:33.968852  333265 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I1205 21:19:33.969010  333265 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I1205 21:19:33.970979  333265 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I1205 21:19:33.996098  333265 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I1205 21:19:33.996217  333265 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I1205 21:19:33.996333  333265 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1205 21:19:34.334025  333265 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:19:36.732935  333265 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.767868384s)
	I1205 21:19:36.732986  333265 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I1205 21:19:36.732985  333265 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4: (2.736632398s)
	I1205 21:19:36.733012  333265 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I1205 21:19:36.733014  333265 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.7
	I1205 21:19:36.733071  333265 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I1205 21:19:36.733081  333265 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (2.399019221s)
	I1205 21:19:36.878257  333265 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I1205 21:19:36.878319  333265 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I1205 21:19:36.878366  333265 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I1205 21:19:37.223592  333265 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I1205 21:19:37.223653  333265 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I1205 21:19:37.223725  333265 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I1205 21:19:38.069137  333265 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I1205 21:19:38.069192  333265 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I1205 21:19:38.069249  333265 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I1205 21:19:40.120265  333265 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.05099061s)
	I1205 21:19:40.120298  333265 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I1205 21:19:40.120328  333265 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1205 21:19:40.120392  333265 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I1205 21:19:40.574753  333265 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I1205 21:19:40.574812  333265 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1205 21:19:40.574870  333265 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I1205 21:19:41.331011  333265 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I1205 21:19:41.331073  333265 cache_images.go:123] Successfully loaded all cached images
	I1205 21:19:41.331083  333265 cache_images.go:92] duration metric: took 8.154962777s to LoadCachedImages
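The phase that ends here restores the preload's cached images: each tarball under /var/lib/minikube/images is either skipped (already present) or loaded into CRI-O's image store with "sudo podman load -i". A minimal sketch of that load step in Go, assuming podman is on PATH and the tarball exists; this is illustrative only, not minikube's actual loader code:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

// loadImage mirrors the "sudo podman load -i <tarball>" runs in the log above.
func loadImage(tarball string) error {
	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
	}
	return nil
}

func main() {
	// Hypothetical tarball path; the real ones live under /var/lib/minikube/images.
	if err := loadImage("/var/lib/minikube/images/pause_3.7"); err != nil {
		log.Fatal(err)
	}
	fmt.Println("image loaded")
}

Loading the tarballs one at a time in this way is what produces the per-image "Transferred and loaded ... from cache" lines and the 8.15s LoadCachedImages total above.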
	I1205 21:19:41.331101  333265 kubeadm.go:934] updating node { 192.168.39.155 8443 v1.24.4 crio true true} ...
	I1205 21:19:41.331331  333265 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-455032 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.155
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-455032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
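The kubelet drop-in printed above is generated text: a [Service] override filled in with the versioned kubelet binary, the CRI-O socket, the hostname override and the node IP. A rough sketch of rendering such a drop-in with text/template, using the values from this run (the struct fields are illustrative names, not minikube's):

package main

import (
	"log"
	"os"
	"text/template"
)

// A simplified version of the [Service] override written to
// /etc/systemd/system/kubelet.service.d/10-kubeadm.conf.
const kubeletDropIn = `[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.Hostname}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	err := tmpl.Execute(os.Stdout, struct {
		KubeletPath, CRISocket, Hostname, NodeIP string
	}{
		KubeletPath: "/var/lib/minikube/binaries/v1.24.4/kubelet",
		CRISocket:   "unix:///var/run/crio/crio.sock",
		Hostname:    "test-preload-455032",
		NodeIP:      "192.168.39.155",
	})
	if err != nil {
		log.Fatal(err)
	}
}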
	I1205 21:19:41.331437  333265 ssh_runner.go:195] Run: crio config
	I1205 21:19:41.387485  333265 cni.go:84] Creating CNI manager for ""
	I1205 21:19:41.387514  333265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:19:41.387525  333265 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:19:41.387544  333265 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.155 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-455032 NodeName:test-preload-455032 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.155"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.155 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:19:41.387671  333265 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.155
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-455032"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.155
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.155"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:19:41.387757  333265 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I1205 21:19:41.398330  333265 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:19:41.398410  333265 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:19:41.409251  333265 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I1205 21:19:41.427889  333265 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:19:41.446630  333265 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
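At this point the full kubeadm config printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration as one multi-document YAML) has been copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A small stdlib-only sketch for sanity-checking such a file by listing the kind of each document, assuming the standard "---" separators shown above:

package main

import (
	"fmt"
	"os"
	"strings"
)

// listKinds prints the "kind:" of every document in a multi-document YAML file,
// e.g. the kubeadm config written to /var/tmp/minikube/kubeadm.yaml.new above.
func listKinds(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	for _, doc := range strings.Split(string(data), "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind:") {
				fmt.Println(strings.TrimSpace(strings.TrimPrefix(line, "kind:")))
			}
		}
	}
	return nil
}

func main() {
	if err := listKinds("/var/tmp/minikube/kubeadm.yaml.new"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}

For the config above this would print InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration.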
	I1205 21:19:41.464106  333265 ssh_runner.go:195] Run: grep 192.168.39.155	control-plane.minikube.internal$ /etc/hosts
	I1205 21:19:41.468045  333265 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.155	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
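The bash one-liner above pins control-plane.minikube.internal to the node IP in /etc/hosts: it filters out any existing line for that host and appends a fresh "192.168.39.155<tab>control-plane.minikube.internal" entry. The same edit expressed in Go, as a sketch (writing /etc/hosts normally requires root; the IP and hostname are taken from this run):

package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

// pinHost rewrites hostsPath so that host resolves to ip, removing any stale entry
// first, matching the grep -v / echo pipeline in the log above.
func pinHost(hostsPath, ip, host string) error {
	data, err := os.ReadFile(hostsPath)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the old control-plane entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.39.155", "control-plane.minikube.internal"); err != nil {
		log.Fatal(err)
	}
}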
	I1205 21:19:41.480315  333265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:19:41.606867  333265 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:19:41.624019  333265 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/test-preload-455032 for IP: 192.168.39.155
	I1205 21:19:41.624053  333265 certs.go:194] generating shared ca certs ...
	I1205 21:19:41.624080  333265 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:19:41.624289  333265 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:19:41.624354  333265 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:19:41.624368  333265 certs.go:256] generating profile certs ...
	I1205 21:19:41.624479  333265 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/test-preload-455032/client.key
	I1205 21:19:41.624582  333265 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/test-preload-455032/apiserver.key.b0267fdd
	I1205 21:19:41.624633  333265 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/test-preload-455032/proxy-client.key
	I1205 21:19:41.624741  333265 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:19:41.624779  333265 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:19:41.624794  333265 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:19:41.624894  333265 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:19:41.624934  333265 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:19:41.624956  333265 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:19:41.625000  333265 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:19:41.625712  333265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:19:41.669877  333265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:19:41.696662  333265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:19:41.734723  333265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:19:41.759347  333265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/test-preload-455032/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1205 21:19:41.784055  333265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/test-preload-455032/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 21:19:41.815423  333265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/test-preload-455032/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:19:41.851594  333265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/test-preload-455032/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 21:19:41.877594  333265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:19:41.904919  333265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:19:41.931568  333265 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:19:41.957676  333265 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:19:41.976763  333265 ssh_runner.go:195] Run: openssl version
	I1205 21:19:41.982922  333265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:19:41.994499  333265 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:19:41.999248  333265 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:19:41.999333  333265 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:19:42.005221  333265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:19:42.016912  333265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:19:42.028145  333265 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:19:42.032699  333265 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:19:42.032774  333265 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:19:42.038541  333265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:19:42.049748  333265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:19:42.060838  333265 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:19:42.065439  333265 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:19:42.065514  333265 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:19:42.071236  333265 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
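The three certificate blocks above follow one pattern per CA: copy the PEM into /usr/share/ca-certificates, ask openssl for its subject hash, and point /etc/ssl/certs/<hash>.0 at it so the system trust store picks it up. A sketch of that last step, assuming openssl is installed and the process is allowed to write under /etc/ssl/certs:

package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

// trustCert computes the OpenSSL subject hash of a CA certificate and creates the
// /etc/ssl/certs/<hash>.0 symlink, mirroring the openssl/ln pairs in the log above.
func trustCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	os.Remove(link) // emulate "ln -fs": replace an existing link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		log.Fatal(err)
	}
}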
	I1205 21:19:42.082343  333265 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:19:42.087410  333265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:19:42.093770  333265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:19:42.099924  333265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:19:42.106363  333265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:19:42.112442  333265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:19:42.118771  333265 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
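Each "openssl x509 -noout -in <cert> -checkend 86400" run above asks a single question: does this certificate expire within the next 24 hours? The equivalent check in pure Go with crypto/x509, sketched against one of the certificate paths from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same check "openssl x509 -checkend 86400" performs for a 24h window.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("expires within 24h:", soon)
}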
	I1205 21:19:42.124862  333265 kubeadm.go:392] StartCluster: {Name:test-preload-455032 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-455032 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:19:42.124975  333265 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:19:42.125031  333265 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:19:42.162286  333265 cri.go:89] found id: ""
	I1205 21:19:42.162366  333265 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:19:42.172746  333265 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:19:42.172776  333265 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:19:42.172827  333265 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:19:42.183186  333265 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:19:42.183749  333265 kubeconfig.go:47] verify endpoint returned: get endpoint: "test-preload-455032" does not appear in /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:19:42.183908  333265 kubeconfig.go:62] /home/jenkins/minikube-integration/20053-293485/kubeconfig needs updating (will repair): [kubeconfig missing "test-preload-455032" cluster setting kubeconfig missing "test-preload-455032" context setting]
	I1205 21:19:42.184301  333265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:19:42.185068  333265 kapi.go:59] client config for test-preload-455032: &rest.Config{Host:"https://192.168.39.155:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/test-preload-455032/client.crt", KeyFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/test-preload-455032/client.key", CAFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 21:19:42.185868  333265 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:19:42.196234  333265 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.155
	I1205 21:19:42.196279  333265 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:19:42.196304  333265 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:19:42.196370  333265 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:19:42.235685  333265 cri.go:89] found id: ""
	I1205 21:19:42.235789  333265 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:19:42.253051  333265 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:19:42.263179  333265 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:19:42.263201  333265 kubeadm.go:157] found existing configuration files:
	
	I1205 21:19:42.263253  333265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:19:42.273130  333265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:19:42.273211  333265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:19:42.283144  333265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:19:42.292685  333265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:19:42.292763  333265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:19:42.302499  333265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:19:42.311941  333265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:19:42.312017  333265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:19:42.321692  333265 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:19:42.330915  333265 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:19:42.330978  333265 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
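The grep/rm sequence above is the stale-config cleanup: any /etc/kubernetes/*.conf that does not already reference https://control-plane.minikube.internal:8443 is deleted so the following "kubeadm init phase kubeconfig" can regenerate it (in this run the files were simply absent). A compact sketch of that check:

package main

import (
	"bytes"
	"fmt"
	"os"
)

const endpoint = "https://control-plane.minikube.internal:8443"

// cleanStale removes each kubeconfig that is missing the expected control-plane
// endpoint, mirroring the grep-then-rm sequence in the log above.
func cleanStale(paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			continue // config already points at the right endpoint
		}
		fmt.Printf("removing stale or missing %s\n", p)
		os.Remove(p) // ignore the error if the file never existed
	}
}

func main() {
	cleanStale([]string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}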
	I1205 21:19:42.340605  333265 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:19:42.350412  333265 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:19:42.441978  333265 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:19:43.247864  333265 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:19:43.511956  333265 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:19:43.589180  333265 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:19:43.671427  333265 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:19:43.671526  333265 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:19:44.171665  333265 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:19:44.672600  333265 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:19:44.697726  333265 api_server.go:72] duration metric: took 1.02629977s to wait for apiserver process to appear ...
	I1205 21:19:44.697763  333265 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:19:44.697790  333265 api_server.go:253] Checking apiserver healthz at https://192.168.39.155:8443/healthz ...
	I1205 21:19:44.698345  333265 api_server.go:269] stopped: https://192.168.39.155:8443/healthz: Get "https://192.168.39.155:8443/healthz": dial tcp 192.168.39.155:8443: connect: connection refused
	I1205 21:19:45.198205  333265 api_server.go:253] Checking apiserver healthz at https://192.168.39.155:8443/healthz ...
	I1205 21:19:45.198895  333265 api_server.go:269] stopped: https://192.168.39.155:8443/healthz: Get "https://192.168.39.155:8443/healthz": dial tcp 192.168.39.155:8443: connect: connection refused
	I1205 21:19:45.698586  333265 api_server.go:253] Checking apiserver healthz at https://192.168.39.155:8443/healthz ...
	I1205 21:19:48.565826  333265 api_server.go:279] https://192.168.39.155:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:19:48.565861  333265 api_server.go:103] status: https://192.168.39.155:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:19:48.565882  333265 api_server.go:253] Checking apiserver healthz at https://192.168.39.155:8443/healthz ...
	I1205 21:19:48.590422  333265 api_server.go:279] https://192.168.39.155:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:19:48.590456  333265 api_server.go:103] status: https://192.168.39.155:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:19:48.698819  333265 api_server.go:253] Checking apiserver healthz at https://192.168.39.155:8443/healthz ...
	I1205 21:19:48.730815  333265 api_server.go:279] https://192.168.39.155:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:19:48.730866  333265 api_server.go:103] status: https://192.168.39.155:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:19:49.198362  333265 api_server.go:253] Checking apiserver healthz at https://192.168.39.155:8443/healthz ...
	I1205 21:19:49.205918  333265 api_server.go:279] https://192.168.39.155:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:19:49.205957  333265 api_server.go:103] status: https://192.168.39.155:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:19:49.698531  333265 api_server.go:253] Checking apiserver healthz at https://192.168.39.155:8443/healthz ...
	I1205 21:19:49.705324  333265 api_server.go:279] https://192.168.39.155:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:19:49.705357  333265 api_server.go:103] status: https://192.168.39.155:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:19:50.198512  333265 api_server.go:253] Checking apiserver healthz at https://192.168.39.155:8443/healthz ...
	I1205 21:19:50.205809  333265 api_server.go:279] https://192.168.39.155:8443/healthz returned 200:
	ok
	I1205 21:19:50.213111  333265 api_server.go:141] control plane version: v1.24.4
	I1205 21:19:50.213145  333265 api_server.go:131] duration metric: took 5.515374692s to wait for apiserver health ...
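The healthz loop above polls https://192.168.39.155:8443/healthz every 500ms, treating 403 (anonymous access blocked while RBAC bootstraps) and 500 (post-start hooks still running) as "not yet" until the endpoint finally returns 200 "ok". A sketch of such a wait loop; TLS verification is skipped only because this example does not load the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"time"
)

// waitHealthz polls url until it returns HTTP 200 or the deadline passes.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify only because this sketch does not load the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Println("healthz not ready yet:", resp.Status)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := waitHealthz("https://192.168.39.155:8443/healthz", 4*time.Minute); err != nil {
		log.Fatal(err)
	}
	fmt.Println("apiserver healthy")
}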
	I1205 21:19:50.213156  333265 cni.go:84] Creating CNI manager for ""
	I1205 21:19:50.213170  333265 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:19:50.214806  333265 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:19:50.216114  333265 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:19:50.226289  333265 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 21:19:50.247921  333265 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:19:50.248057  333265 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1205 21:19:50.248099  333265 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1205 21:19:50.257018  333265 system_pods.go:59] 8 kube-system pods found
	I1205 21:19:50.257060  333265 system_pods.go:61] "coredns-6d4b75cb6d-7mzxj" [34ec55fb-3ca5-4d4e-9866-4b27a86f6004] Running
	I1205 21:19:50.257065  333265 system_pods.go:61] "coredns-6d4b75cb6d-tdwhd" [e7cb6abb-5f5b-4b5c-9c7f-73d40261989a] Running
	I1205 21:19:50.257069  333265 system_pods.go:61] "etcd-test-preload-455032" [1c722f29-bc3b-46a9-94ba-cda753ac2ec6] Running
	I1205 21:19:50.257075  333265 system_pods.go:61] "kube-apiserver-test-preload-455032" [61b30a0a-602a-4736-a147-b0e223371f4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 21:19:50.257080  333265 system_pods.go:61] "kube-controller-manager-test-preload-455032" [7c8c395a-4e96-4d34-912c-736ae95debf1] Running
	I1205 21:19:50.257084  333265 system_pods.go:61] "kube-proxy-xn2b8" [37dd4fa8-49fe-4640-87e5-a87e750cdd2a] Running
	I1205 21:19:50.257088  333265 system_pods.go:61] "kube-scheduler-test-preload-455032" [b1aab190-6232-4744-9847-0f23cda3fc1e] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 21:19:50.257093  333265 system_pods.go:61] "storage-provisioner" [e606d2c1-c57a-4a60-b224-a435994066bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 21:19:50.257101  333265 system_pods.go:74] duration metric: took 9.147012ms to wait for pod list to return data ...
	I1205 21:19:50.257110  333265 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:19:50.260609  333265 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:19:50.260641  333265 node_conditions.go:123] node cpu capacity is 2
	I1205 21:19:50.260662  333265 node_conditions.go:105] duration metric: took 3.546915ms to run NodePressure ...
	I1205 21:19:50.260687  333265 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:19:50.437012  333265 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 21:19:50.441867  333265 kubeadm.go:739] kubelet initialised
	I1205 21:19:50.441929  333265 kubeadm.go:740] duration metric: took 4.878687ms waiting for restarted kubelet to initialise ...
	I1205 21:19:50.441945  333265 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:19:50.454382  333265 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-7mzxj" in "kube-system" namespace to be "Ready" ...
	I1205 21:19:50.460923  333265 pod_ready.go:98] node "test-preload-455032" hosting pod "coredns-6d4b75cb6d-7mzxj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455032" has status "Ready":"False"
	I1205 21:19:50.460958  333265 pod_ready.go:82] duration metric: took 6.542741ms for pod "coredns-6d4b75cb6d-7mzxj" in "kube-system" namespace to be "Ready" ...
	E1205 21:19:50.460969  333265 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-455032" hosting pod "coredns-6d4b75cb6d-7mzxj" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455032" has status "Ready":"False"
	I1205 21:19:50.460977  333265 pod_ready.go:79] waiting up to 4m0s for pod "coredns-6d4b75cb6d-tdwhd" in "kube-system" namespace to be "Ready" ...
	I1205 21:19:50.466818  333265 pod_ready.go:98] node "test-preload-455032" hosting pod "coredns-6d4b75cb6d-tdwhd" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455032" has status "Ready":"False"
	I1205 21:19:50.466868  333265 pod_ready.go:82] duration metric: took 5.881667ms for pod "coredns-6d4b75cb6d-tdwhd" in "kube-system" namespace to be "Ready" ...
	E1205 21:19:50.466880  333265 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-455032" hosting pod "coredns-6d4b75cb6d-tdwhd" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455032" has status "Ready":"False"
	I1205 21:19:50.466889  333265 pod_ready.go:79] waiting up to 4m0s for pod "etcd-test-preload-455032" in "kube-system" namespace to be "Ready" ...
	I1205 21:19:50.472947  333265 pod_ready.go:98] node "test-preload-455032" hosting pod "etcd-test-preload-455032" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455032" has status "Ready":"False"
	I1205 21:19:50.472979  333265 pod_ready.go:82] duration metric: took 6.081779ms for pod "etcd-test-preload-455032" in "kube-system" namespace to be "Ready" ...
	E1205 21:19:50.472991  333265 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-455032" hosting pod "etcd-test-preload-455032" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455032" has status "Ready":"False"
	I1205 21:19:50.472999  333265 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-test-preload-455032" in "kube-system" namespace to be "Ready" ...
	I1205 21:19:50.652669  333265 pod_ready.go:98] node "test-preload-455032" hosting pod "kube-apiserver-test-preload-455032" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455032" has status "Ready":"False"
	I1205 21:19:50.652703  333265 pod_ready.go:82] duration metric: took 179.692239ms for pod "kube-apiserver-test-preload-455032" in "kube-system" namespace to be "Ready" ...
	E1205 21:19:50.652715  333265 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-455032" hosting pod "kube-apiserver-test-preload-455032" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455032" has status "Ready":"False"
	I1205 21:19:50.652728  333265 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-test-preload-455032" in "kube-system" namespace to be "Ready" ...
	I1205 21:19:51.052426  333265 pod_ready.go:98] node "test-preload-455032" hosting pod "kube-controller-manager-test-preload-455032" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455032" has status "Ready":"False"
	I1205 21:19:51.052466  333265 pod_ready.go:82] duration metric: took 399.725569ms for pod "kube-controller-manager-test-preload-455032" in "kube-system" namespace to be "Ready" ...
	E1205 21:19:51.052480  333265 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-455032" hosting pod "kube-controller-manager-test-preload-455032" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455032" has status "Ready":"False"
	I1205 21:19:51.052492  333265 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-xn2b8" in "kube-system" namespace to be "Ready" ...
	I1205 21:19:51.453314  333265 pod_ready.go:98] node "test-preload-455032" hosting pod "kube-proxy-xn2b8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455032" has status "Ready":"False"
	I1205 21:19:51.453358  333265 pod_ready.go:82] duration metric: took 400.852246ms for pod "kube-proxy-xn2b8" in "kube-system" namespace to be "Ready" ...
	E1205 21:19:51.453372  333265 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-455032" hosting pod "kube-proxy-xn2b8" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455032" has status "Ready":"False"
	I1205 21:19:51.453383  333265 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-test-preload-455032" in "kube-system" namespace to be "Ready" ...
	I1205 21:19:51.852888  333265 pod_ready.go:98] node "test-preload-455032" hosting pod "kube-scheduler-test-preload-455032" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455032" has status "Ready":"False"
	I1205 21:19:51.852926  333265 pod_ready.go:82] duration metric: took 399.533724ms for pod "kube-scheduler-test-preload-455032" in "kube-system" namespace to be "Ready" ...
	E1205 21:19:51.852940  333265 pod_ready.go:67] WaitExtra: waitPodCondition: node "test-preload-455032" hosting pod "kube-scheduler-test-preload-455032" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-455032" has status "Ready":"False"
	I1205 21:19:51.852951  333265 pod_ready.go:39] duration metric: took 1.410991167s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:19:51.852975  333265 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:19:51.864962  333265 ops.go:34] apiserver oom_adj: -16
	I1205 21:19:51.864993  333265 kubeadm.go:597] duration metric: took 9.69220917s to restartPrimaryControlPlane
	I1205 21:19:51.865008  333265 kubeadm.go:394] duration metric: took 9.740158329s to StartCluster
	I1205 21:19:51.865033  333265 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:19:51.865125  333265 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:19:51.866001  333265 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:19:51.866323  333265 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.155 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:19:51.866452  333265 config.go:182] Loaded profile config "test-preload-455032": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I1205 21:19:51.866411  333265 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:19:51.866519  333265 addons.go:69] Setting default-storageclass=true in profile "test-preload-455032"
	I1205 21:19:51.866521  333265 addons.go:69] Setting storage-provisioner=true in profile "test-preload-455032"
	I1205 21:19:51.866548  333265 addons.go:234] Setting addon storage-provisioner=true in "test-preload-455032"
	I1205 21:19:51.866557  333265 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-455032"
	W1205 21:19:51.866561  333265 addons.go:243] addon storage-provisioner should already be in state true
	I1205 21:19:51.866594  333265 host.go:66] Checking if "test-preload-455032" exists ...
	I1205 21:19:51.866913  333265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:19:51.866957  333265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:19:51.867014  333265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:19:51.867054  333265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:19:51.868073  333265 out.go:177] * Verifying Kubernetes components...
	I1205 21:19:51.869632  333265 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:19:51.882932  333265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41797
	I1205 21:19:51.882932  333265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33835
	I1205 21:19:51.883499  333265 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:19:51.883635  333265 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:19:51.883983  333265 main.go:141] libmachine: Using API Version  1
	I1205 21:19:51.884010  333265 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:19:51.884269  333265 main.go:141] libmachine: Using API Version  1
	I1205 21:19:51.884303  333265 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:19:51.884435  333265 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:19:51.884639  333265 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:19:51.884867  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetState
	I1205 21:19:51.885057  333265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:19:51.885104  333265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:19:51.887736  333265 kapi.go:59] client config for test-preload-455032: &rest.Config{Host:"https://192.168.39.155:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/test-preload-455032/client.crt", KeyFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/profiles/test-preload-455032/client.key", CAFile:"/home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x243b6e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1205 21:19:51.888098  333265 addons.go:234] Setting addon default-storageclass=true in "test-preload-455032"
	W1205 21:19:51.888121  333265 addons.go:243] addon default-storageclass should already be in state true
	I1205 21:19:51.888154  333265 host.go:66] Checking if "test-preload-455032" exists ...
	I1205 21:19:51.888564  333265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:19:51.888620  333265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:19:51.901979  333265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44231
	I1205 21:19:51.902624  333265 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:19:51.903263  333265 main.go:141] libmachine: Using API Version  1
	I1205 21:19:51.903295  333265 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:19:51.903667  333265 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:19:51.903893  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetState
	I1205 21:19:51.904513  333265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41107
	I1205 21:19:51.905050  333265 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:19:51.905656  333265 main.go:141] libmachine: Using API Version  1
	I1205 21:19:51.905687  333265 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:19:51.905974  333265 main.go:141] libmachine: (test-preload-455032) Calling .DriverName
	I1205 21:19:51.906075  333265 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:19:51.906687  333265 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:19:51.906747  333265 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:19:51.908116  333265 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:19:51.909765  333265 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:19:51.909790  333265 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 21:19:51.909815  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHHostname
	I1205 21:19:51.913390  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:51.913808  333265 main.go:141] libmachine: (test-preload-455032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:62:98", ip: ""} in network mk-test-preload-455032: {Iface:virbr1 ExpiryTime:2024-12-05 22:19:17 +0000 UTC Type:0 Mac:52:54:00:6d:62:98 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:test-preload-455032 Clientid:01:52:54:00:6d:62:98}
	I1205 21:19:51.913857  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined IP address 192.168.39.155 and MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:51.914019  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHPort
	I1205 21:19:51.914315  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHKeyPath
	I1205 21:19:51.914507  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHUsername
	I1205 21:19:51.914709  333265 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/test-preload-455032/id_rsa Username:docker}
	I1205 21:19:51.946445  333265 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33977
	I1205 21:19:51.947017  333265 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:19:51.947624  333265 main.go:141] libmachine: Using API Version  1
	I1205 21:19:51.947652  333265 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:19:51.947987  333265 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:19:51.948248  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetState
	I1205 21:19:51.949959  333265 main.go:141] libmachine: (test-preload-455032) Calling .DriverName
	I1205 21:19:51.950203  333265 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 21:19:51.950224  333265 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 21:19:51.950246  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHHostname
	I1205 21:19:51.953022  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:51.953405  333265 main.go:141] libmachine: (test-preload-455032) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6d:62:98", ip: ""} in network mk-test-preload-455032: {Iface:virbr1 ExpiryTime:2024-12-05 22:19:17 +0000 UTC Type:0 Mac:52:54:00:6d:62:98 Iaid: IPaddr:192.168.39.155 Prefix:24 Hostname:test-preload-455032 Clientid:01:52:54:00:6d:62:98}
	I1205 21:19:51.953436  333265 main.go:141] libmachine: (test-preload-455032) DBG | domain test-preload-455032 has defined IP address 192.168.39.155 and MAC address 52:54:00:6d:62:98 in network mk-test-preload-455032
	I1205 21:19:51.953627  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHPort
	I1205 21:19:51.953828  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHKeyPath
	I1205 21:19:51.954013  333265 main.go:141] libmachine: (test-preload-455032) Calling .GetSSHUsername
	I1205 21:19:51.954166  333265 sshutil.go:53] new ssh client: &{IP:192.168.39.155 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/test-preload-455032/id_rsa Username:docker}
	I1205 21:19:52.025202  333265 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:19:52.045881  333265 node_ready.go:35] waiting up to 6m0s for node "test-preload-455032" to be "Ready" ...
	I1205 21:19:52.105576  333265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 21:19:52.126567  333265 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:19:53.056292  333265 main.go:141] libmachine: Making call to close driver server
	I1205 21:19:53.056324  333265 main.go:141] libmachine: (test-preload-455032) Calling .Close
	I1205 21:19:53.056623  333265 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:19:53.056642  333265 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:19:53.056653  333265 main.go:141] libmachine: Making call to close driver server
	I1205 21:19:53.056660  333265 main.go:141] libmachine: (test-preload-455032) Calling .Close
	I1205 21:19:53.056916  333265 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:19:53.056935  333265 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:19:53.065722  333265 main.go:141] libmachine: Making call to close driver server
	I1205 21:19:53.065746  333265 main.go:141] libmachine: (test-preload-455032) Calling .Close
	I1205 21:19:53.066068  333265 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:19:53.066092  333265 main.go:141] libmachine: (test-preload-455032) DBG | Closing plugin on server side
	I1205 21:19:53.066096  333265 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:19:53.092553  333265 main.go:141] libmachine: Making call to close driver server
	I1205 21:19:53.092592  333265 main.go:141] libmachine: (test-preload-455032) Calling .Close
	I1205 21:19:53.092922  333265 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:19:53.092945  333265 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:19:53.092953  333265 main.go:141] libmachine: Making call to close driver server
	I1205 21:19:53.092958  333265 main.go:141] libmachine: (test-preload-455032) DBG | Closing plugin on server side
	I1205 21:19:53.092960  333265 main.go:141] libmachine: (test-preload-455032) Calling .Close
	I1205 21:19:53.093234  333265 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:19:53.093257  333265 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:19:53.095045  333265 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1205 21:19:53.096275  333265 addons.go:510] duration metric: took 1.229877419s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1205 21:19:54.049706  333265 node_ready.go:53] node "test-preload-455032" has status "Ready":"False"
	I1205 21:19:56.556179  333265 node_ready.go:53] node "test-preload-455032" has status "Ready":"False"
	I1205 21:19:59.053053  333265 node_ready.go:49] node "test-preload-455032" has status "Ready":"True"
	I1205 21:19:59.053093  333265 node_ready.go:38] duration metric: took 7.007154404s for node "test-preload-455032" to be "Ready" ...
	I1205 21:19:59.053109  333265 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:19:59.060549  333265 pod_ready.go:79] waiting up to 6m0s for pod "coredns-6d4b75cb6d-7mzxj" in "kube-system" namespace to be "Ready" ...
	I1205 21:19:59.066270  333265 pod_ready.go:93] pod "coredns-6d4b75cb6d-7mzxj" in "kube-system" namespace has status "Ready":"True"
	I1205 21:19:59.066322  333265 pod_ready.go:82] duration metric: took 5.714087ms for pod "coredns-6d4b75cb6d-7mzxj" in "kube-system" namespace to be "Ready" ...
	I1205 21:19:59.066337  333265 pod_ready.go:79] waiting up to 6m0s for pod "etcd-test-preload-455032" in "kube-system" namespace to be "Ready" ...
	I1205 21:19:59.073743  333265 pod_ready.go:93] pod "etcd-test-preload-455032" in "kube-system" namespace has status "Ready":"True"
	I1205 21:19:59.073771  333265 pod_ready.go:82] duration metric: took 7.426592ms for pod "etcd-test-preload-455032" in "kube-system" namespace to be "Ready" ...
	I1205 21:19:59.073785  333265 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-test-preload-455032" in "kube-system" namespace to be "Ready" ...
	I1205 21:19:59.078637  333265 pod_ready.go:93] pod "kube-apiserver-test-preload-455032" in "kube-system" namespace has status "Ready":"True"
	I1205 21:19:59.078661  333265 pod_ready.go:82] duration metric: took 4.869694ms for pod "kube-apiserver-test-preload-455032" in "kube-system" namespace to be "Ready" ...
	I1205 21:19:59.078671  333265 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-test-preload-455032" in "kube-system" namespace to be "Ready" ...
	I1205 21:19:59.083578  333265 pod_ready.go:93] pod "kube-controller-manager-test-preload-455032" in "kube-system" namespace has status "Ready":"True"
	I1205 21:19:59.083613  333265 pod_ready.go:82] duration metric: took 4.928166ms for pod "kube-controller-manager-test-preload-455032" in "kube-system" namespace to be "Ready" ...
	I1205 21:19:59.083625  333265 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xn2b8" in "kube-system" namespace to be "Ready" ...
	I1205 21:19:59.451741  333265 pod_ready.go:93] pod "kube-proxy-xn2b8" in "kube-system" namespace has status "Ready":"True"
	I1205 21:19:59.451774  333265 pod_ready.go:82] duration metric: took 368.141114ms for pod "kube-proxy-xn2b8" in "kube-system" namespace to be "Ready" ...
	I1205 21:19:59.451788  333265 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-test-preload-455032" in "kube-system" namespace to be "Ready" ...
	I1205 21:20:01.460105  333265 pod_ready.go:103] pod "kube-scheduler-test-preload-455032" in "kube-system" namespace has status "Ready":"False"
	I1205 21:20:02.958946  333265 pod_ready.go:93] pod "kube-scheduler-test-preload-455032" in "kube-system" namespace has status "Ready":"True"
	I1205 21:20:02.958974  333265 pod_ready.go:82] duration metric: took 3.507178903s for pod "kube-scheduler-test-preload-455032" in "kube-system" namespace to be "Ready" ...
	I1205 21:20:02.958986  333265 pod_ready.go:39] duration metric: took 3.905861432s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:20:02.959001  333265 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:20:02.959063  333265 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:20:02.974221  333265 api_server.go:72] duration metric: took 11.107859099s to wait for apiserver process to appear ...
	I1205 21:20:02.974261  333265 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:20:02.974287  333265 api_server.go:253] Checking apiserver healthz at https://192.168.39.155:8443/healthz ...
	I1205 21:20:02.979738  333265 api_server.go:279] https://192.168.39.155:8443/healthz returned 200:
	ok
	I1205 21:20:02.980788  333265 api_server.go:141] control plane version: v1.24.4
	I1205 21:20:02.980813  333265 api_server.go:131] duration metric: took 6.543034ms to wait for apiserver health ...
	I1205 21:20:02.980835  333265 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:20:02.985747  333265 system_pods.go:59] 7 kube-system pods found
	I1205 21:20:02.985778  333265 system_pods.go:61] "coredns-6d4b75cb6d-7mzxj" [34ec55fb-3ca5-4d4e-9866-4b27a86f6004] Running
	I1205 21:20:02.985785  333265 system_pods.go:61] "etcd-test-preload-455032" [1c722f29-bc3b-46a9-94ba-cda753ac2ec6] Running
	I1205 21:20:02.985790  333265 system_pods.go:61] "kube-apiserver-test-preload-455032" [61b30a0a-602a-4736-a147-b0e223371f4b] Running
	I1205 21:20:02.985795  333265 system_pods.go:61] "kube-controller-manager-test-preload-455032" [7c8c395a-4e96-4d34-912c-736ae95debf1] Running
	I1205 21:20:02.985799  333265 system_pods.go:61] "kube-proxy-xn2b8" [37dd4fa8-49fe-4640-87e5-a87e750cdd2a] Running
	I1205 21:20:02.985803  333265 system_pods.go:61] "kube-scheduler-test-preload-455032" [b1aab190-6232-4744-9847-0f23cda3fc1e] Running
	I1205 21:20:02.985811  333265 system_pods.go:61] "storage-provisioner" [e606d2c1-c57a-4a60-b224-a435994066bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 21:20:02.985821  333265 system_pods.go:74] duration metric: took 4.977436ms to wait for pod list to return data ...
	I1205 21:20:02.985838  333265 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:20:03.050170  333265 default_sa.go:45] found service account: "default"
	I1205 21:20:03.050202  333265 default_sa.go:55] duration metric: took 64.353524ms for default service account to be created ...
	I1205 21:20:03.050215  333265 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:20:03.253630  333265 system_pods.go:86] 7 kube-system pods found
	I1205 21:20:03.253673  333265 system_pods.go:89] "coredns-6d4b75cb6d-7mzxj" [34ec55fb-3ca5-4d4e-9866-4b27a86f6004] Running
	I1205 21:20:03.253682  333265 system_pods.go:89] "etcd-test-preload-455032" [1c722f29-bc3b-46a9-94ba-cda753ac2ec6] Running
	I1205 21:20:03.253687  333265 system_pods.go:89] "kube-apiserver-test-preload-455032" [61b30a0a-602a-4736-a147-b0e223371f4b] Running
	I1205 21:20:03.253693  333265 system_pods.go:89] "kube-controller-manager-test-preload-455032" [7c8c395a-4e96-4d34-912c-736ae95debf1] Running
	I1205 21:20:03.253698  333265 system_pods.go:89] "kube-proxy-xn2b8" [37dd4fa8-49fe-4640-87e5-a87e750cdd2a] Running
	I1205 21:20:03.253702  333265 system_pods.go:89] "kube-scheduler-test-preload-455032" [b1aab190-6232-4744-9847-0f23cda3fc1e] Running
	I1205 21:20:03.253715  333265 system_pods.go:89] "storage-provisioner" [e606d2c1-c57a-4a60-b224-a435994066bb] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 21:20:03.253729  333265 system_pods.go:126] duration metric: took 203.503055ms to wait for k8s-apps to be running ...
	I1205 21:20:03.253747  333265 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:20:03.253809  333265 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:20:03.268805  333265 system_svc.go:56] duration metric: took 15.046013ms WaitForService to wait for kubelet
	I1205 21:20:03.268844  333265 kubeadm.go:582] duration metric: took 11.402492896s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:20:03.268872  333265 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:20:03.450830  333265 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:20:03.450860  333265 node_conditions.go:123] node cpu capacity is 2
	I1205 21:20:03.450872  333265 node_conditions.go:105] duration metric: took 181.994929ms to run NodePressure ...
	I1205 21:20:03.450886  333265 start.go:241] waiting for startup goroutines ...
	I1205 21:20:03.450892  333265 start.go:246] waiting for cluster config update ...
	I1205 21:20:03.450910  333265 start.go:255] writing updated cluster config ...
	I1205 21:20:03.451211  333265 ssh_runner.go:195] Run: rm -f paused
	I1205 21:20:03.502436  333265 start.go:600] kubectl: 1.31.3, cluster: 1.24.4 (minor skew: 7)
	I1205 21:20:03.504421  333265 out.go:201] 
	W1205 21:20:03.505825  333265 out.go:270] ! /usr/local/bin/kubectl is version 1.31.3, which may have incompatibilities with Kubernetes 1.24.4.
	I1205 21:20:03.507094  333265 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I1205 21:20:03.508535  333265 out.go:177] * Done! kubectl is now configured to use "test-preload-455032" cluster and "default" namespace by default
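	
	The start log above ends with the harness polling the apiserver's /healthz endpoint at https://192.168.39.155:8443 and then warning about the kubectl 1.31.3 / cluster 1.24.4 minor-version skew. As context only, here is a minimal Go sketch of such a readiness probe; the URL, timeout, and TLS handling are illustrative assumptions and this is not minikube's actual implementation.
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	// probeHealthz polls an apiserver /healthz endpoint until it returns 200 "ok"
	// or the deadline passes. The endpoint and InsecureSkipVerify are assumptions
	// made to keep the sketch short; real code would load the cluster CA instead.
	func probeHealthz(url string, deadline time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return nil
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, deadline)
	}
	
	func main() {
		// Address taken from the log above; adjust for your own cluster.
		if err := probeHealthz("https://192.168.39.155:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}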
	
	
	==> CRI-O <==
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.427273649Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733433604427248140,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6456906e-0411-4280-ba56-6c0457318068 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.427980993Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5124244c-8775-4a3e-9c33-6f9c88f32e67 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.428029621Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5124244c-8775-4a3e-9c33-6f9c88f32e67 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.428242857Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d422b2d3245914132aa3e6c2be7b5fb15db19a186dbfb3d8ab487bd1db11217,PodSandboxId:5e3bea462f4f6689c3de5754894085fe2c004aeb35b29f000961e5fc947dd7f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733433596750183923,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7mzxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ec55fb-3ca5-4d4e-9866-4b27a86f6004,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ac0728ab7fd112b254fd348098ebd8412cbe56cb2c7ec0b1f4a7275a29ee45,PodSandboxId:494e9c0818b7c91306e377b94b6c98f0360a59d75901cb835b0f606f9557577b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733433589759279507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: e606d2c1-c57a-4a60-b224-a435994066bb,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5c39c9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c9c6fa8053ee0a58b61f467f5e96aef01cb52b933980992e35c7590fb888c4,PodSandboxId:d0156602accf3bf67bf78f5ff74b168c3770efa33ec2d5d7a5e4b9770200d123,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733433589361899609,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xn2b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37dd
4fa8-49fe-4640-87e5-a87e750cdd2a,},Annotations:map[string]string{io.kubernetes.container.hash: d6a4516d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c12d36f69fd0827f10b6a770babad4445531d957a4bcd91b2415a8e2bf1f548,PodSandboxId:2011aaf528abb6037c69bc4458a99caeecd0feede29c65fab7db3fb732f315f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733433584334423511,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-455032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffaa9697d44ada0696d67e976bdd2a53,},Annota
tions:map[string]string{io.kubernetes.container.hash: cfe95fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6943a91d267467019416a71de886474cb7be2a527ad9c489095ab28f480d96fc,PodSandboxId:076272cafeb74875497c24643d5cca1acdc5ffee5e44b1553164e669de1cc320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733433584380841233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-455032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82615bab774c0bc7f3126593e3e9607c,},Annotations:map[st
ring]string{io.kubernetes.container.hash: af073b73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b416a501d5db7209677ed0c7c275daea213388acc706f28c1de0bc3b524d40cd,PodSandboxId:a6cdf47fac55a49d2e4362abf80ea96c8043326f2bb35d71495f4ae483bb6228,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733433584295638916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-455032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0a4ca669c986fce8e881f89bb273e4,},An
notations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9412b0ed3bed963f3c937b65d2c459f24726e9d939b5f659dcef3d8a1730f39e,PodSandboxId:bfa90bc95f9fe915e5809e48a49da0ae16e149801c7e3fc00148c3e790e371e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733433584321249209,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-455032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012bbd8aa2b2890986824aac06e3d01c,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5124244c-8775-4a3e-9c33-6f9c88f32e67 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.464782259Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36d490d9-ce20-4dd4-a37e-1426307f8e33 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.464863796Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36d490d9-ce20-4dd4-a37e-1426307f8e33 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.465953010Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2a03f193-fe8f-431b-9812-5ca634f44bea name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.466468418Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733433604466441853,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2a03f193-fe8f-431b-9812-5ca634f44bea name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.467133741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87c7d323-1d45-4bfc-889d-fe3dd6e363de name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.467192351Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87c7d323-1d45-4bfc-889d-fe3dd6e363de name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.467354290Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d422b2d3245914132aa3e6c2be7b5fb15db19a186dbfb3d8ab487bd1db11217,PodSandboxId:5e3bea462f4f6689c3de5754894085fe2c004aeb35b29f000961e5fc947dd7f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733433596750183923,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7mzxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ec55fb-3ca5-4d4e-9866-4b27a86f6004,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ac0728ab7fd112b254fd348098ebd8412cbe56cb2c7ec0b1f4a7275a29ee45,PodSandboxId:494e9c0818b7c91306e377b94b6c98f0360a59d75901cb835b0f606f9557577b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733433589759279507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: e606d2c1-c57a-4a60-b224-a435994066bb,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5c39c9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c9c6fa8053ee0a58b61f467f5e96aef01cb52b933980992e35c7590fb888c4,PodSandboxId:d0156602accf3bf67bf78f5ff74b168c3770efa33ec2d5d7a5e4b9770200d123,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733433589361899609,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xn2b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37dd
4fa8-49fe-4640-87e5-a87e750cdd2a,},Annotations:map[string]string{io.kubernetes.container.hash: d6a4516d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c12d36f69fd0827f10b6a770babad4445531d957a4bcd91b2415a8e2bf1f548,PodSandboxId:2011aaf528abb6037c69bc4458a99caeecd0feede29c65fab7db3fb732f315f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733433584334423511,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-455032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffaa9697d44ada0696d67e976bdd2a53,},Annota
tions:map[string]string{io.kubernetes.container.hash: cfe95fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6943a91d267467019416a71de886474cb7be2a527ad9c489095ab28f480d96fc,PodSandboxId:076272cafeb74875497c24643d5cca1acdc5ffee5e44b1553164e669de1cc320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733433584380841233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-455032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82615bab774c0bc7f3126593e3e9607c,},Annotations:map[st
ring]string{io.kubernetes.container.hash: af073b73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b416a501d5db7209677ed0c7c275daea213388acc706f28c1de0bc3b524d40cd,PodSandboxId:a6cdf47fac55a49d2e4362abf80ea96c8043326f2bb35d71495f4ae483bb6228,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733433584295638916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-455032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0a4ca669c986fce8e881f89bb273e4,},An
notations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9412b0ed3bed963f3c937b65d2c459f24726e9d939b5f659dcef3d8a1730f39e,PodSandboxId:bfa90bc95f9fe915e5809e48a49da0ae16e149801c7e3fc00148c3e790e371e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733433584321249209,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-455032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012bbd8aa2b2890986824aac06e3d01c,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87c7d323-1d45-4bfc-889d-fe3dd6e363de name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.504183611Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dad78593-9e81-42e6-b6bf-b3a075af55e6 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.504257656Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dad78593-9e81-42e6-b6bf-b3a075af55e6 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.505301075Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5eb0343a-93a5-46dc-94bb-ee0eb1cbc58e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.505719492Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733433604505700147,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5eb0343a-93a5-46dc-94bb-ee0eb1cbc58e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.506404327Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc94958f-6fdc-411f-af01-1a5dfe43a1c7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.506457406Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc94958f-6fdc-411f-af01-1a5dfe43a1c7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.506634679Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d422b2d3245914132aa3e6c2be7b5fb15db19a186dbfb3d8ab487bd1db11217,PodSandboxId:5e3bea462f4f6689c3de5754894085fe2c004aeb35b29f000961e5fc947dd7f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733433596750183923,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7mzxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ec55fb-3ca5-4d4e-9866-4b27a86f6004,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ac0728ab7fd112b254fd348098ebd8412cbe56cb2c7ec0b1f4a7275a29ee45,PodSandboxId:494e9c0818b7c91306e377b94b6c98f0360a59d75901cb835b0f606f9557577b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733433589759279507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: e606d2c1-c57a-4a60-b224-a435994066bb,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5c39c9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c9c6fa8053ee0a58b61f467f5e96aef01cb52b933980992e35c7590fb888c4,PodSandboxId:d0156602accf3bf67bf78f5ff74b168c3770efa33ec2d5d7a5e4b9770200d123,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733433589361899609,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xn2b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37dd
4fa8-49fe-4640-87e5-a87e750cdd2a,},Annotations:map[string]string{io.kubernetes.container.hash: d6a4516d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c12d36f69fd0827f10b6a770babad4445531d957a4bcd91b2415a8e2bf1f548,PodSandboxId:2011aaf528abb6037c69bc4458a99caeecd0feede29c65fab7db3fb732f315f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733433584334423511,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-455032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffaa9697d44ada0696d67e976bdd2a53,},Annota
tions:map[string]string{io.kubernetes.container.hash: cfe95fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6943a91d267467019416a71de886474cb7be2a527ad9c489095ab28f480d96fc,PodSandboxId:076272cafeb74875497c24643d5cca1acdc5ffee5e44b1553164e669de1cc320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733433584380841233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-455032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82615bab774c0bc7f3126593e3e9607c,},Annotations:map[st
ring]string{io.kubernetes.container.hash: af073b73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b416a501d5db7209677ed0c7c275daea213388acc706f28c1de0bc3b524d40cd,PodSandboxId:a6cdf47fac55a49d2e4362abf80ea96c8043326f2bb35d71495f4ae483bb6228,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733433584295638916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-455032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0a4ca669c986fce8e881f89bb273e4,},An
notations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9412b0ed3bed963f3c937b65d2c459f24726e9d939b5f659dcef3d8a1730f39e,PodSandboxId:bfa90bc95f9fe915e5809e48a49da0ae16e149801c7e3fc00148c3e790e371e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733433584321249209,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-455032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012bbd8aa2b2890986824aac06e3d01c,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cc94958f-6fdc-411f-af01-1a5dfe43a1c7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.539710341Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6442e0de-93d3-4d31-bce4-d27c067a2dd7 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.539784779Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6442e0de-93d3-4d31-bce4-d27c067a2dd7 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.540933921Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d946f6b2-ca9d-4bda-921d-ad172a9fb4f7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.541415078Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733433604541391816,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:119830,},InodesUsed:&UInt64Value{Value:76,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d946f6b2-ca9d-4bda-921d-ad172a9fb4f7 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.541982415Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e7288184-79df-45a5-adc6-2d819fcdb4a2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.542034042Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e7288184-79df-45a5-adc6-2d819fcdb4a2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:20:04 test-preload-455032 crio[684]: time="2024-12-05 21:20:04.542249732Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:7d422b2d3245914132aa3e6c2be7b5fb15db19a186dbfb3d8ab487bd1db11217,PodSandboxId:5e3bea462f4f6689c3de5754894085fe2c004aeb35b29f000961e5fc947dd7f5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,State:CONTAINER_RUNNING,CreatedAt:1733433596750183923,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-7mzxj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34ec55fb-3ca5-4d4e-9866-4b27a86f6004,},Annotations:map[string]string{io.kubernetes.container.hash: 1f28e69,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b8ac0728ab7fd112b254fd348098ebd8412cbe56cb2c7ec0b1f4a7275a29ee45,PodSandboxId:494e9c0818b7c91306e377b94b6c98f0360a59d75901cb835b0f606f9557577b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733433589759279507,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-s
ystem,io.kubernetes.pod.uid: e606d2c1-c57a-4a60-b224-a435994066bb,},Annotations:map[string]string{io.kubernetes.container.hash: 4b5c39c9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:70c9c6fa8053ee0a58b61f467f5e96aef01cb52b933980992e35c7590fb888c4,PodSandboxId:d0156602accf3bf67bf78f5ff74b168c3770efa33ec2d5d7a5e4b9770200d123,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,State:CONTAINER_RUNNING,CreatedAt:1733433589361899609,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xn2b8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37dd
4fa8-49fe-4640-87e5-a87e750cdd2a,},Annotations:map[string]string{io.kubernetes.container.hash: d6a4516d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8c12d36f69fd0827f10b6a770babad4445531d957a4bcd91b2415a8e2bf1f548,PodSandboxId:2011aaf528abb6037c69bc4458a99caeecd0feede29c65fab7db3fb732f315f4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,State:CONTAINER_RUNNING,CreatedAt:1733433584334423511,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-455032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ffaa9697d44ada0696d67e976bdd2a53,},Annota
tions:map[string]string{io.kubernetes.container.hash: cfe95fd,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6943a91d267467019416a71de886474cb7be2a527ad9c489095ab28f480d96fc,PodSandboxId:076272cafeb74875497c24643d5cca1acdc5ffee5e44b1553164e669de1cc320,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,State:CONTAINER_RUNNING,CreatedAt:1733433584380841233,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-455032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 82615bab774c0bc7f3126593e3e9607c,},Annotations:map[st
ring]string{io.kubernetes.container.hash: af073b73,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b416a501d5db7209677ed0c7c275daea213388acc706f28c1de0bc3b524d40cd,PodSandboxId:a6cdf47fac55a49d2e4362abf80ea96c8043326f2bb35d71495f4ae483bb6228,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,State:CONTAINER_RUNNING,CreatedAt:1733433584295638916,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-455032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: aa0a4ca669c986fce8e881f89bb273e4,},An
notations:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9412b0ed3bed963f3c937b65d2c459f24726e9d939b5f659dcef3d8a1730f39e,PodSandboxId:bfa90bc95f9fe915e5809e48a49da0ae16e149801c7e3fc00148c3e790e371e4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,State:CONTAINER_RUNNING,CreatedAt:1733433584321249209,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-455032,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 012bbd8aa2b2890986824aac06e3d01c,},Annotations:m
ap[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e7288184-79df-45a5-adc6-2d819fcdb4a2 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7d422b2d32459       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   7 seconds ago       Running             coredns                   1                   5e3bea462f4f6       coredns-6d4b75cb6d-7mzxj
	b8ac0728ab7fd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 seconds ago      Exited              storage-provisioner       2                   494e9c0818b7c       storage-provisioner
	70c9c6fa8053e       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   15 seconds ago      Running             kube-proxy                1                   d0156602accf3       kube-proxy-xn2b8
	6943a91d26746       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   076272cafeb74       kube-apiserver-test-preload-455032
	8c12d36f69fd0       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   2011aaf528abb       etcd-test-preload-455032
	9412b0ed3bed9       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   20 seconds ago      Running             kube-scheduler            1                   bfa90bc95f9fe       kube-scheduler-test-preload-455032
	b416a501d5db7       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   a6cdf47fac55a       kube-controller-manager-test-preload-455032
	
	
	==> coredns [7d422b2d3245914132aa3e6c2be7b5fb15db19a186dbfb3d8ab487bd1db11217] <==
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:47811 - 43481 "HINFO IN 4670751384562261649.7952864172694331571. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.113802117s
	
	
	==> describe nodes <==
	Name:               test-preload-455032
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-455032
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=test-preload-455032
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T21_18_32_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 21:18:28 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-455032
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 21:19:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 21:19:58 +0000   Thu, 05 Dec 2024 21:18:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 21:19:58 +0000   Thu, 05 Dec 2024 21:18:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 21:19:58 +0000   Thu, 05 Dec 2024 21:18:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 21:19:58 +0000   Thu, 05 Dec 2024 21:19:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.155
	  Hostname:    test-preload-455032
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 1da8d34aa35c46a9aec82f55382e2fff
	  System UUID:                1da8d34a-a35c-46a9-aec8-2f55382e2fff
	  Boot ID:                    b5d412f6-1f55-44cf-b99d-8c4daf238d0a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-7mzxj                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     80s
	  kube-system                 etcd-test-preload-455032                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         92s
	  kube-system                 kube-apiserver-test-preload-455032             250m (12%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-test-preload-455032    200m (10%)    0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-xn2b8                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-test-preload-455032             100m (5%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         78s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 78s                  kube-proxy       
	  Normal  Starting                 15s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  100s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 100s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  100s (x4 over 100s)  kubelet          Node test-preload-455032 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     100s (x3 over 100s)  kubelet          Node test-preload-455032 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    100s (x4 over 100s)  kubelet          Node test-preload-455032 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     92s                  kubelet          Node test-preload-455032 status is now: NodeHasSufficientPID
	  Normal  Starting                 92s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  92s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  92s                  kubelet          Node test-preload-455032 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    92s                  kubelet          Node test-preload-455032 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                82s                  kubelet          Node test-preload-455032 status is now: NodeReady
	  Normal  RegisteredNode           81s                  node-controller  Node test-preload-455032 event: Registered Node test-preload-455032 in Controller
	  Normal  Starting                 21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)    kubelet          Node test-preload-455032 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)    kubelet          Node test-preload-455032 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)    kubelet          Node test-preload-455032 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4s                   node-controller  Node test-preload-455032 event: Registered Node test-preload-455032 in Controller
	
	
	==> dmesg <==
	[Dec 5 21:19] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050133] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037527] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.865968] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.979699] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.602246] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.311514] systemd-fstab-generator[607]: Ignoring "noauto" option for root device
	[  +0.056311] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.063536] systemd-fstab-generator[619]: Ignoring "noauto" option for root device
	[  +0.177402] systemd-fstab-generator[633]: Ignoring "noauto" option for root device
	[  +0.122678] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.265219] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[ +12.599770] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[  +0.062190] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.835081] systemd-fstab-generator[1129]: Ignoring "noauto" option for root device
	[  +4.835923] kauditd_printk_skb: 105 callbacks suppressed
	[  +3.663801] systemd-fstab-generator[1829]: Ignoring "noauto" option for root device
	[  +4.641888] kauditd_printk_skb: 59 callbacks suppressed
	[Dec 5 21:20] kauditd_printk_skb: 13 callbacks suppressed
	
	
	==> etcd [8c12d36f69fd0827f10b6a770babad4445531d957a4bcd91b2415a8e2bf1f548] <==
	{"level":"info","ts":"2024-12-05T21:19:44.777Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"d5feb64dae7dc398","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2024-12-05T21:19:44.777Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2024-12-05T21:19:44.785Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5feb64dae7dc398 switched to configuration voters=(15419962618919371672)"}
	{"level":"info","ts":"2024-12-05T21:19:44.795Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8ce003150ae53570","local-member-id":"d5feb64dae7dc398","added-peer-id":"d5feb64dae7dc398","added-peer-peer-urls":["https://192.168.39.155:2380"]}
	{"level":"info","ts":"2024-12-05T21:19:44.798Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8ce003150ae53570","local-member-id":"d5feb64dae7dc398","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T21:19:44.794Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-05T21:19:44.794Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.155:2380"}
	{"level":"info","ts":"2024-12-05T21:19:44.798Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.155:2380"}
	{"level":"info","ts":"2024-12-05T21:19:44.798Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T21:19:44.798Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"d5feb64dae7dc398","initial-advertise-peer-urls":["https://192.168.39.155:2380"],"listen-peer-urls":["https://192.168.39.155:2380"],"advertise-client-urls":["https://192.168.39.155:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.155:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-05T21:19:44.798Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-05T21:19:46.137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5feb64dae7dc398 is starting a new election at term 2"}
	{"level":"info","ts":"2024-12-05T21:19:46.137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5feb64dae7dc398 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-05T21:19:46.137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5feb64dae7dc398 received MsgPreVoteResp from d5feb64dae7dc398 at term 2"}
	{"level":"info","ts":"2024-12-05T21:19:46.137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5feb64dae7dc398 became candidate at term 3"}
	{"level":"info","ts":"2024-12-05T21:19:46.137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5feb64dae7dc398 received MsgVoteResp from d5feb64dae7dc398 at term 3"}
	{"level":"info","ts":"2024-12-05T21:19:46.137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"d5feb64dae7dc398 became leader at term 3"}
	{"level":"info","ts":"2024-12-05T21:19:46.137Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: d5feb64dae7dc398 elected leader d5feb64dae7dc398 at term 3"}
	{"level":"info","ts":"2024-12-05T21:19:46.143Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"d5feb64dae7dc398","local-member-attributes":"{Name:test-preload-455032 ClientURLs:[https://192.168.39.155:2379]}","request-path":"/0/members/d5feb64dae7dc398/attributes","cluster-id":"8ce003150ae53570","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T21:19:46.143Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T21:19:46.144Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T21:19:46.144Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-05T21:19:46.144Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T21:19:46.145Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T21:19:46.145Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.155:2379"}
	
	
	==> kernel <==
	 21:20:04 up 0 min,  0 users,  load average: 0.81, 0.24, 0.08
	Linux test-preload-455032 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6943a91d267467019416a71de886474cb7be2a527ad9c489095ab28f480d96fc] <==
	I1205 21:19:48.477980       1 establishing_controller.go:76] Starting EstablishingController
	I1205 21:19:48.478195       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I1205 21:19:48.478236       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I1205 21:19:48.478254       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1205 21:19:48.486269       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1205 21:19:48.498983       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1205 21:19:48.574509       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I1205 21:19:48.574848       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	I1205 21:19:48.575750       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 21:19:48.590656       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E1205 21:19:48.610559       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I1205 21:19:48.640471       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I1205 21:19:48.668167       1 cache.go:39] Caches are synced for autoregister controller
	I1205 21:19:48.669352       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I1205 21:19:48.686289       1 shared_informer.go:262] Caches are synced for node_authorizer
	I1205 21:19:49.161333       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I1205 21:19:49.479324       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 21:19:49.799744       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I1205 21:19:50.310129       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I1205 21:19:50.323954       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I1205 21:19:50.362711       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I1205 21:19:50.381452       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 21:19:50.390330       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 21:20:00.895458       1 controller.go:611] quota admission added evaluator for: endpoints
	I1205 21:20:00.897577       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [b416a501d5db7209677ed0c7c275daea213388acc706f28c1de0bc3b524d40cd] <==
	I1205 21:20:00.936395       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kubelet-client
	I1205 21:20:00.936431       1 shared_informer.go:262] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I1205 21:20:00.937599       1 shared_informer.go:262] Caches are synced for bootstrap_signer
	I1205 21:20:00.941164       1 shared_informer.go:262] Caches are synced for taint
	I1205 21:20:00.941237       1 shared_informer.go:262] Caches are synced for TTL
	I1205 21:20:00.941261       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	I1205 21:20:00.941361       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	W1205 21:20:00.941369       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-455032. Assuming now as a timestamp.
	I1205 21:20:00.941497       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I1205 21:20:00.941747       1 event.go:294] "Event occurred" object="test-preload-455032" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-455032 event: Registered Node test-preload-455032 in Controller"
	I1205 21:20:00.942752       1 shared_informer.go:262] Caches are synced for crt configmap
	I1205 21:20:00.944106       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I1205 21:20:00.945927       1 shared_informer.go:262] Caches are synced for job
	I1205 21:20:00.989934       1 shared_informer.go:262] Caches are synced for TTL after finished
	I1205 21:20:01.071528       1 shared_informer.go:262] Caches are synced for persistent volume
	I1205 21:20:01.086874       1 shared_informer.go:262] Caches are synced for PVC protection
	I1205 21:20:01.096175       1 shared_informer.go:262] Caches are synced for ephemeral
	I1205 21:20:01.102467       1 shared_informer.go:262] Caches are synced for attach detach
	I1205 21:20:01.104372       1 shared_informer.go:262] Caches are synced for expand
	I1205 21:20:01.120214       1 shared_informer.go:262] Caches are synced for resource quota
	I1205 21:20:01.125456       1 shared_informer.go:262] Caches are synced for resource quota
	I1205 21:20:01.136364       1 shared_informer.go:262] Caches are synced for stateful set
	I1205 21:20:01.544823       1 shared_informer.go:262] Caches are synced for garbage collector
	I1205 21:20:01.544941       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I1205 21:20:01.566584       1 shared_informer.go:262] Caches are synced for garbage collector
	
	
	==> kube-proxy [70c9c6fa8053ee0a58b61f467f5e96aef01cb52b933980992e35c7590fb888c4] <==
	I1205 21:19:49.728377       1 node.go:163] Successfully retrieved node IP: 192.168.39.155
	I1205 21:19:49.728531       1 server_others.go:138] "Detected node IP" address="192.168.39.155"
	I1205 21:19:49.728594       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I1205 21:19:49.783196       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I1205 21:19:49.783230       1 server_others.go:206] "Using iptables Proxier"
	I1205 21:19:49.783959       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I1205 21:19:49.784674       1 server.go:661] "Version info" version="v1.24.4"
	I1205 21:19:49.784704       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 21:19:49.786248       1 config.go:317] "Starting service config controller"
	I1205 21:19:49.786631       1 shared_informer.go:255] Waiting for caches to sync for service config
	I1205 21:19:49.786665       1 config.go:226] "Starting endpoint slice config controller"
	I1205 21:19:49.786670       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I1205 21:19:49.787541       1 config.go:444] "Starting node config controller"
	I1205 21:19:49.787548       1 shared_informer.go:255] Waiting for caches to sync for node config
	I1205 21:19:49.887595       1 shared_informer.go:262] Caches are synced for node config
	I1205 21:19:49.887645       1 shared_informer.go:262] Caches are synced for service config
	I1205 21:19:49.887680       1 shared_informer.go:262] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [9412b0ed3bed963f3c937b65d2c459f24726e9d939b5f659dcef3d8a1730f39e] <==
	I1205 21:19:44.981724       1 serving.go:348] Generated self-signed cert in-memory
	I1205 21:19:48.669795       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I1205 21:19:48.672138       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 21:19:48.699739       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1205 21:19:48.700126       1 shared_informer.go:255] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1205 21:19:48.700227       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 21:19:48.700266       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 21:19:48.700303       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1205 21:19:48.700349       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1205 21:19:48.701406       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1205 21:19:48.702205       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1205 21:19:48.801315       1 shared_informer.go:262] Caches are synced for RequestHeaderAuthRequestController
	I1205 21:19:48.801741       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 21:19:48.803763       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Dec 05 21:19:48 test-preload-455032 kubelet[1136]: I1205 21:19:48.795543    1136 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7cb6abb-5f5b-4b5c-9c7f-73d40261989a-config-volume\") pod \"e7cb6abb-5f5b-4b5c-9c7f-73d40261989a\" (UID: \"e7cb6abb-5f5b-4b5c-9c7f-73d40261989a\") "
	Dec 05 21:19:48 test-preload-455032 kubelet[1136]: E1205 21:19:48.796909    1136 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 05 21:19:48 test-preload-455032 kubelet[1136]: E1205 21:19:48.797118    1136 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/34ec55fb-3ca5-4d4e-9866-4b27a86f6004-config-volume podName:34ec55fb-3ca5-4d4e-9866-4b27a86f6004 nodeName:}" failed. No retries permitted until 2024-12-05 21:19:49.297095314 +0000 UTC m=+5.824890576 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/34ec55fb-3ca5-4d4e-9866-4b27a86f6004-config-volume") pod "coredns-6d4b75cb6d-7mzxj" (UID: "34ec55fb-3ca5-4d4e-9866-4b27a86f6004") : object "kube-system"/"coredns" not registered
	Dec 05 21:19:48 test-preload-455032 kubelet[1136]: W1205 21:19:48.797465    1136 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/e7cb6abb-5f5b-4b5c-9c7f-73d40261989a/volumes/kubernetes.io~projected/kube-api-access-txbzb: clearQuota called, but quotas disabled
	Dec 05 21:19:48 test-preload-455032 kubelet[1136]: W1205 21:19:48.797512    1136 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/e7cb6abb-5f5b-4b5c-9c7f-73d40261989a/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Dec 05 21:19:48 test-preload-455032 kubelet[1136]: I1205 21:19:48.798007    1136 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7cb6abb-5f5b-4b5c-9c7f-73d40261989a-kube-api-access-txbzb" (OuterVolumeSpecName: "kube-api-access-txbzb") pod "e7cb6abb-5f5b-4b5c-9c7f-73d40261989a" (UID: "e7cb6abb-5f5b-4b5c-9c7f-73d40261989a"). InnerVolumeSpecName "kube-api-access-txbzb". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 05 21:19:48 test-preload-455032 kubelet[1136]: I1205 21:19:48.798138    1136 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7cb6abb-5f5b-4b5c-9c7f-73d40261989a-config-volume" (OuterVolumeSpecName: "config-volume") pod "e7cb6abb-5f5b-4b5c-9c7f-73d40261989a" (UID: "e7cb6abb-5f5b-4b5c-9c7f-73d40261989a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Dec 05 21:19:48 test-preload-455032 kubelet[1136]: I1205 21:19:48.896732    1136 reconciler.go:384] "Volume detached for volume \"kube-api-access-txbzb\" (UniqueName: \"kubernetes.io/projected/e7cb6abb-5f5b-4b5c-9c7f-73d40261989a-kube-api-access-txbzb\") on node \"test-preload-455032\" DevicePath \"\""
	Dec 05 21:19:48 test-preload-455032 kubelet[1136]: I1205 21:19:48.896874    1136 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e7cb6abb-5f5b-4b5c-9c7f-73d40261989a-config-volume\") on node \"test-preload-455032\" DevicePath \"\""
	Dec 05 21:19:49 test-preload-455032 kubelet[1136]: E1205 21:19:49.300413    1136 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 05 21:19:49 test-preload-455032 kubelet[1136]: E1205 21:19:49.300494    1136 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/34ec55fb-3ca5-4d4e-9866-4b27a86f6004-config-volume podName:34ec55fb-3ca5-4d4e-9866-4b27a86f6004 nodeName:}" failed. No retries permitted until 2024-12-05 21:19:50.3004787 +0000 UTC m=+6.828273980 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/34ec55fb-3ca5-4d4e-9866-4b27a86f6004-config-volume") pod "coredns-6d4b75cb6d-7mzxj" (UID: "34ec55fb-3ca5-4d4e-9866-4b27a86f6004") : object "kube-system"/"coredns" not registered
	Dec 05 21:19:49 test-preload-455032 kubelet[1136]: I1205 21:19:49.751330    1136 scope.go:110] "RemoveContainer" containerID="20c43a8907816a12de0c0ce85b18cd0d6013cc36721ddd99e5d02e421b88c4dc"
	Dec 05 21:19:50 test-preload-455032 kubelet[1136]: E1205 21:19:50.306913    1136 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 05 21:19:50 test-preload-455032 kubelet[1136]: E1205 21:19:50.307017    1136 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/34ec55fb-3ca5-4d4e-9866-4b27a86f6004-config-volume podName:34ec55fb-3ca5-4d4e-9866-4b27a86f6004 nodeName:}" failed. No retries permitted until 2024-12-05 21:19:52.307000628 +0000 UTC m=+8.834795901 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/34ec55fb-3ca5-4d4e-9866-4b27a86f6004-config-volume") pod "coredns-6d4b75cb6d-7mzxj" (UID: "34ec55fb-3ca5-4d4e-9866-4b27a86f6004") : object "kube-system"/"coredns" not registered
	Dec 05 21:19:50 test-preload-455032 kubelet[1136]: E1205 21:19:50.714675    1136 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-7mzxj" podUID=34ec55fb-3ca5-4d4e-9866-4b27a86f6004
	Dec 05 21:19:50 test-preload-455032 kubelet[1136]: I1205 21:19:50.757051    1136 scope.go:110] "RemoveContainer" containerID="b8ac0728ab7fd112b254fd348098ebd8412cbe56cb2c7ec0b1f4a7275a29ee45"
	Dec 05 21:19:50 test-preload-455032 kubelet[1136]: I1205 21:19:50.757486    1136 scope.go:110] "RemoveContainer" containerID="20c43a8907816a12de0c0ce85b18cd0d6013cc36721ddd99e5d02e421b88c4dc"
	Dec 05 21:19:50 test-preload-455032 kubelet[1136]: E1205 21:19:50.758474    1136 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e606d2c1-c57a-4a60-b224-a435994066bb)\"" pod="kube-system/storage-provisioner" podUID=e606d2c1-c57a-4a60-b224-a435994066bb
	Dec 05 21:19:51 test-preload-455032 kubelet[1136]: I1205 21:19:51.719907    1136 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=e7cb6abb-5f5b-4b5c-9c7f-73d40261989a path="/var/lib/kubelet/pods/e7cb6abb-5f5b-4b5c-9c7f-73d40261989a/volumes"
	Dec 05 21:19:51 test-preload-455032 kubelet[1136]: I1205 21:19:51.761514    1136 scope.go:110] "RemoveContainer" containerID="b8ac0728ab7fd112b254fd348098ebd8412cbe56cb2c7ec0b1f4a7275a29ee45"
	Dec 05 21:19:51 test-preload-455032 kubelet[1136]: E1205 21:19:51.761694    1136 pod_workers.go:951] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(e606d2c1-c57a-4a60-b224-a435994066bb)\"" pod="kube-system/storage-provisioner" podUID=e606d2c1-c57a-4a60-b224-a435994066bb
	Dec 05 21:19:52 test-preload-455032 kubelet[1136]: E1205 21:19:52.323332    1136 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Dec 05 21:19:52 test-preload-455032 kubelet[1136]: E1205 21:19:52.323436    1136 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/34ec55fb-3ca5-4d4e-9866-4b27a86f6004-config-volume podName:34ec55fb-3ca5-4d4e-9866-4b27a86f6004 nodeName:}" failed. No retries permitted until 2024-12-05 21:19:56.323420956 +0000 UTC m=+12.851216229 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/34ec55fb-3ca5-4d4e-9866-4b27a86f6004-config-volume") pod "coredns-6d4b75cb6d-7mzxj" (UID: "34ec55fb-3ca5-4d4e-9866-4b27a86f6004") : object "kube-system"/"coredns" not registered
	Dec 05 21:19:52 test-preload-455032 kubelet[1136]: E1205 21:19:52.714183    1136 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-7mzxj" podUID=34ec55fb-3ca5-4d4e-9866-4b27a86f6004
	Dec 05 21:20:04 test-preload-455032 kubelet[1136]: I1205 21:20:04.714734    1136 scope.go:110] "RemoveContainer" containerID="b8ac0728ab7fd112b254fd348098ebd8412cbe56cb2c7ec0b1f4a7275a29ee45"
	
	
	==> storage-provisioner [b8ac0728ab7fd112b254fd348098ebd8412cbe56cb2c7ec0b1f4a7275a29ee45] <==
	I1205 21:19:49.851897       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1205 21:19:49.853530       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-455032 -n test-preload-455032
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-455032 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-455032" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-455032
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-455032: (1.012897257s)
--- FAIL: TestPreload (171.79s)

                                                
                                    
TestKubernetesUpgrade (395.61s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-055769 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-055769 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (4m41.656273877s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-055769] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20053
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-055769" primary control-plane node in "kubernetes-upgrade-055769" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 21:24:27.646229  339185 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:24:27.646361  339185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:24:27.646371  339185 out.go:358] Setting ErrFile to fd 2...
	I1205 21:24:27.646375  339185 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:24:27.646556  339185 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 21:24:27.647166  339185 out.go:352] Setting JSON to false
	I1205 21:24:27.648259  339185 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":14816,"bootTime":1733419052,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 21:24:27.648379  339185 start.go:139] virtualization: kvm guest
	I1205 21:24:27.650734  339185 out.go:177] * [kubernetes-upgrade-055769] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 21:24:27.652294  339185 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 21:24:27.652335  339185 notify.go:220] Checking for updates...
	I1205 21:24:27.655224  339185 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 21:24:27.656742  339185 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:24:27.658314  339185 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:24:27.659819  339185 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 21:24:27.661085  339185 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 21:24:27.662879  339185 config.go:182] Loaded profile config "NoKubernetes-019732": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1205 21:24:27.663024  339185 config.go:182] Loaded profile config "cert-expiration-500745": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:24:27.663154  339185 config.go:182] Loaded profile config "running-upgrade-797218": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.1
	I1205 21:24:27.663333  339185 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 21:24:27.702626  339185 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 21:24:27.703972  339185 start.go:297] selected driver: kvm2
	I1205 21:24:27.703988  339185 start.go:901] validating driver "kvm2" against <nil>
	I1205 21:24:27.704004  339185 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 21:24:27.704891  339185 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:24:27.705003  339185 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 21:24:27.723397  339185 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 21:24:27.723477  339185 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 21:24:27.723743  339185 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 21:24:27.723778  339185 cni.go:84] Creating CNI manager for ""
	I1205 21:24:27.723825  339185 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:24:27.723837  339185 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 21:24:27.723896  339185 start.go:340] cluster config:
	{Name:kubernetes-upgrade-055769 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-055769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:24:27.724032  339185 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:24:27.726022  339185 out.go:177] * Starting "kubernetes-upgrade-055769" primary control-plane node in "kubernetes-upgrade-055769" cluster
	I1205 21:24:27.727586  339185 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 21:24:27.727646  339185 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1205 21:24:27.727665  339185 cache.go:56] Caching tarball of preloaded images
	I1205 21:24:27.727778  339185 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 21:24:27.727790  339185 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1205 21:24:27.727917  339185 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/config.json ...
	I1205 21:24:27.727943  339185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/config.json: {Name:mk5c1be59ceefa4897c190211fd6e4bafff15fca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:24:27.728127  339185 start.go:360] acquireMachinesLock for kubernetes-upgrade-055769: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:24:37.107106  339185 start.go:364] duration metric: took 9.378924887s to acquireMachinesLock for "kubernetes-upgrade-055769"
	I1205 21:24:37.107194  339185 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-055769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-055769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:24:37.107359  339185 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 21:24:37.109516  339185 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 21:24:37.109761  339185 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:24:37.109822  339185 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:24:37.131571  339185 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44701
	I1205 21:24:37.132053  339185 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:24:37.132691  339185 main.go:141] libmachine: Using API Version  1
	I1205 21:24:37.132716  339185 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:24:37.133112  339185 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:24:37.133413  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetMachineName
	I1205 21:24:37.133640  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .DriverName
	I1205 21:24:37.133778  339185 start.go:159] libmachine.API.Create for "kubernetes-upgrade-055769" (driver="kvm2")
	I1205 21:24:37.133822  339185 client.go:168] LocalClient.Create starting
	I1205 21:24:37.133866  339185 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem
	I1205 21:24:37.133942  339185 main.go:141] libmachine: Decoding PEM data...
	I1205 21:24:37.133965  339185 main.go:141] libmachine: Parsing certificate...
	I1205 21:24:37.134042  339185 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem
	I1205 21:24:37.134072  339185 main.go:141] libmachine: Decoding PEM data...
	I1205 21:24:37.134088  339185 main.go:141] libmachine: Parsing certificate...
	I1205 21:24:37.134113  339185 main.go:141] libmachine: Running pre-create checks...
	I1205 21:24:37.134127  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .PreCreateCheck
	I1205 21:24:37.134608  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetConfigRaw
	I1205 21:24:37.135188  339185 main.go:141] libmachine: Creating machine...
	I1205 21:24:37.135209  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .Create
	I1205 21:24:37.135425  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Creating KVM machine...
	I1205 21:24:37.137030  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found existing default KVM network
	I1205 21:24:37.139168  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:24:37.138948  339278 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d7:5d:52} reservation:<nil>}
	I1205 21:24:37.140718  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:24:37.140617  339278 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a6b00}
	I1205 21:24:37.140877  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | created network xml: 
	I1205 21:24:37.140894  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | <network>
	I1205 21:24:37.140904  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG |   <name>mk-kubernetes-upgrade-055769</name>
	I1205 21:24:37.140911  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG |   <dns enable='no'/>
	I1205 21:24:37.140916  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG |   
	I1205 21:24:37.140923  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I1205 21:24:37.140935  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG |     <dhcp>
	I1205 21:24:37.140946  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I1205 21:24:37.140957  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG |     </dhcp>
	I1205 21:24:37.140966  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG |   </ip>
	I1205 21:24:37.140975  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG |   
	I1205 21:24:37.140984  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | </network>
	I1205 21:24:37.141000  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | 
	I1205 21:24:37.147348  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | trying to create private KVM network mk-kubernetes-upgrade-055769 192.168.50.0/24...
	I1205 21:24:37.245519  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | private KVM network mk-kubernetes-upgrade-055769 192.168.50.0/24 created
	I1205 21:24:37.245572  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:24:37.245455  339278 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:24:37.245663  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Setting up store path in /home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769 ...
	I1205 21:24:37.245708  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Building disk image from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 21:24:37.245740  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Downloading /home/jenkins/minikube-integration/20053-293485/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 21:24:37.591674  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:24:37.591543  339278 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769/id_rsa...
	I1205 21:24:37.772193  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:24:37.772018  339278 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769/kubernetes-upgrade-055769.rawdisk...
	I1205 21:24:37.772232  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | Writing magic tar header
	I1205 21:24:37.772257  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | Writing SSH key tar header
	I1205 21:24:37.772275  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:24:37.772224  339278 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769 ...
	I1205 21:24:37.772414  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769
	I1205 21:24:37.772439  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines
	I1205 21:24:37.772453  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:24:37.772467  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769 (perms=drwx------)
	I1205 21:24:37.772475  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485
	I1205 21:24:37.772489  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 21:24:37.772502  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | Checking permissions on dir: /home/jenkins
	I1205 21:24:37.772542  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | Checking permissions on dir: /home
	I1205 21:24:37.772556  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | Skipping /home - not owner
	I1205 21:24:37.772572  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines (perms=drwxr-xr-x)
	I1205 21:24:37.772596  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube (perms=drwxr-xr-x)
	I1205 21:24:37.772609  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485 (perms=drwxrwxr-x)
	I1205 21:24:37.772622  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 21:24:37.772636  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 21:24:37.772685  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Creating domain...
	I1205 21:24:37.774199  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) define libvirt domain using xml: 
	I1205 21:24:37.774230  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) <domain type='kvm'>
	I1205 21:24:37.774282  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)   <name>kubernetes-upgrade-055769</name>
	I1205 21:24:37.774301  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)   <memory unit='MiB'>2200</memory>
	I1205 21:24:37.774325  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)   <vcpu>2</vcpu>
	I1205 21:24:37.774337  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)   <features>
	I1205 21:24:37.774347  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     <acpi/>
	I1205 21:24:37.774361  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     <apic/>
	I1205 21:24:37.774378  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     <pae/>
	I1205 21:24:37.774390  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     
	I1205 21:24:37.774404  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)   </features>
	I1205 21:24:37.774425  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)   <cpu mode='host-passthrough'>
	I1205 21:24:37.774459  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)   
	I1205 21:24:37.774482  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)   </cpu>
	I1205 21:24:37.774493  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)   <os>
	I1205 21:24:37.774504  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     <type>hvm</type>
	I1205 21:24:37.774515  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     <boot dev='cdrom'/>
	I1205 21:24:37.774523  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     <boot dev='hd'/>
	I1205 21:24:37.774540  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     <bootmenu enable='no'/>
	I1205 21:24:37.774550  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)   </os>
	I1205 21:24:37.774560  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)   <devices>
	I1205 21:24:37.774572  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     <disk type='file' device='cdrom'>
	I1205 21:24:37.774590  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769/boot2docker.iso'/>
	I1205 21:24:37.774599  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)       <target dev='hdc' bus='scsi'/>
	I1205 21:24:37.774611  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)       <readonly/>
	I1205 21:24:37.774622  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     </disk>
	I1205 21:24:37.774632  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     <disk type='file' device='disk'>
	I1205 21:24:37.774651  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 21:24:37.774671  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769/kubernetes-upgrade-055769.rawdisk'/>
	I1205 21:24:37.774682  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)       <target dev='hda' bus='virtio'/>
	I1205 21:24:37.774693  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     </disk>
	I1205 21:24:37.774705  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     <interface type='network'>
	I1205 21:24:37.774718  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)       <source network='mk-kubernetes-upgrade-055769'/>
	I1205 21:24:37.774730  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)       <model type='virtio'/>
	I1205 21:24:37.774741  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     </interface>
	I1205 21:24:37.774751  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     <interface type='network'>
	I1205 21:24:37.774764  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)       <source network='default'/>
	I1205 21:24:37.774778  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)       <model type='virtio'/>
	I1205 21:24:37.774791  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     </interface>
	I1205 21:24:37.774799  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     <serial type='pty'>
	I1205 21:24:37.774812  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)       <target port='0'/>
	I1205 21:24:37.774819  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     </serial>
	I1205 21:24:37.774832  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     <console type='pty'>
	I1205 21:24:37.774844  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)       <target type='serial' port='0'/>
	I1205 21:24:37.774857  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     </console>
	I1205 21:24:37.774871  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     <rng model='virtio'>
	I1205 21:24:37.774885  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)       <backend model='random'>/dev/random</backend>
	I1205 21:24:37.774895  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     </rng>
	I1205 21:24:37.774904  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     
	I1205 21:24:37.774915  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)     
	I1205 21:24:37.774924  339185 main.go:141] libmachine: (kubernetes-upgrade-055769)   </devices>
	I1205 21:24:37.774934  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) </domain>
	I1205 21:24:37.774950  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) 
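	The domain XML above is what the kvm2 driver hands to libvirt right before the "Creating domain..." step. As a rough, non-authoritative sketch of that step — assuming the libvirt.org/go/libvirt binding and a hypothetical domain.xml file holding the XML logged above — defining and booting the domain looks roughly like:

```go
// Minimal sketch (not minikube's actual code): define and start a KVM domain
// from generated XML, mirroring "Getting domain xml... / Creating domain..." above.
package main

import (
	"log"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// Hypothetical path holding the <domain>...</domain> XML logged above.
	xml, err := os.ReadFile("domain.xml")
	if err != nil {
		log.Fatal(err)
	}

	// Same connection URI as the profile's KVMQemuURI (qemu:///system).
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// Define the persistent domain, then boot it; the DHCP lease that the
	// later "Waiting to get IP..." lines poll for appears once it is running.
	dom, err := conn.DomainDefineXML(string(xml))
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain defined and started")
}
```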
	I1205 21:24:37.828200  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:8f:f7:d6 in network default
	I1205 21:24:37.828896  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Ensuring networks are active...
	I1205 21:24:37.828924  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:24:37.829880  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Ensuring network default is active
	I1205 21:24:37.830516  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Ensuring network mk-kubernetes-upgrade-055769 is active
	I1205 21:24:37.831218  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Getting domain xml...
	I1205 21:24:37.832166  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Creating domain...
	I1205 21:24:39.572609  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Waiting to get IP...
	I1205 21:24:39.573541  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:24:39.574073  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:24:39.574133  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:24:39.574054  339278 retry.go:31] will retry after 304.370134ms: waiting for machine to come up
	I1205 21:24:39.879659  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:24:39.880319  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:24:39.880350  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:24:39.880257  339278 retry.go:31] will retry after 315.797159ms: waiting for machine to come up
	I1205 21:24:40.197969  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:24:40.198570  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:24:40.198615  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:24:40.198501  339278 retry.go:31] will retry after 402.711342ms: waiting for machine to come up
	I1205 21:24:40.602999  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:24:40.603550  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:24:40.603583  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:24:40.603515  339278 retry.go:31] will retry after 368.664364ms: waiting for machine to come up
	I1205 21:24:40.974138  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:24:40.974637  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:24:40.974664  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:24:40.974574  339278 retry.go:31] will retry after 459.277594ms: waiting for machine to come up
	I1205 21:24:41.435156  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:24:41.435629  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:24:41.435658  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:24:41.435581  339278 retry.go:31] will retry after 581.199523ms: waiting for machine to come up
	I1205 21:24:42.514866  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:24:42.515453  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:24:42.515484  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:24:42.515399  339278 retry.go:31] will retry after 1.092613392s: waiting for machine to come up
	I1205 21:24:43.609823  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:24:43.610411  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:24:43.610446  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:24:43.610343  339278 retry.go:31] will retry after 962.577089ms: waiting for machine to come up
	I1205 21:24:44.574596  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:24:44.575245  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:24:44.575304  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:24:44.575193  339278 retry.go:31] will retry after 1.324290397s: waiting for machine to come up
	I1205 21:24:45.902144  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:24:45.902646  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:24:45.902687  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:24:45.902584  339278 retry.go:31] will retry after 2.045213035s: waiting for machine to come up
	I1205 21:24:47.949728  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:24:47.950198  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:24:47.950225  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:24:47.950135  339278 retry.go:31] will retry after 1.820010851s: waiting for machine to come up
	I1205 21:24:49.772188  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:24:49.772686  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:24:49.772717  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:24:49.772629  339278 retry.go:31] will retry after 3.22840826s: waiting for machine to come up
	I1205 21:24:53.002148  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:24:53.002585  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:24:53.002613  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:24:53.002525  339278 retry.go:31] will retry after 4.383135615s: waiting for machine to come up
	I1205 21:24:57.387596  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:24:57.387978  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:24:57.388017  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:24:57.387935  339278 retry.go:31] will retry after 5.027149855s: waiting for machine to come up
	I1205 21:25:02.416799  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:02.417540  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has current primary IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:02.417573  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Found IP for machine: 192.168.50.100
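	Each "will retry after …: waiting for machine to come up" line above is one iteration of a jittered backoff loop that polls the domain's DHCP lease until an address appears. A stdlib-only sketch of that wait pattern; the lookupIP stand-in, the 300ms starting delay, and the ~5s cap are assumptions pulled loosely from the intervals in the log:

```go
// Sketch of the retry/backoff behind the "will retry after ..." lines above:
// keep polling for the machine's IP with growing, jittered delays.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var attempts int

// lookupIP stands in for querying libvirt's DHCP leases for the domain's MAC.
func lookupIP() (string, error) {
	attempts++
	if attempts < 4 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.50.100", nil // the address eventually found in the log
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond // roughly the first interval seen above
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Jitter the delay and let it grow, capping it around the ~5s seen above.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		if delay < 5*time.Second {
			delay *= 2
		}
	}
	return "", fmt.Errorf("machine did not get an IP within %v", timeout)
}

func main() {
	ip, err := waitForIP(time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("Found IP for machine:", ip)
}
```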
	I1205 21:25:02.417588  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Reserving static IP address...
	I1205 21:25:02.418141  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-055769", mac: "52:54:00:b3:72:db", ip: "192.168.50.100"} in network mk-kubernetes-upgrade-055769
	I1205 21:25:02.513248  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Reserved static IP address: 192.168.50.100
	I1205 21:25:02.513280  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Waiting for SSH to be available...
	I1205 21:25:02.513325  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | Getting to WaitForSSH function...
	I1205 21:25:02.516149  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:02.516576  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:24:52 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b3:72:db}
	I1205 21:25:02.516607  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:02.516698  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | Using SSH client type: external
	I1205 21:25:02.516717  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769/id_rsa (-rw-------)
	I1205 21:25:02.516749  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:25:02.516761  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | About to run SSH command:
	I1205 21:25:02.516770  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | exit 0
	I1205 21:25:02.646169  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | SSH cmd err, output: <nil>: 
	I1205 21:25:02.646587  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) KVM machine creation complete!
	I1205 21:25:02.646878  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetConfigRaw
	I1205 21:25:02.648081  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .DriverName
	I1205 21:25:02.648349  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .DriverName
	I1205 21:25:02.648585  339185 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 21:25:02.648608  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetState
	I1205 21:25:02.650079  339185 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 21:25:02.650094  339185 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 21:25:02.650100  339185 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 21:25:02.650120  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:25:02.652457  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:02.652807  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:24:52 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:25:02.652841  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:02.652962  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:25:02.653194  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:25:02.653356  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:25:02.653498  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:25:02.653668  339185 main.go:141] libmachine: Using SSH client type: native
	I1205 21:25:02.653872  339185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1205 21:25:02.653883  339185 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 21:25:02.765688  339185 main.go:141] libmachine: SSH cmd err, output: <nil>: 
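	Both WaitForSSH probes above succeed by running literally `exit 0` over SSH as the docker user with the machine's id_rsa key. A rough equivalent using golang.org/x/crypto/ssh; the key path is shortened to an assumed location, the 10s timeout mirrors the ConnectTimeout=10 flag, and this is a simplified sketch rather than the external-ssh invocation minikube logged:

```go
// Sketch: the "exit 0" reachability probe run over SSH once the VM has an IP.
// Host key checking is disabled, matching the StrictHostKeyChecking=no /
// UserKnownHostsFile=/dev/null flags in the log above.
package main

import (
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Shortened, assumed key location; the log uses the full minikube-integration path.
	key, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/machines/kubernetes-upgrade-055769/id_rsa"))
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         10 * time.Second, // mirrors -o ConnectTimeout=10
	}
	client, err := ssh.Dial("tcp", "192.168.50.100:22", cfg)
	if err != nil {
		log.Fatalf("SSH not available yet: %v", err) // callers retry until this succeeds
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()

	if err := sess.Run("exit 0"); err != nil {
		log.Fatal(err)
	}
	log.Println("SSH is available")
}
```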
	I1205 21:25:02.765717  339185 main.go:141] libmachine: Detecting the provisioner...
	I1205 21:25:02.765730  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:25:02.769052  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:02.769498  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:24:52 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:25:02.769533  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:02.769792  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:25:02.770113  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:25:02.770328  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:25:02.770510  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:25:02.770710  339185 main.go:141] libmachine: Using SSH client type: native
	I1205 21:25:02.770937  339185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1205 21:25:02.770956  339185 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 21:25:02.883385  339185 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 21:25:02.883486  339185 main.go:141] libmachine: found compatible host: buildroot
	I1205 21:25:02.883510  339185 main.go:141] libmachine: Provisioning with buildroot...
	I1205 21:25:02.883524  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetMachineName
	I1205 21:25:02.883845  339185 buildroot.go:166] provisioning hostname "kubernetes-upgrade-055769"
	I1205 21:25:02.883876  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetMachineName
	I1205 21:25:02.884110  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:25:02.887189  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:02.887654  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:24:52 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:25:02.887690  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:02.887971  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:25:02.888207  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:25:02.888387  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:25:02.888557  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:25:02.888718  339185 main.go:141] libmachine: Using SSH client type: native
	I1205 21:25:02.888921  339185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1205 21:25:02.888936  339185 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-055769 && echo "kubernetes-upgrade-055769" | sudo tee /etc/hostname
	I1205 21:25:03.015422  339185 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-055769
	
	I1205 21:25:03.015459  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:25:03.018238  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.018633  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:24:52 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:25:03.018665  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.018823  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:25:03.019021  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:25:03.019195  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:25:03.019373  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:25:03.019559  339185 main.go:141] libmachine: Using SSH client type: native
	I1205 21:25:03.019749  339185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1205 21:25:03.019772  339185 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-055769' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-055769/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-055769' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:25:03.139455  339185 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:25:03.139533  339185 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:25:03.139578  339185 buildroot.go:174] setting up certificates
	I1205 21:25:03.139594  339185 provision.go:84] configureAuth start
	I1205 21:25:03.139613  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetMachineName
	I1205 21:25:03.139907  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetIP
	I1205 21:25:03.142927  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.143324  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:24:52 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:25:03.143352  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.143564  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:25:03.145871  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.146218  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:24:52 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:25:03.146251  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.146388  339185 provision.go:143] copyHostCerts
	I1205 21:25:03.146462  339185 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:25:03.146488  339185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:25:03.146582  339185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:25:03.146712  339185 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:25:03.146726  339185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:25:03.146761  339185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:25:03.146839  339185 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:25:03.146848  339185 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:25:03.146879  339185 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:25:03.147010  339185 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-055769 san=[127.0.0.1 192.168.50.100 kubernetes-upgrade-055769 localhost minikube]
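	provision.go:117 above issues the machine's server certificate with the listed SANs, signed by the profile's CA. A compact crypto/x509 sketch of building such a certificate; for brevity it self-signs instead of signing with ca.pem/ca-key.pem, and the 26280h validity is taken from the CertExpiration value later in the log:

```go
// Sketch: issue a server certificate carrying the SANs from the log entry above.
// Real minikube signs it with the CA key; this simplified version self-signs.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-055769"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the profile config
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"kubernetes-upgrade-055769", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.100")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}
```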
	I1205 21:25:03.261992  339185 provision.go:177] copyRemoteCerts
	I1205 21:25:03.262060  339185 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:25:03.262088  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:25:03.264657  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.264955  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:24:52 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:25:03.264992  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.265259  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:25:03.265504  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:25:03.265665  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:25:03.265831  339185 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769/id_rsa Username:docker}
	I1205 21:25:03.352412  339185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:25:03.376246  339185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 21:25:03.400029  339185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 21:25:03.427165  339185 provision.go:87] duration metric: took 287.54614ms to configureAuth
	I1205 21:25:03.427211  339185 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:25:03.427429  339185 config.go:182] Loaded profile config "kubernetes-upgrade-055769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 21:25:03.427571  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:25:03.430516  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.430931  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:24:52 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:25:03.430963  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.431210  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:25:03.431424  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:25:03.431581  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:25:03.431695  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:25:03.431865  339185 main.go:141] libmachine: Using SSH client type: native
	I1205 21:25:03.432050  339185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1205 21:25:03.432066  339185 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:25:03.668491  339185 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:25:03.668523  339185 main.go:141] libmachine: Checking connection to Docker...
	I1205 21:25:03.668533  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetURL
	I1205 21:25:03.670013  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | Using libvirt version 6000000
	I1205 21:25:03.672633  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.673002  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:24:52 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:25:03.673039  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.673161  339185 main.go:141] libmachine: Docker is up and running!
	I1205 21:25:03.673173  339185 main.go:141] libmachine: Reticulating splines...
	I1205 21:25:03.673180  339185 client.go:171] duration metric: took 26.539346016s to LocalClient.Create
	I1205 21:25:03.673204  339185 start.go:167] duration metric: took 26.539430508s to libmachine.API.Create "kubernetes-upgrade-055769"
	I1205 21:25:03.673216  339185 start.go:293] postStartSetup for "kubernetes-upgrade-055769" (driver="kvm2")
	I1205 21:25:03.673226  339185 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:25:03.673244  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .DriverName
	I1205 21:25:03.673558  339185 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:25:03.673587  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:25:03.676057  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.676404  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:24:52 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:25:03.676438  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.676599  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:25:03.676788  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:25:03.676933  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:25:03.677090  339185 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769/id_rsa Username:docker}
	I1205 21:25:03.760555  339185 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:25:03.764902  339185 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:25:03.764934  339185 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:25:03.765017  339185 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:25:03.765089  339185 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:25:03.765178  339185 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:25:03.775086  339185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:25:03.800632  339185 start.go:296] duration metric: took 127.400343ms for postStartSetup
	I1205 21:25:03.800702  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetConfigRaw
	I1205 21:25:03.801371  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetIP
	I1205 21:25:03.804353  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.804719  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:24:52 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:25:03.804749  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.805029  339185 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/config.json ...
	I1205 21:25:03.805242  339185 start.go:128] duration metric: took 26.697867249s to createHost
	I1205 21:25:03.805278  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:25:03.807735  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.808085  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:24:52 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:25:03.808124  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.808253  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:25:03.808429  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:25:03.808612  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:25:03.808752  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:25:03.808943  339185 main.go:141] libmachine: Using SSH client type: native
	I1205 21:25:03.809157  339185 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1205 21:25:03.809169  339185 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:25:03.922864  339185 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733433903.908548070
	
	I1205 21:25:03.922899  339185 fix.go:216] guest clock: 1733433903.908548070
	I1205 21:25:03.922908  339185 fix.go:229] Guest: 2024-12-05 21:25:03.90854807 +0000 UTC Remote: 2024-12-05 21:25:03.80525701 +0000 UTC m=+36.201565414 (delta=103.29106ms)
	I1205 21:25:03.922931  339185 fix.go:200] guest clock delta is within tolerance: 103.29106ms
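	The fix.go lines above read the guest clock with `date +%s.%N`, diff it against the host timestamp, and only accept the machine if the drift stays within tolerance (here a 103ms delta passes). A trivial stdlib sketch of that comparison; the one-second tolerance is an assumed value for illustration:

```go
// Sketch: check whether the guest clock is within tolerance of the host clock,
// as in the "guest clock delta is within tolerance" log line above.
package main

import (
	"fmt"
	"time"
)

func clockDeltaOK(host, guest time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Date(2024, 12, 5, 21, 25, 3, 805257010, time.UTC)  // "Remote" timestamp from the log
	guest := time.Date(2024, 12, 5, 21, 25, 3, 908548070, time.UTC) // guest clock from the log
	delta, ok := clockDeltaOK(host, guest, time.Second)             // tolerance value is an assumption
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
}
```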
	I1205 21:25:03.922938  339185 start.go:83] releasing machines lock for "kubernetes-upgrade-055769", held for 26.815783518s
	I1205 21:25:03.922973  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .DriverName
	I1205 21:25:03.923265  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetIP
	I1205 21:25:03.926488  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.926941  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:24:52 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:25:03.926980  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.927135  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .DriverName
	I1205 21:25:03.927750  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .DriverName
	I1205 21:25:03.927970  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .DriverName
	I1205 21:25:03.928065  339185 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:25:03.928126  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:25:03.928241  339185 ssh_runner.go:195] Run: cat /version.json
	I1205 21:25:03.928272  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:25:03.931222  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.931280  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.931687  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:24:52 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:25:03.931720  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.931787  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:24:52 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:25:03.931819  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:03.931918  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:25:03.932149  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:25:03.932151  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:25:03.932368  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:25:03.932381  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:25:03.932515  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:25:03.932528  339185 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769/id_rsa Username:docker}
	I1205 21:25:03.932658  339185 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769/id_rsa Username:docker}
	I1205 21:25:04.020075  339185 ssh_runner.go:195] Run: systemctl --version
	I1205 21:25:04.044020  339185 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:25:04.206272  339185 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:25:04.212457  339185 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:25:04.212551  339185 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:25:04.232310  339185 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:25:04.232343  339185 start.go:495] detecting cgroup driver to use...
	I1205 21:25:04.232437  339185 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:25:04.249342  339185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:25:04.265096  339185 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:25:04.265161  339185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:25:04.281161  339185 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:25:04.295980  339185 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:25:04.420944  339185 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:25:04.578411  339185 docker.go:233] disabling docker service ...
	I1205 21:25:04.578496  339185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:25:04.592960  339185 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:25:04.607408  339185 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:25:04.738266  339185 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:25:04.868620  339185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:25:04.885396  339185 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:25:04.905059  339185 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 21:25:04.905130  339185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:25:04.917008  339185 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:25:04.917081  339185 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:25:04.928445  339185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:25:04.939837  339185 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
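	The three sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the requested pause image and the cgroupfs driver with conmon in the pod cgroup. After they run, the touched entries should read roughly as follows (surrounding sections of the file omitted):

```
pause_image = "registry.k8s.io/pause:3.2"
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
```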
	I1205 21:25:04.951659  339185 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:25:04.962981  339185 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:25:04.972966  339185 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:25:04.973031  339185 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:25:04.986162  339185 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:25:04.999805  339185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:25:05.119850  339185 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:25:05.220551  339185 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:25:05.220634  339185 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:25:05.225411  339185 start.go:563] Will wait 60s for crictl version
	I1205 21:25:05.225492  339185 ssh_runner.go:195] Run: which crictl
	I1205 21:25:05.229399  339185 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:25:05.273214  339185 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:25:05.273316  339185 ssh_runner.go:195] Run: crio --version
	I1205 21:25:05.304762  339185 ssh_runner.go:195] Run: crio --version
	I1205 21:25:05.336836  339185 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1205 21:25:05.338081  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetIP
	I1205 21:25:05.341568  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:05.341977  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:24:52 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:25:05.342016  339185 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:25:05.342342  339185 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1205 21:25:05.346824  339185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:25:05.359822  339185 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-055769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-055769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:25:05.359982  339185 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 21:25:05.360048  339185 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:25:05.393027  339185 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 21:25:05.393117  339185 ssh_runner.go:195] Run: which lz4
	I1205 21:25:05.397715  339185 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:25:05.402385  339185 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:25:05.402431  339185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1205 21:25:07.067160  339185 crio.go:462] duration metric: took 1.669476985s to copy over tarball
	I1205 21:25:07.067261  339185 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 21:25:10.037737  339185 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.97041057s)
	I1205 21:25:10.037781  339185 crio.go:469] duration metric: took 2.970586448s to extract the tarball
	I1205 21:25:10.037791  339185 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 21:25:10.100855  339185 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:25:10.159428  339185 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 21:25:10.159465  339185 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 21:25:10.159534  339185 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:25:10.159545  339185 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:25:10.159591  339185 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1205 21:25:10.159600  339185 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:25:10.159620  339185 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:25:10.159625  339185 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1205 21:25:10.159629  339185 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:25:10.159565  339185 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:25:10.161682  339185 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1205 21:25:10.161715  339185 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:25:10.161722  339185 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:25:10.161733  339185 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:25:10.161783  339185 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:25:10.161984  339185 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 21:25:10.161995  339185 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:25:10.162116  339185 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:25:10.323678  339185 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1205 21:25:10.326744  339185 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:25:10.337789  339185 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:25:10.338391  339185 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1205 21:25:10.340631  339185 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:25:10.358027  339185 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:25:10.360245  339185 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1205 21:25:10.448187  339185 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1205 21:25:10.448245  339185 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1205 21:25:10.448303  339185 ssh_runner.go:195] Run: which crictl
	I1205 21:25:10.514779  339185 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1205 21:25:10.514998  339185 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:25:10.514933  339185 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1205 21:25:10.515069  339185 ssh_runner.go:195] Run: which crictl
	I1205 21:25:10.515108  339185 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:25:10.515180  339185 ssh_runner.go:195] Run: which crictl
	I1205 21:25:10.546517  339185 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1205 21:25:10.546580  339185 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:25:10.546637  339185 ssh_runner.go:195] Run: which crictl
	I1205 21:25:10.546678  339185 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1205 21:25:10.546728  339185 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:25:10.546777  339185 ssh_runner.go:195] Run: which crictl
	I1205 21:25:10.563397  339185 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1205 21:25:10.563435  339185 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1205 21:25:10.563460  339185 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:25:10.563485  339185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:25:10.563492  339185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:25:10.563496  339185 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 21:25:10.563556  339185 ssh_runner.go:195] Run: which crictl
	I1205 21:25:10.563562  339185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:25:10.563501  339185 ssh_runner.go:195] Run: which crictl
	I1205 21:25:10.563585  339185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:25:10.563618  339185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:25:10.691706  339185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:25:10.691794  339185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:25:10.691733  339185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:25:10.691825  339185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:25:10.691762  339185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:25:10.700925  339185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:25:10.700927  339185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:25:10.825806  339185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:25:10.839790  339185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:25:10.839881  339185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:25:10.839881  339185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:25:10.869950  339185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:25:10.870071  339185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:25:10.874679  339185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:25:10.993877  339185 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1205 21:25:11.015626  339185 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1205 21:25:11.015645  339185 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1205 21:25:11.015745  339185 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1205 21:25:11.016396  339185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:25:11.025391  339185 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:25:11.036649  339185 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1205 21:25:11.085037  339185 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1205 21:25:11.085086  339185 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 21:25:11.089984  339185 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:25:11.235723  339185 cache_images.go:92] duration metric: took 1.076232095s to LoadCachedImages
	W1205 21:25:11.235851  339185 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
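The "Unable to load cached images" failure above is host-side: the per-image tarballs under the jenkins cache directory do not exist, so the images end up being pulled instead of transferred. A quick way to see what that cache actually contains, assuming the same layout as the paths logged above, is:

    ls -lh /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/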
	I1205 21:25:11.235870  339185 kubeadm.go:934] updating node { 192.168.50.100 8443 v1.20.0 crio true true} ...
	I1205 21:25:11.236019  339185 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-055769 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-055769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
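The [Unit]/[Service] fragment above becomes a systemd drop-in (it is scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines below). A sketch of how one could inspect the effective kubelet unit on the node:

    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo systemctl cat kubelet
    systemctl is-enabled kubelet   # the kubeadm preflight warning later in this log reports it is not enabled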
	I1205 21:25:11.236120  339185 ssh_runner.go:195] Run: crio config
	I1205 21:25:11.283775  339185 cni.go:84] Creating CNI manager for ""
	I1205 21:25:11.283797  339185 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:25:11.283812  339185 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:25:11.283833  339185 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.100 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-055769 NodeName:kubernetes-upgrade-055769 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 21:25:11.283970  339185 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-055769"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.100
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.100"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
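The kubeadm config above (v1beta2 InitConfiguration/ClusterConfiguration plus KubeletConfiguration and KubeProxyConfiguration) is written to /var/tmp/minikube/kubeadm.yaml below. One way to sanity-check such a config before a real init, assuming kubeadm v1.20.0 is already unpacked under /var/lib/minikube/binaries as the next step verifies, is a dry run:

    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run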
	
	I1205 21:25:11.284037  339185 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1205 21:25:11.294640  339185 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:25:11.294725  339185 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:25:11.305098  339185 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I1205 21:25:11.326181  339185 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:25:11.347511  339185 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I1205 21:25:11.367352  339185 ssh_runner.go:195] Run: grep 192.168.50.100	control-plane.minikube.internal$ /etc/hosts
	I1205 21:25:11.371767  339185 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.100	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:25:11.384893  339185 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:25:11.516672  339185 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:25:11.534683  339185 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769 for IP: 192.168.50.100
	I1205 21:25:11.534711  339185 certs.go:194] generating shared ca certs ...
	I1205 21:25:11.534730  339185 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:25:11.534892  339185 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:25:11.534931  339185 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:25:11.534941  339185 certs.go:256] generating profile certs ...
	I1205 21:25:11.534999  339185 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/client.key
	I1205 21:25:11.535012  339185 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/client.crt with IP's: []
	I1205 21:25:11.916111  339185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/client.crt ...
	I1205 21:25:11.916158  339185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/client.crt: {Name:mke5397971cf5de34dadc0d07034be8018000581 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:25:11.935852  339185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/client.key ...
	I1205 21:25:11.935901  339185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/client.key: {Name:mk0556aec9870b61a08460e0c6eea1137e4b43cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:25:11.936060  339185 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/apiserver.key.e9e33142
	I1205 21:25:11.936085  339185 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/apiserver.crt.e9e33142 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.100]
	I1205 21:25:12.470805  339185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/apiserver.crt.e9e33142 ...
	I1205 21:25:12.470852  339185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/apiserver.crt.e9e33142: {Name:mk25ea16b0d023bbf1830e6feaa9ac6aff67afee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:25:12.471070  339185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/apiserver.key.e9e33142 ...
	I1205 21:25:12.471093  339185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/apiserver.key.e9e33142: {Name:mk7481cfde36f4ca1db8b7edb6d130eb266258e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:25:12.471220  339185 certs.go:381] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/apiserver.crt.e9e33142 -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/apiserver.crt
	I1205 21:25:12.471348  339185 certs.go:385] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/apiserver.key.e9e33142 -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/apiserver.key
	I1205 21:25:12.471452  339185 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/proxy-client.key
	I1205 21:25:12.471482  339185 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/proxy-client.crt with IP's: []
	I1205 21:25:12.663070  339185 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/proxy-client.crt ...
	I1205 21:25:12.663113  339185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/proxy-client.crt: {Name:mkca9af503169fc4224da3a9c980488064f34f6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:25:12.684391  339185 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/proxy-client.key ...
	I1205 21:25:12.684438  339185 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/proxy-client.key: {Name:mk75a92be180dda4eb55a34509c659dafc860cea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:25:12.684833  339185 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:25:12.684896  339185 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:25:12.684907  339185 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:25:12.684935  339185 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:25:12.684971  339185 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:25:12.684999  339185 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:25:12.685059  339185 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:25:12.685963  339185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:25:12.722210  339185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:25:12.756113  339185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:25:12.783342  339185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:25:12.812880  339185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1205 21:25:12.889136  339185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 21:25:12.914323  339185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:25:12.951879  339185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 21:25:12.980169  339185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:25:13.010355  339185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:25:13.035059  339185 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:25:13.059256  339185 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
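After the scp steps above, the apiserver cert and key on the node should be a matching pair. This is not something the test itself runs, but a manual sanity check (assuming the default unencrypted RSA keys minikube generates) would be:

    sudo openssl x509 -noout -modulus -in /var/lib/minikube/certs/apiserver.crt | openssl md5
    sudo openssl rsa -noout -modulus -in /var/lib/minikube/certs/apiserver.key | openssl md5
    # the two digests must match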
	I1205 21:25:13.077353  339185 ssh_runner.go:195] Run: openssl version
	I1205 21:25:13.085307  339185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:25:13.100311  339185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:25:13.105609  339185 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:25:13.105698  339185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:25:13.112125  339185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:25:13.123505  339185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:25:13.134795  339185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:25:13.139558  339185 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:25:13.139645  339185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:25:13.145269  339185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:25:13.164432  339185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:25:13.184575  339185 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:25:13.189807  339185 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:25:13.189893  339185 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:25:13.197008  339185 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
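The openssl x509 -hash calls above produce the subject-hash names (3ec20f2e.0, b5213941.0, 51391683.0) used for the /etc/ssl/certs symlinks. The same link can be derived by hand, e.g. for the minikube CA:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"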
	I1205 21:25:13.211806  339185 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:25:13.217590  339185 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 21:25:13.217667  339185 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-055769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-055769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.100 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:25:13.217785  339185 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:25:13.217854  339185 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:25:13.280679  339185 cri.go:89] found id: ""
	I1205 21:25:13.280770  339185 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:25:13.292133  339185 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:25:13.303485  339185 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:25:13.314563  339185 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:25:13.314586  339185 kubeadm.go:157] found existing configuration files:
	
	I1205 21:25:13.314639  339185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:25:13.325730  339185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:25:13.325819  339185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:25:13.337281  339185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:25:13.347584  339185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:25:13.347651  339185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:25:13.358195  339185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:25:13.367948  339185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:25:13.368014  339185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:25:13.379604  339185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:25:13.390034  339185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:25:13.390118  339185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:25:13.400835  339185 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:25:13.693308  339185 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:27:11.094820  339185 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 21:27:11.094951  339185 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1205 21:27:11.096336  339185 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 21:27:11.096403  339185 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:27:11.096475  339185 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:27:11.096609  339185 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:27:11.096744  339185 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 21:27:11.096834  339185 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:27:11.098430  339185 out.go:235]   - Generating certificates and keys ...
	I1205 21:27:11.098525  339185 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:27:11.098613  339185 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:27:11.098708  339185 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 21:27:11.098778  339185 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 21:27:11.098866  339185 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 21:27:11.098918  339185 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 21:27:11.098964  339185 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 21:27:11.099150  339185 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-055769 localhost] and IPs [192.168.50.100 127.0.0.1 ::1]
	I1205 21:27:11.099228  339185 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 21:27:11.099399  339185 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-055769 localhost] and IPs [192.168.50.100 127.0.0.1 ::1]
	I1205 21:27:11.099478  339185 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 21:27:11.099573  339185 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 21:27:11.099650  339185 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 21:27:11.099728  339185 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:27:11.099805  339185 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:27:11.099883  339185 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:27:11.099975  339185 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:27:11.100052  339185 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:27:11.100209  339185 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:27:11.100346  339185 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:27:11.100413  339185 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:27:11.100515  339185 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:27:11.101847  339185 out.go:235]   - Booting up control plane ...
	I1205 21:27:11.102004  339185 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:27:11.102124  339185 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:27:11.102219  339185 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:27:11.102324  339185 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:27:11.102504  339185 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 21:27:11.102584  339185 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 21:27:11.102669  339185 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:27:11.102896  339185 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:27:11.102959  339185 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:27:11.103111  339185 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:27:11.103173  339185 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:27:11.103338  339185 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:27:11.103418  339185 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:27:11.103634  339185 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:27:11.103735  339185 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:27:11.103957  339185 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:27:11.103966  339185 kubeadm.go:310] 
	I1205 21:27:11.104039  339185 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 21:27:11.104111  339185 kubeadm.go:310] 		timed out waiting for the condition
	I1205 21:27:11.104117  339185 kubeadm.go:310] 
	I1205 21:27:11.104161  339185 kubeadm.go:310] 	This error is likely caused by:
	I1205 21:27:11.104208  339185 kubeadm.go:310] 		- The kubelet is not running
	I1205 21:27:11.104358  339185 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 21:27:11.104373  339185 kubeadm.go:310] 
	I1205 21:27:11.104501  339185 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 21:27:11.104731  339185 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 21:27:11.104786  339185 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 21:27:11.104793  339185 kubeadm.go:310] 
	I1205 21:27:11.104926  339185 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 21:27:11.105047  339185 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 21:27:11.105055  339185 kubeadm.go:310] 
	I1205 21:27:11.105190  339185 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 21:27:11.105292  339185 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 21:27:11.105354  339185 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 21:27:11.105449  339185 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	W1205 21:27:11.105624  339185 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-055769 localhost] and IPs [192.168.50.100 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-055769 localhost] and IPs [192.168.50.100 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-055769 localhost] and IPs [192.168.50.100 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-055769 localhost] and IPs [192.168.50.100 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
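The kubeadm output above already names the next debugging steps; collected in one place (run over SSH on the kubernetes-upgrade-055769 VM), they would be:

    sudo systemctl status kubelet --no-pager
    sudo journalctl -xeu kubelet --no-pager | tail -n 100
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # then, for a failing container: sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs <CONTAINERID>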
	
	I1205 21:27:11.105683  339185 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:27:11.106297  339185 kubeadm.go:310] 
	I1205 21:27:12.121199  339185 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.015484812s)
	I1205 21:27:12.121290  339185 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:27:12.137206  339185 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:27:12.148556  339185 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:27:12.148578  339185 kubeadm.go:157] found existing configuration files:
	
	I1205 21:27:12.148641  339185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:27:12.159634  339185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:27:12.159695  339185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:27:12.170879  339185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:27:12.181770  339185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:27:12.181836  339185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:27:12.192894  339185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:27:12.203819  339185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:27:12.203899  339185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:27:12.214020  339185 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:27:12.223760  339185 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:27:12.223832  339185 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:27:12.235092  339185 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:27:12.308668  339185 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 21:27:12.308795  339185 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:27:12.470430  339185 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:27:12.470620  339185 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:27:12.470768  339185 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 21:27:12.683958  339185 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:27:12.685742  339185 out.go:235]   - Generating certificates and keys ...
	I1205 21:27:12.685850  339185 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:27:12.685947  339185 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:27:12.686046  339185 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:27:12.686120  339185 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:27:12.686204  339185 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:27:12.686271  339185 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:27:12.687287  339185 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:27:12.687374  339185 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:27:12.687461  339185 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:27:12.687549  339185 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:27:12.687592  339185 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:27:12.687654  339185 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:27:12.960765  339185 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:27:13.062265  339185 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:27:13.110622  339185 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:27:13.352161  339185 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:27:13.367678  339185 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:27:13.368898  339185 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:27:13.368988  339185 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:27:13.525509  339185 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:27:13.528233  339185 out.go:235]   - Booting up control plane ...
	I1205 21:27:13.528384  339185 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:27:13.540290  339185 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:27:13.541808  339185 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:27:13.542947  339185 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:27:13.546402  339185 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 21:27:53.547699  339185 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 21:27:53.548230  339185 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:27:53.548536  339185 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:27:58.548760  339185 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:27:58.548978  339185 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:28:08.549413  339185 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:28:08.549745  339185 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:28:28.550623  339185 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:28:28.550886  339185 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:29:08.550864  339185 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:29:08.551113  339185 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:29:08.551134  339185 kubeadm.go:310] 
	I1205 21:29:08.551215  339185 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 21:29:08.551299  339185 kubeadm.go:310] 		timed out waiting for the condition
	I1205 21:29:08.551320  339185 kubeadm.go:310] 
	I1205 21:29:08.551360  339185 kubeadm.go:310] 	This error is likely caused by:
	I1205 21:29:08.551404  339185 kubeadm.go:310] 		- The kubelet is not running
	I1205 21:29:08.551552  339185 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 21:29:08.551565  339185 kubeadm.go:310] 
	I1205 21:29:08.551659  339185 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 21:29:08.551689  339185 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 21:29:08.551714  339185 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 21:29:08.551722  339185 kubeadm.go:310] 
	I1205 21:29:08.551801  339185 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 21:29:08.551863  339185 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 21:29:08.551867  339185 kubeadm.go:310] 
	I1205 21:29:08.551948  339185 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 21:29:08.552015  339185 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 21:29:08.552072  339185 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 21:29:08.552132  339185 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 21:29:08.552138  339185 kubeadm.go:310] 
	I1205 21:29:08.553279  339185 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:29:08.553417  339185 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 21:29:08.553572  339185 kubeadm.go:394] duration metric: took 3m55.335911954s to StartCluster
	I1205 21:29:08.553629  339185 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:29:08.553705  339185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:29:08.553799  339185 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1205 21:29:08.594998  339185 cri.go:89] found id: ""
	I1205 21:29:08.595030  339185 logs.go:282] 0 containers: []
	W1205 21:29:08.595040  339185 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:29:08.595046  339185 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:29:08.595106  339185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:29:08.642805  339185 cri.go:89] found id: ""
	I1205 21:29:08.642845  339185 logs.go:282] 0 containers: []
	W1205 21:29:08.642858  339185 logs.go:284] No container was found matching "etcd"
	I1205 21:29:08.642867  339185 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:29:08.642942  339185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:29:08.684366  339185 cri.go:89] found id: ""
	I1205 21:29:08.684396  339185 logs.go:282] 0 containers: []
	W1205 21:29:08.684406  339185 logs.go:284] No container was found matching "coredns"
	I1205 21:29:08.684413  339185 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:29:08.684487  339185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:29:08.726867  339185 cri.go:89] found id: ""
	I1205 21:29:08.726902  339185 logs.go:282] 0 containers: []
	W1205 21:29:08.726915  339185 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:29:08.726924  339185 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:29:08.727000  339185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:29:08.765830  339185 cri.go:89] found id: ""
	I1205 21:29:08.765854  339185 logs.go:282] 0 containers: []
	W1205 21:29:08.765867  339185 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:29:08.765873  339185 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:29:08.765960  339185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:29:08.803468  339185 cri.go:89] found id: ""
	I1205 21:29:08.803505  339185 logs.go:282] 0 containers: []
	W1205 21:29:08.803525  339185 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:29:08.803534  339185 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:29:08.803601  339185 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:29:08.840508  339185 cri.go:89] found id: ""
	I1205 21:29:08.840550  339185 logs.go:282] 0 containers: []
	W1205 21:29:08.840569  339185 logs.go:284] No container was found matching "kindnet"
	I1205 21:29:08.840587  339185 logs.go:123] Gathering logs for kubelet ...
	I1205 21:29:08.840608  339185 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:29:08.896781  339185 logs.go:123] Gathering logs for dmesg ...
	I1205 21:29:08.896829  339185 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:29:08.912487  339185 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:29:08.912544  339185 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:29:09.063499  339185 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:29:09.063525  339185 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:29:09.063542  339185 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:29:09.186322  339185 logs.go:123] Gathering logs for container status ...
	I1205 21:29:09.186373  339185 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1205 21:29:09.238602  339185 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1205 21:29:09.238660  339185 out.go:270] * 
	* 
	W1205 21:29:09.238719  339185 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 21:29:09.238740  339185 out.go:270] * 
	* 
	W1205 21:29:09.239590  339185 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 21:29:09.242982  339185 out.go:201] 
	W1205 21:29:09.244120  339185 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 21:29:09.244165  339185 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 21:29:09.244191  339185 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 21:29:09.245723  339185 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-055769 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
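Editor's note: the kubeadm output above already names the node-level checks to run when the kubelet never becomes healthy, and the final suggestion in the log points at the kubelet cgroup driver. A minimal triage sketch, reusing only the commands and flags printed in the log above and assuming the profile name from this run:

	# inspect the kubelet on the minikube VM (commands taken from the kubeadm troubleshooting hints above)
	minikube ssh -p kubernetes-upgrade-055769 -- sudo systemctl status kubelet
	minikube ssh -p kubernetes-upgrade-055769 -- sudo journalctl -xeu kubelet
	# list kube containers via the crio socket shown in the log
	minikube ssh -p kubernetes-upgrade-055769 -- "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# retry the start with the cgroup-driver override suggested at the end of the log
	out/minikube-linux-amd64 start -p kubernetes-upgrade-055769 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd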
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-055769
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-055769: (1.500783326s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-055769 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-055769 status --format={{.Host}}: exit status 7 (83.236511ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
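Editor's note: the stop/verify/upgrade sequence the harness performs here can be reproduced by hand; in this run, exit status 7 from `status` accompanies the "Stopped" host state shown in the stdout above, which is why the test treats it as acceptable. A sketch using the same commands as this run:

	out/minikube-linux-amd64 stop -p kubernetes-upgrade-055769
	out/minikube-linux-amd64 -p kubernetes-upgrade-055769 status --format={{.Host}}   # prints "Stopped" and exits non-zero here
	out/minikube-linux-amd64 start -p kubernetes-upgrade-055769 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio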
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-055769 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-055769 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (36.139922175s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-055769 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-055769 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-055769 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (103.037188ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-055769] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20053
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-055769
	    minikube start -p kubernetes-upgrade-055769 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0557692 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-055769 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
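Editor's note: the downgrade refusal above lists three ways forward; the first option (recreate the cluster at the older version) is the one the message spells out in full. A minimal sketch of that path, using the profile name and commands quoted in the suggestion:

	minikube delete -p kubernetes-upgrade-055769
	minikube start -p kubernetes-upgrade-055769 --kubernetes-version=v1.20.0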
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-055769 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-055769 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m11.804027076s)
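Editor's note: after the restart succeeds, the same check the test ran after the first upgrade can confirm the control plane is still serving v1.31.2. A sketch, assuming kubectl is pointed at the kubeconfig written by this run:

	kubectl --context kubernetes-upgrade-055769 version --output=json
	out/minikube-linux-amd64 -p kubernetes-upgrade-055769 status --format={{.Host}}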
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-12-05 21:30:59.029055692 +0000 UTC m=+4301.003648352
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-055769 -n kubernetes-upgrade-055769
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-055769 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-055769 logs -n 25: (2.008333992s)
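Editor's note: the boxed suggestion earlier in this log asks for a full log file when filing an issue, whereas the post-mortem below only captures the last 25 lines. A sketch of collecting the full logs for this profile, following that suggestion:

	out/minikube-linux-amd64 -p kubernetes-upgrade-055769 logs --file=logs.txt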
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-279893 sudo cat                           | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-279893 sudo cat                           | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p kindnet-279893 sudo                               | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC |                     |
	|         | systemctl status docker --all                        |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-279893 sudo                               | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | systemctl cat docker                                 |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-279893 sudo cat                           | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-279893 sudo docker                        | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-279893 sudo                               | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC |                     |
	|         | systemctl status cri-docker                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-279893 sudo                               | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | systemctl cat cri-docker                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-279893 sudo cat                           | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p kindnet-279893 sudo cat                           | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-279893 sudo                               | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-279893 sudo                               | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC |                     |
	|         | systemctl status containerd                          |                           |         |         |                     |                     |
	|         | --all --full --no-pager                              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-279893 sudo                               | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | systemctl cat containerd                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p kindnet-279893 sudo cat                           | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p kindnet-279893 sudo cat                           | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-055769                         | kubernetes-upgrade-055769 | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                         |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p kindnet-279893 sudo                               | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | containerd config dump                               |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-055769                         | kubernetes-upgrade-055769 | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:30 UTC |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p kindnet-279893 sudo                               | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | systemctl status crio --all                          |                           |         |         |                     |                     |
	|         | --full --no-pager                                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-279893 sudo                               | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | systemctl cat crio --no-pager                        |                           |         |         |                     |                     |
	| ssh     | -p kindnet-279893 sudo find                          | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-279893 sudo crio                          | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p kindnet-279893                                    | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	| start   | -p enable-default-cni-279893                         | enable-default-cni-279893 | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC |                     |
	|         | --memory=3072                                        |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                            |                           |         |         |                     |                     |
	|         | --driver=kvm2                                        |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p calico-279893 pgrep -a                            | calico-279893             | jenkins | v1.34.0 | 05 Dec 24 21:30 UTC | 05 Dec 24 21:30 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
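	
	Note: the "start -p enable-default-cni-279893" entry above spans several table rows; it is reassembled below as a single command line for readability. This is a reconstruction from the table (using the out/minikube-linux-amd64 binary recorded as MINIKUBE_BIN in the log that follows), not a verbatim capture of this run's shell:
	
	  out/minikube-linux-amd64 start -p enable-default-cni-279893 --memory=3072 --alsologtostderr --wait=true \
	    --wait-timeout=15m --enable-default-cni=true --driver=kvm2 --container-runtime=crio
	
	As the start_flags.go warning in the "Last Start" log below notes, --enable-default-cni is deprecated and is mapped to --cni=bridge.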
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 21:29:49
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 21:29:49.583619  346676 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:29:49.583754  346676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:29:49.583764  346676 out.go:358] Setting ErrFile to fd 2...
	I1205 21:29:49.583768  346676 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:29:49.583991  346676 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 21:29:49.584654  346676 out.go:352] Setting JSON to false
	I1205 21:29:49.586017  346676 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":15138,"bootTime":1733419052,"procs":295,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 21:29:49.586150  346676 start.go:139] virtualization: kvm guest
	I1205 21:29:49.588294  346676 out.go:177] * [enable-default-cni-279893] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 21:29:49.590065  346676 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 21:29:49.590122  346676 notify.go:220] Checking for updates...
	I1205 21:29:49.592590  346676 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 21:29:49.593942  346676 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:29:49.595327  346676 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:29:49.596669  346676 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 21:29:49.597997  346676 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 21:29:49.599712  346676 config.go:182] Loaded profile config "calico-279893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:29:49.599845  346676 config.go:182] Loaded profile config "custom-flannel-279893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:29:49.599951  346676 config.go:182] Loaded profile config "kubernetes-upgrade-055769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:29:49.600071  346676 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 21:29:49.641712  346676 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 21:29:49.643116  346676 start.go:297] selected driver: kvm2
	I1205 21:29:49.643137  346676 start.go:901] validating driver "kvm2" against <nil>
	I1205 21:29:49.643150  346676 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 21:29:49.644554  346676 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:29:49.644651  346676 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 21:29:49.662179  346676 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 21:29:49.662240  346676 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	E1205 21:29:49.662496  346676 start_flags.go:464] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I1205 21:29:49.662520  346676 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:29:49.662560  346676 cni.go:84] Creating CNI manager for "bridge"
	I1205 21:29:49.662566  346676 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 21:29:49.662622  346676 start.go:340] cluster config:
	{Name:enable-default-cni-279893 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-279893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:29:49.662780  346676 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:29:49.664584  346676 out.go:177] * Starting "enable-default-cni-279893" primary control-plane node in "enable-default-cni-279893" cluster
	I1205 21:29:47.396261  346445 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:29:47.396317  346445 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 21:29:47.396332  346445 cache.go:56] Caching tarball of preloaded images
	I1205 21:29:47.396451  346445 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 21:29:47.396467  346445 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 21:29:47.396594  346445 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/config.json ...
	I1205 21:29:47.396896  346445 start.go:360] acquireMachinesLock for kubernetes-upgrade-055769: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:29:51.266156  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:51.266817  344606 main.go:141] libmachine: (calico-279893) DBG | unable to find current IP address of domain calico-279893 in network mk-calico-279893
	I1205 21:29:51.266837  344606 main.go:141] libmachine: (calico-279893) DBG | I1205 21:29:51.266761  344819 retry.go:31] will retry after 3.824987721s: waiting for machine to come up
	I1205 21:29:49.665876  346676 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:29:49.665999  346676 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 21:29:49.666014  346676 cache.go:56] Caching tarball of preloaded images
	I1205 21:29:49.666108  346676 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 21:29:49.666121  346676 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 21:29:49.666222  346676 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/config.json ...
	I1205 21:29:49.666241  346676 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/config.json: {Name:mk94bfac3a279c305049d7d9d8dd157ea72982a8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:29:49.666384  346676 start.go:360] acquireMachinesLock for enable-default-cni-279893: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:29:56.630592  345195 start.go:364] duration metric: took 21.610937265s to acquireMachinesLock for "custom-flannel-279893"
	I1205 21:29:56.630687  345195 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-279893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-279893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:29:56.630853  345195 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 21:29:55.095887  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:55.096518  344606 main.go:141] libmachine: (calico-279893) Found IP for machine: 192.168.39.206
	I1205 21:29:55.096543  344606 main.go:141] libmachine: (calico-279893) Reserving static IP address...
	I1205 21:29:55.096557  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has current primary IP address 192.168.39.206 and MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:55.097038  344606 main.go:141] libmachine: (calico-279893) DBG | unable to find host DHCP lease matching {name: "calico-279893", mac: "52:54:00:a8:9d:0f", ip: "192.168.39.206"} in network mk-calico-279893
	I1205 21:29:55.187071  344606 main.go:141] libmachine: (calico-279893) DBG | Getting to WaitForSSH function...
	I1205 21:29:55.187105  344606 main.go:141] libmachine: (calico-279893) Reserved static IP address: 192.168.39.206
	I1205 21:29:55.187118  344606 main.go:141] libmachine: (calico-279893) Waiting for SSH to be available...
	I1205 21:29:55.190298  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:55.190739  344606 main.go:141] libmachine: (calico-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:9d:0f", ip: ""} in network mk-calico-279893: {Iface:virbr1 ExpiryTime:2024-12-05 22:29:47 +0000 UTC Type:0 Mac:52:54:00:a8:9d:0f Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:minikube Clientid:01:52:54:00:a8:9d:0f}
	I1205 21:29:55.190777  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined IP address 192.168.39.206 and MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:55.190920  344606 main.go:141] libmachine: (calico-279893) DBG | Using SSH client type: external
	I1205 21:29:55.190937  344606 main.go:141] libmachine: (calico-279893) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/calico-279893/id_rsa (-rw-------)
	I1205 21:29:55.190981  344606 main.go:141] libmachine: (calico-279893) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.206 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/calico-279893/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:29:55.190997  344606 main.go:141] libmachine: (calico-279893) DBG | About to run SSH command:
	I1205 21:29:55.191028  344606 main.go:141] libmachine: (calico-279893) DBG | exit 0
	I1205 21:29:55.322397  344606 main.go:141] libmachine: (calico-279893) DBG | SSH cmd err, output: <nil>: 
	I1205 21:29:55.322704  344606 main.go:141] libmachine: (calico-279893) KVM machine creation complete!
	I1205 21:29:55.323008  344606 main.go:141] libmachine: (calico-279893) Calling .GetConfigRaw
	I1205 21:29:55.323653  344606 main.go:141] libmachine: (calico-279893) Calling .DriverName
	I1205 21:29:55.323916  344606 main.go:141] libmachine: (calico-279893) Calling .DriverName
	I1205 21:29:55.324113  344606 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 21:29:55.324134  344606 main.go:141] libmachine: (calico-279893) Calling .GetState
	I1205 21:29:55.325742  344606 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 21:29:55.325756  344606 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 21:29:55.325761  344606 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 21:29:55.325766  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHHostname
	I1205 21:29:55.328140  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:55.328560  344606 main.go:141] libmachine: (calico-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:9d:0f", ip: ""} in network mk-calico-279893: {Iface:virbr1 ExpiryTime:2024-12-05 22:29:47 +0000 UTC Type:0 Mac:52:54:00:a8:9d:0f Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:calico-279893 Clientid:01:52:54:00:a8:9d:0f}
	I1205 21:29:55.328619  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined IP address 192.168.39.206 and MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:55.328727  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHPort
	I1205 21:29:55.328938  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHKeyPath
	I1205 21:29:55.329126  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHKeyPath
	I1205 21:29:55.329295  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHUsername
	I1205 21:29:55.329468  344606 main.go:141] libmachine: Using SSH client type: native
	I1205 21:29:55.329719  344606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1205 21:29:55.329733  344606 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 21:29:55.441381  344606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:29:55.441408  344606 main.go:141] libmachine: Detecting the provisioner...
	I1205 21:29:55.441420  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHHostname
	I1205 21:29:55.444363  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:55.444763  344606 main.go:141] libmachine: (calico-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:9d:0f", ip: ""} in network mk-calico-279893: {Iface:virbr1 ExpiryTime:2024-12-05 22:29:47 +0000 UTC Type:0 Mac:52:54:00:a8:9d:0f Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:calico-279893 Clientid:01:52:54:00:a8:9d:0f}
	I1205 21:29:55.444797  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined IP address 192.168.39.206 and MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:55.444975  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHPort
	I1205 21:29:55.445199  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHKeyPath
	I1205 21:29:55.445372  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHKeyPath
	I1205 21:29:55.445479  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHUsername
	I1205 21:29:55.445631  344606 main.go:141] libmachine: Using SSH client type: native
	I1205 21:29:55.445845  344606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1205 21:29:55.445860  344606 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 21:29:55.555015  344606 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 21:29:55.555107  344606 main.go:141] libmachine: found compatible host: buildroot
	I1205 21:29:55.555119  344606 main.go:141] libmachine: Provisioning with buildroot...
	I1205 21:29:55.555127  344606 main.go:141] libmachine: (calico-279893) Calling .GetMachineName
	I1205 21:29:55.555417  344606 buildroot.go:166] provisioning hostname "calico-279893"
	I1205 21:29:55.555437  344606 main.go:141] libmachine: (calico-279893) Calling .GetMachineName
	I1205 21:29:55.555663  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHHostname
	I1205 21:29:55.558427  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:55.558846  344606 main.go:141] libmachine: (calico-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:9d:0f", ip: ""} in network mk-calico-279893: {Iface:virbr1 ExpiryTime:2024-12-05 22:29:47 +0000 UTC Type:0 Mac:52:54:00:a8:9d:0f Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:calico-279893 Clientid:01:52:54:00:a8:9d:0f}
	I1205 21:29:55.558873  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined IP address 192.168.39.206 and MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:55.559032  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHPort
	I1205 21:29:55.559244  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHKeyPath
	I1205 21:29:55.559394  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHKeyPath
	I1205 21:29:55.559516  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHUsername
	I1205 21:29:55.559666  344606 main.go:141] libmachine: Using SSH client type: native
	I1205 21:29:55.559876  344606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1205 21:29:55.559890  344606 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-279893 && echo "calico-279893" | sudo tee /etc/hostname
	I1205 21:29:55.683661  344606 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-279893
	
	I1205 21:29:55.683692  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHHostname
	I1205 21:29:55.686635  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:55.687008  344606 main.go:141] libmachine: (calico-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:9d:0f", ip: ""} in network mk-calico-279893: {Iface:virbr1 ExpiryTime:2024-12-05 22:29:47 +0000 UTC Type:0 Mac:52:54:00:a8:9d:0f Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:calico-279893 Clientid:01:52:54:00:a8:9d:0f}
	I1205 21:29:55.687038  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined IP address 192.168.39.206 and MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:55.687182  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHPort
	I1205 21:29:55.687374  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHKeyPath
	I1205 21:29:55.687607  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHKeyPath
	I1205 21:29:55.687765  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHUsername
	I1205 21:29:55.687925  344606 main.go:141] libmachine: Using SSH client type: native
	I1205 21:29:55.688161  344606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1205 21:29:55.688180  344606 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-279893' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-279893/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-279893' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:29:55.807559  344606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:29:55.807604  344606 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:29:55.807630  344606 buildroot.go:174] setting up certificates
	I1205 21:29:55.807666  344606 provision.go:84] configureAuth start
	I1205 21:29:55.807686  344606 main.go:141] libmachine: (calico-279893) Calling .GetMachineName
	I1205 21:29:55.808000  344606 main.go:141] libmachine: (calico-279893) Calling .GetIP
	I1205 21:29:55.810986  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:55.811390  344606 main.go:141] libmachine: (calico-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:9d:0f", ip: ""} in network mk-calico-279893: {Iface:virbr1 ExpiryTime:2024-12-05 22:29:47 +0000 UTC Type:0 Mac:52:54:00:a8:9d:0f Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:calico-279893 Clientid:01:52:54:00:a8:9d:0f}
	I1205 21:29:55.811456  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined IP address 192.168.39.206 and MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:55.811683  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHHostname
	I1205 21:29:55.814258  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:55.814590  344606 main.go:141] libmachine: (calico-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:9d:0f", ip: ""} in network mk-calico-279893: {Iface:virbr1 ExpiryTime:2024-12-05 22:29:47 +0000 UTC Type:0 Mac:52:54:00:a8:9d:0f Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:calico-279893 Clientid:01:52:54:00:a8:9d:0f}
	I1205 21:29:55.814617  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined IP address 192.168.39.206 and MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:55.814720  344606 provision.go:143] copyHostCerts
	I1205 21:29:55.814779  344606 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:29:55.814799  344606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:29:55.814863  344606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:29:55.814958  344606 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:29:55.814966  344606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:29:55.814985  344606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:29:55.815034  344606 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:29:55.815041  344606 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:29:55.815057  344606 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:29:55.815105  344606 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.calico-279893 san=[127.0.0.1 192.168.39.206 calico-279893 localhost minikube]
	I1205 21:29:55.985576  344606 provision.go:177] copyRemoteCerts
	I1205 21:29:55.985666  344606 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:29:55.985702  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHHostname
	I1205 21:29:55.988510  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:55.988786  344606 main.go:141] libmachine: (calico-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:9d:0f", ip: ""} in network mk-calico-279893: {Iface:virbr1 ExpiryTime:2024-12-05 22:29:47 +0000 UTC Type:0 Mac:52:54:00:a8:9d:0f Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:calico-279893 Clientid:01:52:54:00:a8:9d:0f}
	I1205 21:29:55.988827  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined IP address 192.168.39.206 and MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:55.989110  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHPort
	I1205 21:29:55.989334  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHKeyPath
	I1205 21:29:55.989537  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHUsername
	I1205 21:29:55.989674  344606 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/calico-279893/id_rsa Username:docker}
	I1205 21:29:56.076838  344606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:29:56.103333  344606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 21:29:56.127742  344606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 21:29:56.151525  344606 provision.go:87] duration metric: took 343.837145ms to configureAuth
	I1205 21:29:56.151563  344606 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:29:56.151758  344606 config.go:182] Loaded profile config "calico-279893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:29:56.151899  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHHostname
	I1205 21:29:56.154549  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:56.154962  344606 main.go:141] libmachine: (calico-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:9d:0f", ip: ""} in network mk-calico-279893: {Iface:virbr1 ExpiryTime:2024-12-05 22:29:47 +0000 UTC Type:0 Mac:52:54:00:a8:9d:0f Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:calico-279893 Clientid:01:52:54:00:a8:9d:0f}
	I1205 21:29:56.154995  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined IP address 192.168.39.206 and MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:56.155240  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHPort
	I1205 21:29:56.155464  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHKeyPath
	I1205 21:29:56.155632  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHKeyPath
	I1205 21:29:56.155780  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHUsername
	I1205 21:29:56.155948  344606 main.go:141] libmachine: Using SSH client type: native
	I1205 21:29:56.156159  344606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1205 21:29:56.156175  344606 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:29:56.374737  344606 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:29:56.374782  344606 main.go:141] libmachine: Checking connection to Docker...
	I1205 21:29:56.374796  344606 main.go:141] libmachine: (calico-279893) Calling .GetURL
	I1205 21:29:56.376209  344606 main.go:141] libmachine: (calico-279893) DBG | Using libvirt version 6000000
	I1205 21:29:56.378510  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:56.378867  344606 main.go:141] libmachine: (calico-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:9d:0f", ip: ""} in network mk-calico-279893: {Iface:virbr1 ExpiryTime:2024-12-05 22:29:47 +0000 UTC Type:0 Mac:52:54:00:a8:9d:0f Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:calico-279893 Clientid:01:52:54:00:a8:9d:0f}
	I1205 21:29:56.378903  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined IP address 192.168.39.206 and MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:56.379069  344606 main.go:141] libmachine: Docker is up and running!
	I1205 21:29:56.379084  344606 main.go:141] libmachine: Reticulating splines...
	I1205 21:29:56.379091  344606 client.go:171] duration metric: took 25.612647671s to LocalClient.Create
	I1205 21:29:56.379113  344606 start.go:167] duration metric: took 25.61272769s to libmachine.API.Create "calico-279893"
	I1205 21:29:56.379123  344606 start.go:293] postStartSetup for "calico-279893" (driver="kvm2")
	I1205 21:29:56.379133  344606 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:29:56.379154  344606 main.go:141] libmachine: (calico-279893) Calling .DriverName
	I1205 21:29:56.379501  344606 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:29:56.379533  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHHostname
	I1205 21:29:56.381853  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:56.382158  344606 main.go:141] libmachine: (calico-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:9d:0f", ip: ""} in network mk-calico-279893: {Iface:virbr1 ExpiryTime:2024-12-05 22:29:47 +0000 UTC Type:0 Mac:52:54:00:a8:9d:0f Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:calico-279893 Clientid:01:52:54:00:a8:9d:0f}
	I1205 21:29:56.382190  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined IP address 192.168.39.206 and MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:56.382357  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHPort
	I1205 21:29:56.382571  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHKeyPath
	I1205 21:29:56.382744  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHUsername
	I1205 21:29:56.382889  344606 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/calico-279893/id_rsa Username:docker}
	I1205 21:29:56.469914  344606 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:29:56.474404  344606 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:29:56.474442  344606 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:29:56.474513  344606 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:29:56.474724  344606 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:29:56.474897  344606 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:29:56.486583  344606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:29:56.511008  344606 start.go:296] duration metric: took 131.866662ms for postStartSetup
	I1205 21:29:56.511083  344606 main.go:141] libmachine: (calico-279893) Calling .GetConfigRaw
	I1205 21:29:56.511717  344606 main.go:141] libmachine: (calico-279893) Calling .GetIP
	I1205 21:29:56.514595  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:56.514955  344606 main.go:141] libmachine: (calico-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:9d:0f", ip: ""} in network mk-calico-279893: {Iface:virbr1 ExpiryTime:2024-12-05 22:29:47 +0000 UTC Type:0 Mac:52:54:00:a8:9d:0f Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:calico-279893 Clientid:01:52:54:00:a8:9d:0f}
	I1205 21:29:56.514988  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined IP address 192.168.39.206 and MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:56.515305  344606 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/config.json ...
	I1205 21:29:56.515499  344606 start.go:128] duration metric: took 25.776034321s to createHost
	I1205 21:29:56.515524  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHHostname
	I1205 21:29:56.517679  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:56.518090  344606 main.go:141] libmachine: (calico-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:9d:0f", ip: ""} in network mk-calico-279893: {Iface:virbr1 ExpiryTime:2024-12-05 22:29:47 +0000 UTC Type:0 Mac:52:54:00:a8:9d:0f Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:calico-279893 Clientid:01:52:54:00:a8:9d:0f}
	I1205 21:29:56.518113  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined IP address 192.168.39.206 and MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:56.518379  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHPort
	I1205 21:29:56.518578  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHKeyPath
	I1205 21:29:56.518746  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHKeyPath
	I1205 21:29:56.518847  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHUsername
	I1205 21:29:56.519015  344606 main.go:141] libmachine: Using SSH client type: native
	I1205 21:29:56.519196  344606 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.206 22 <nil> <nil>}
	I1205 21:29:56.519214  344606 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:29:56.630396  344606 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434196.601677878
	
	I1205 21:29:56.630423  344606 fix.go:216] guest clock: 1733434196.601677878
	I1205 21:29:56.630433  344606 fix.go:229] Guest: 2024-12-05 21:29:56.601677878 +0000 UTC Remote: 2024-12-05 21:29:56.515512517 +0000 UTC m=+43.221132975 (delta=86.165361ms)
	I1205 21:29:56.630461  344606 fix.go:200] guest clock delta is within tolerance: 86.165361ms
	I1205 21:29:56.630469  344606 start.go:83] releasing machines lock for "calico-279893", held for 25.891224584s
	I1205 21:29:56.630498  344606 main.go:141] libmachine: (calico-279893) Calling .DriverName
	I1205 21:29:56.630810  344606 main.go:141] libmachine: (calico-279893) Calling .GetIP
	I1205 21:29:56.633597  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:56.633956  344606 main.go:141] libmachine: (calico-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:9d:0f", ip: ""} in network mk-calico-279893: {Iface:virbr1 ExpiryTime:2024-12-05 22:29:47 +0000 UTC Type:0 Mac:52:54:00:a8:9d:0f Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:calico-279893 Clientid:01:52:54:00:a8:9d:0f}
	I1205 21:29:56.633991  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined IP address 192.168.39.206 and MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:56.634246  344606 main.go:141] libmachine: (calico-279893) Calling .DriverName
	I1205 21:29:56.634725  344606 main.go:141] libmachine: (calico-279893) Calling .DriverName
	I1205 21:29:56.634932  344606 main.go:141] libmachine: (calico-279893) Calling .DriverName
	I1205 21:29:56.635012  344606 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:29:56.635059  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHHostname
	I1205 21:29:56.635152  344606 ssh_runner.go:195] Run: cat /version.json
	I1205 21:29:56.635180  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHHostname
	I1205 21:29:56.638310  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:56.638341  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:56.638727  344606 main.go:141] libmachine: (calico-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:9d:0f", ip: ""} in network mk-calico-279893: {Iface:virbr1 ExpiryTime:2024-12-05 22:29:47 +0000 UTC Type:0 Mac:52:54:00:a8:9d:0f Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:calico-279893 Clientid:01:52:54:00:a8:9d:0f}
	I1205 21:29:56.638761  344606 main.go:141] libmachine: (calico-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:9d:0f", ip: ""} in network mk-calico-279893: {Iface:virbr1 ExpiryTime:2024-12-05 22:29:47 +0000 UTC Type:0 Mac:52:54:00:a8:9d:0f Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:calico-279893 Clientid:01:52:54:00:a8:9d:0f}
	I1205 21:29:56.638788  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined IP address 192.168.39.206 and MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:56.638946  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined IP address 192.168.39.206 and MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:56.639018  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHPort
	I1205 21:29:56.639130  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHPort
	I1205 21:29:56.639203  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHKeyPath
	I1205 21:29:56.639312  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHKeyPath
	I1205 21:29:56.639375  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHUsername
	I1205 21:29:56.639470  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHUsername
	I1205 21:29:56.639568  344606 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/calico-279893/id_rsa Username:docker}
	I1205 21:29:56.639605  344606 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/calico-279893/id_rsa Username:docker}
	I1205 21:29:56.744382  344606 ssh_runner.go:195] Run: systemctl --version
	I1205 21:29:56.751495  344606 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:29:56.920413  344606 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:29:56.926555  344606 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:29:56.926649  344606 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:29:56.943048  344606 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:29:56.943071  344606 start.go:495] detecting cgroup driver to use...
	I1205 21:29:56.943137  344606 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:29:56.961106  344606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:29:56.975489  344606 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:29:56.975573  344606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:29:56.990735  344606 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:29:57.005348  344606 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:29:57.127743  344606 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:29:57.294416  344606 docker.go:233] disabling docker service ...
	I1205 21:29:57.294484  344606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:29:57.311753  344606 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:29:57.325379  344606 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:29:57.467703  344606 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:29:57.612412  344606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:29:57.627195  344606 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:29:57.646773  344606 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:29:57.646857  344606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:29:57.657981  344606 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:29:57.658068  344606 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:29:57.668782  344606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:29:57.679883  344606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:29:57.690117  344606 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:29:57.700647  344606 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:29:57.710961  344606 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:29:57.731079  344606 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:29:57.742065  344606 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:29:57.753210  344606 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:29:57.753271  344606 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:29:57.766524  344606 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:29:57.776970  344606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:29:57.882980  344606 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:29:57.985427  344606 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:29:57.985526  344606 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:29:57.990301  344606 start.go:563] Will wait 60s for crictl version
	I1205 21:29:57.990358  344606 ssh_runner.go:195] Run: which crictl
	I1205 21:29:57.994207  344606 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:29:58.033863  344606 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:29:58.033977  344606 ssh_runner.go:195] Run: crio --version
	I1205 21:29:58.063203  344606 ssh_runner.go:195] Run: crio --version
	I1205 21:29:58.094010  344606 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 21:29:58.095312  344606 main.go:141] libmachine: (calico-279893) Calling .GetIP
	I1205 21:29:58.098723  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:58.099201  344606 main.go:141] libmachine: (calico-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:9d:0f", ip: ""} in network mk-calico-279893: {Iface:virbr1 ExpiryTime:2024-12-05 22:29:47 +0000 UTC Type:0 Mac:52:54:00:a8:9d:0f Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:calico-279893 Clientid:01:52:54:00:a8:9d:0f}
	I1205 21:29:58.099223  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined IP address 192.168.39.206 and MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:29:58.099529  344606 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 21:29:58.103688  344606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:29:58.118451  344606 kubeadm.go:883] updating cluster {Name:calico-279893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-279893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:29:58.118699  344606 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:29:58.118779  344606 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:29:58.153649  344606 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 21:29:58.153717  344606 ssh_runner.go:195] Run: which lz4
	I1205 21:29:58.157786  344606 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:29:58.162006  344606 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:29:58.162040  344606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 21:29:56.632671  345195 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 21:29:56.632904  345195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:29:56.632949  345195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:29:56.650985  345195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36477
	I1205 21:29:56.651558  345195 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:29:56.652124  345195 main.go:141] libmachine: Using API Version  1
	I1205 21:29:56.652176  345195 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:29:56.652546  345195 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:29:56.652757  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetMachineName
	I1205 21:29:56.652914  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .DriverName
	I1205 21:29:56.653077  345195 start.go:159] libmachine.API.Create for "custom-flannel-279893" (driver="kvm2")
	I1205 21:29:56.653109  345195 client.go:168] LocalClient.Create starting
	I1205 21:29:56.653150  345195 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem
	I1205 21:29:56.653195  345195 main.go:141] libmachine: Decoding PEM data...
	I1205 21:29:56.653216  345195 main.go:141] libmachine: Parsing certificate...
	I1205 21:29:56.653303  345195 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem
	I1205 21:29:56.653330  345195 main.go:141] libmachine: Decoding PEM data...
	I1205 21:29:56.653352  345195 main.go:141] libmachine: Parsing certificate...
	I1205 21:29:56.653378  345195 main.go:141] libmachine: Running pre-create checks...
	I1205 21:29:56.653391  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .PreCreateCheck
	I1205 21:29:56.653870  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetConfigRaw
	I1205 21:29:56.654336  345195 main.go:141] libmachine: Creating machine...
	I1205 21:29:56.654357  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .Create
	I1205 21:29:56.654527  345195 main.go:141] libmachine: (custom-flannel-279893) Creating KVM machine...
	I1205 21:29:56.655887  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | found existing default KVM network
	I1205 21:29:56.657484  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | I1205 21:29:56.657311  346758 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:25:d0:45} reservation:<nil>}
	I1205 21:29:56.658316  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | I1205 21:29:56.658208  346758 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:3a:33:7e} reservation:<nil>}
	I1205 21:29:56.659303  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | I1205 21:29:56.659185  346758 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000307060}
	I1205 21:29:56.659344  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | created network xml: 
	I1205 21:29:56.659361  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | <network>
	I1205 21:29:56.659371  345195 main.go:141] libmachine: (custom-flannel-279893) DBG |   <name>mk-custom-flannel-279893</name>
	I1205 21:29:56.659389  345195 main.go:141] libmachine: (custom-flannel-279893) DBG |   <dns enable='no'/>
	I1205 21:29:56.659407  345195 main.go:141] libmachine: (custom-flannel-279893) DBG |   
	I1205 21:29:56.659423  345195 main.go:141] libmachine: (custom-flannel-279893) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1205 21:29:56.659436  345195 main.go:141] libmachine: (custom-flannel-279893) DBG |     <dhcp>
	I1205 21:29:56.659456  345195 main.go:141] libmachine: (custom-flannel-279893) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1205 21:29:56.659469  345195 main.go:141] libmachine: (custom-flannel-279893) DBG |     </dhcp>
	I1205 21:29:56.659483  345195 main.go:141] libmachine: (custom-flannel-279893) DBG |   </ip>
	I1205 21:29:56.659506  345195 main.go:141] libmachine: (custom-flannel-279893) DBG |   
	I1205 21:29:56.659521  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | </network>
	I1205 21:29:56.659537  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | 
	I1205 21:29:56.664674  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | trying to create private KVM network mk-custom-flannel-279893 192.168.61.0/24...
	I1205 21:29:56.742337  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | private KVM network mk-custom-flannel-279893 192.168.61.0/24 created
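The network.go/libmachine steps above define and start a dedicated libvirt network for this profile. A quick way to double-check the result from the hypervisor host is the standard virsh tooling; a minimal sketch, assuming virsh is installed and pointed at the same qemu:///system instance the kvm2 driver uses:

    # list all libvirt networks; mk-custom-flannel-279893 should show as active
    virsh --connect qemu:///system net-list --all
    # dump the generated XML and compare it with the definition logged above
    virsh --connect qemu:///system net-dumpxml mk-custom-flannel-279893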
	I1205 21:29:56.742377  345195 main.go:141] libmachine: (custom-flannel-279893) Setting up store path in /home/jenkins/minikube-integration/20053-293485/.minikube/machines/custom-flannel-279893 ...
	I1205 21:29:56.742393  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | I1205 21:29:56.742258  346758 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:29:56.742414  345195 main.go:141] libmachine: (custom-flannel-279893) Building disk image from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 21:29:56.742474  345195 main.go:141] libmachine: (custom-flannel-279893) Downloading /home/jenkins/minikube-integration/20053-293485/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 21:29:57.043393  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | I1205 21:29:57.043232  346758 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/custom-flannel-279893/id_rsa...
	I1205 21:29:57.240307  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | I1205 21:29:57.240144  346758 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/custom-flannel-279893/custom-flannel-279893.rawdisk...
	I1205 21:29:57.240340  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | Writing magic tar header
	I1205 21:29:57.240369  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | Writing SSH key tar header
	I1205 21:29:57.240386  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | I1205 21:29:57.240287  346758 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/custom-flannel-279893 ...
	I1205 21:29:57.240402  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/custom-flannel-279893
	I1205 21:29:57.240513  345195 main.go:141] libmachine: (custom-flannel-279893) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/custom-flannel-279893 (perms=drwx------)
	I1205 21:29:57.240538  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines
	I1205 21:29:57.240545  345195 main.go:141] libmachine: (custom-flannel-279893) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines (perms=drwxr-xr-x)
	I1205 21:29:57.240561  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:29:57.240571  345195 main.go:141] libmachine: (custom-flannel-279893) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube (perms=drwxr-xr-x)
	I1205 21:29:57.240600  345195 main.go:141] libmachine: (custom-flannel-279893) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485 (perms=drwxrwxr-x)
	I1205 21:29:57.240610  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485
	I1205 21:29:57.240616  345195 main.go:141] libmachine: (custom-flannel-279893) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 21:29:57.240624  345195 main.go:141] libmachine: (custom-flannel-279893) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 21:29:57.240630  345195 main.go:141] libmachine: (custom-flannel-279893) Creating domain...
	I1205 21:29:57.240641  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 21:29:57.240654  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | Checking permissions on dir: /home/jenkins
	I1205 21:29:57.240662  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | Checking permissions on dir: /home
	I1205 21:29:57.240670  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | Skipping /home - not owner
	I1205 21:29:57.241843  345195 main.go:141] libmachine: (custom-flannel-279893) define libvirt domain using xml: 
	I1205 21:29:57.241875  345195 main.go:141] libmachine: (custom-flannel-279893) <domain type='kvm'>
	I1205 21:29:57.241888  345195 main.go:141] libmachine: (custom-flannel-279893)   <name>custom-flannel-279893</name>
	I1205 21:29:57.241896  345195 main.go:141] libmachine: (custom-flannel-279893)   <memory unit='MiB'>3072</memory>
	I1205 21:29:57.241949  345195 main.go:141] libmachine: (custom-flannel-279893)   <vcpu>2</vcpu>
	I1205 21:29:57.241973  345195 main.go:141] libmachine: (custom-flannel-279893)   <features>
	I1205 21:29:57.241979  345195 main.go:141] libmachine: (custom-flannel-279893)     <acpi/>
	I1205 21:29:57.241987  345195 main.go:141] libmachine: (custom-flannel-279893)     <apic/>
	I1205 21:29:57.241992  345195 main.go:141] libmachine: (custom-flannel-279893)     <pae/>
	I1205 21:29:57.241998  345195 main.go:141] libmachine: (custom-flannel-279893)     
	I1205 21:29:57.242007  345195 main.go:141] libmachine: (custom-flannel-279893)   </features>
	I1205 21:29:57.242019  345195 main.go:141] libmachine: (custom-flannel-279893)   <cpu mode='host-passthrough'>
	I1205 21:29:57.242028  345195 main.go:141] libmachine: (custom-flannel-279893)   
	I1205 21:29:57.242034  345195 main.go:141] libmachine: (custom-flannel-279893)   </cpu>
	I1205 21:29:57.242046  345195 main.go:141] libmachine: (custom-flannel-279893)   <os>
	I1205 21:29:57.242053  345195 main.go:141] libmachine: (custom-flannel-279893)     <type>hvm</type>
	I1205 21:29:57.242061  345195 main.go:141] libmachine: (custom-flannel-279893)     <boot dev='cdrom'/>
	I1205 21:29:57.242065  345195 main.go:141] libmachine: (custom-flannel-279893)     <boot dev='hd'/>
	I1205 21:29:57.242071  345195 main.go:141] libmachine: (custom-flannel-279893)     <bootmenu enable='no'/>
	I1205 21:29:57.242081  345195 main.go:141] libmachine: (custom-flannel-279893)   </os>
	I1205 21:29:57.242090  345195 main.go:141] libmachine: (custom-flannel-279893)   <devices>
	I1205 21:29:57.242104  345195 main.go:141] libmachine: (custom-flannel-279893)     <disk type='file' device='cdrom'>
	I1205 21:29:57.242123  345195 main.go:141] libmachine: (custom-flannel-279893)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/custom-flannel-279893/boot2docker.iso'/>
	I1205 21:29:57.242136  345195 main.go:141] libmachine: (custom-flannel-279893)       <target dev='hdc' bus='scsi'/>
	I1205 21:29:57.242153  345195 main.go:141] libmachine: (custom-flannel-279893)       <readonly/>
	I1205 21:29:57.242163  345195 main.go:141] libmachine: (custom-flannel-279893)     </disk>
	I1205 21:29:57.242172  345195 main.go:141] libmachine: (custom-flannel-279893)     <disk type='file' device='disk'>
	I1205 21:29:57.242190  345195 main.go:141] libmachine: (custom-flannel-279893)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 21:29:57.242208  345195 main.go:141] libmachine: (custom-flannel-279893)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/custom-flannel-279893/custom-flannel-279893.rawdisk'/>
	I1205 21:29:57.242221  345195 main.go:141] libmachine: (custom-flannel-279893)       <target dev='hda' bus='virtio'/>
	I1205 21:29:57.242232  345195 main.go:141] libmachine: (custom-flannel-279893)     </disk>
	I1205 21:29:57.242242  345195 main.go:141] libmachine: (custom-flannel-279893)     <interface type='network'>
	I1205 21:29:57.242252  345195 main.go:141] libmachine: (custom-flannel-279893)       <source network='mk-custom-flannel-279893'/>
	I1205 21:29:57.242266  345195 main.go:141] libmachine: (custom-flannel-279893)       <model type='virtio'/>
	I1205 21:29:57.242277  345195 main.go:141] libmachine: (custom-flannel-279893)     </interface>
	I1205 21:29:57.242294  345195 main.go:141] libmachine: (custom-flannel-279893)     <interface type='network'>
	I1205 21:29:57.242304  345195 main.go:141] libmachine: (custom-flannel-279893)       <source network='default'/>
	I1205 21:29:57.242314  345195 main.go:141] libmachine: (custom-flannel-279893)       <model type='virtio'/>
	I1205 21:29:57.242322  345195 main.go:141] libmachine: (custom-flannel-279893)     </interface>
	I1205 21:29:57.242333  345195 main.go:141] libmachine: (custom-flannel-279893)     <serial type='pty'>
	I1205 21:29:57.242344  345195 main.go:141] libmachine: (custom-flannel-279893)       <target port='0'/>
	I1205 21:29:57.242353  345195 main.go:141] libmachine: (custom-flannel-279893)     </serial>
	I1205 21:29:57.242361  345195 main.go:141] libmachine: (custom-flannel-279893)     <console type='pty'>
	I1205 21:29:57.242373  345195 main.go:141] libmachine: (custom-flannel-279893)       <target type='serial' port='0'/>
	I1205 21:29:57.242380  345195 main.go:141] libmachine: (custom-flannel-279893)     </console>
	I1205 21:29:57.242391  345195 main.go:141] libmachine: (custom-flannel-279893)     <rng model='virtio'>
	I1205 21:29:57.242403  345195 main.go:141] libmachine: (custom-flannel-279893)       <backend model='random'>/dev/random</backend>
	I1205 21:29:57.242415  345195 main.go:141] libmachine: (custom-flannel-279893)     </rng>
	I1205 21:29:57.242423  345195 main.go:141] libmachine: (custom-flannel-279893)     
	I1205 21:29:57.242433  345195 main.go:141] libmachine: (custom-flannel-279893)     
	I1205 21:29:57.242441  345195 main.go:141] libmachine: (custom-flannel-279893)   </devices>
	I1205 21:29:57.242451  345195 main.go:141] libmachine: (custom-flannel-279893) </domain>
	I1205 21:29:57.242464  345195 main.go:141] libmachine: (custom-flannel-279893) 
	I1205 21:29:57.246985  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:50:cb:63 in network default
	I1205 21:29:57.247876  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:29:57.247900  345195 main.go:141] libmachine: (custom-flannel-279893) Ensuring networks are active...
	I1205 21:29:57.248940  345195 main.go:141] libmachine: (custom-flannel-279893) Ensuring network default is active
	I1205 21:29:57.249309  345195 main.go:141] libmachine: (custom-flannel-279893) Ensuring network mk-custom-flannel-279893 is active
	I1205 21:29:57.250039  345195 main.go:141] libmachine: (custom-flannel-279893) Getting domain xml...
	I1205 21:29:57.250983  345195 main.go:141] libmachine: (custom-flannel-279893) Creating domain...
	I1205 21:29:58.710374  345195 main.go:141] libmachine: (custom-flannel-279893) Waiting to get IP...
	I1205 21:29:58.711678  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:29:58.712238  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | unable to find current IP address of domain custom-flannel-279893 in network mk-custom-flannel-279893
	I1205 21:29:58.712263  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | I1205 21:29:58.712200  346758 retry.go:31] will retry after 286.751ms: waiting for machine to come up
	I1205 21:29:59.001207  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:29:59.001932  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | unable to find current IP address of domain custom-flannel-279893 in network mk-custom-flannel-279893
	I1205 21:29:59.001964  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | I1205 21:29:59.001884  346758 retry.go:31] will retry after 313.264434ms: waiting for machine to come up
	I1205 21:29:59.316687  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:29:59.317313  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | unable to find current IP address of domain custom-flannel-279893 in network mk-custom-flannel-279893
	I1205 21:29:59.317345  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | I1205 21:29:59.317237  346758 retry.go:31] will retry after 334.775211ms: waiting for machine to come up
	I1205 21:29:59.653819  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:29:59.654367  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | unable to find current IP address of domain custom-flannel-279893 in network mk-custom-flannel-279893
	I1205 21:29:59.654400  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | I1205 21:29:59.654301  346758 retry.go:31] will retry after 434.67235ms: waiting for machine to come up
	I1205 21:29:59.588589  344606 crio.go:462] duration metric: took 1.430838593s to copy over tarball
	I1205 21:29:59.588666  344606 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 21:30:01.995846  344606 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.407144601s)
	I1205 21:30:01.995885  344606 crio.go:469] duration metric: took 2.407260485s to extract the tarball
	I1205 21:30:01.995894  344606 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 21:30:02.035019  344606 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:30:02.085283  344606 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 21:30:02.085312  344606 cache_images.go:84] Images are preloaded, skipping loading
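crio.go decides between "not preloaded" (21:29:58, which triggered the tarball copy) and "all images are preloaded" (here) by looking for the pinned control-plane images in the crictl listing. A hedged equivalent check, assuming crictl and jq are available on the guest:

    sudo crictl images --output json \
      | jq -r '.images[].repoTags[]' \
      | grep -F 'registry.k8s.io/kube-apiserver:v1.31.2' \
      && echo 'preloaded' || echo 'not preloaded'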
	I1205 21:30:02.085321  344606 kubeadm.go:934] updating node { 192.168.39.206 8443 v1.31.2 crio true true} ...
	I1205 21:30:02.085462  344606 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-279893 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.206
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:calico-279893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1205 21:30:02.085599  344606 ssh_runner.go:195] Run: crio config
	I1205 21:30:02.132118  344606 cni.go:84] Creating CNI manager for "calico"
	I1205 21:30:02.132149  344606 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:30:02.132179  344606 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.206 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-279893 NodeName:calico-279893 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.206"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.206 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:30:02.132333  344606 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.206
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-279893"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.206"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.206"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:30:02.132408  344606 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:30:02.144575  344606 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:30:02.144661  344606 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:30:02.155982  344606 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I1205 21:30:02.177479  344606 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:30:02.197646  344606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2293 bytes)
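The kubeadm.yaml.new staged above is the config the kubeadm init call further down consumes. Recent kubeadm releases ship a "config validate" subcommand; assuming it is present in the v1.31.2 binaries already found on the node, a dry sanity check could look like:

    sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new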
	I1205 21:30:02.216591  344606 ssh_runner.go:195] Run: grep 192.168.39.206	control-plane.minikube.internal$ /etc/hosts
	I1205 21:30:02.220895  344606 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.206	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
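The bash one-liner above rewrites /etc/hosts without the usual sudo-owned-redirect problem: it drops any stale control-plane.minikube.internal entry, appends the current mapping, and copies the temp file back as root. The same steps, unrolled as a sketch:

    # keep everything except an old control-plane.minikube.internal mapping
    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
    # append the mapping for this profile's node IP
    printf '192.168.39.206\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new
    # only the final copy needs root, so the redirects above stay unprivileged
    sudo cp /tmp/hosts.new /etc/hosts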
	I1205 21:30:02.234621  344606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:30:02.399209  344606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:30:02.421044  344606 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893 for IP: 192.168.39.206
	I1205 21:30:02.421073  344606 certs.go:194] generating shared ca certs ...
	I1205 21:30:02.421097  344606 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:30:02.421323  344606 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:30:02.421415  344606 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:30:02.421432  344606 certs.go:256] generating profile certs ...
	I1205 21:30:02.421535  344606 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.key
	I1205 21:30:02.421563  344606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt with IP's: []
	I1205 21:30:02.560796  344606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt ...
	I1205 21:30:02.560836  344606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: {Name:mk3d3ef97837f3af2f950df8f151724271915a50 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:30:02.561059  344606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.key ...
	I1205 21:30:02.561073  344606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.key: {Name:mk3feaa9cd61fce9be37ef28422fc9d372662d28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:30:02.561218  344606 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/apiserver.key.048993b0
	I1205 21:30:02.561235  344606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/apiserver.crt.048993b0 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.206]
	I1205 21:30:02.774908  344606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/apiserver.crt.048993b0 ...
	I1205 21:30:02.774945  344606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/apiserver.crt.048993b0: {Name:mka1034a5d8e5a9c54d3b46a4341aa4c940d000a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:30:02.775131  344606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/apiserver.key.048993b0 ...
	I1205 21:30:02.775148  344606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/apiserver.key.048993b0: {Name:mk8ee0e32b2513acf62ce5151ef4328b9146efd7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:30:02.775215  344606 certs.go:381] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/apiserver.crt.048993b0 -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/apiserver.crt
	I1205 21:30:02.775287  344606 certs.go:385] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/apiserver.key.048993b0 -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/apiserver.key
	I1205 21:30:02.775350  344606 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/proxy-client.key
	I1205 21:30:02.775364  344606 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/proxy-client.crt with IP's: []
	I1205 21:30:03.052748  344606 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/proxy-client.crt ...
	I1205 21:30:03.052784  344606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/proxy-client.crt: {Name:mk2b19b18796f811af3dd6efdbfb73e6d4fbd327 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:30:03.052976  344606 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/proxy-client.key ...
	I1205 21:30:03.052996  344606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/proxy-client.key: {Name:mke7d42b046415e03d0abd386662e4a99d32ac5d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
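The apiserver profile cert generated at 21:30:02.561235 above is signed for 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 192.168.39.206. A hedged way to confirm those SANs actually made it into the written certificate, assuming openssl on the host:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/apiserver.crt \
      | grep -A1 'Subject Alternative Name'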
	I1205 21:30:03.053184  344606 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:30:03.053226  344606 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:30:03.053236  344606 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:30:03.053262  344606 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:30:03.053285  344606 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:30:03.053309  344606 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:30:03.053345  344606 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:30:03.054050  344606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:30:03.091287  344606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:30:03.119705  344606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:30:03.146648  344606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:30:03.176784  344606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 21:30:03.203568  344606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 21:30:03.229872  344606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:30:03.257726  344606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 21:30:03.287587  344606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:30:03.314263  344606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:30:03.341248  344606 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:30:03.372938  344606 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:30:03.391487  344606 ssh_runner.go:195] Run: openssl version
	I1205 21:30:03.397943  344606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:30:03.410566  344606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:30:03.415610  344606 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:30:03.415681  344606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:30:03.422322  344606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:30:03.434788  344606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:30:03.447012  344606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:30:03.452195  344606 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:30:03.452300  344606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:30:03.459105  344606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:30:03.473148  344606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:30:03.487231  344606 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:30:03.492412  344606 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:30:03.492511  344606 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:30:03.498931  344606 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
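The test/ln pairs above reproduce the classic OpenSSL hashed-symlink layout in /etc/ssl/certs (the same layout "openssl rehash" builds for a whole directory). A generic sketch of the pattern, using the last cert from the log as an example path:

    CERT=/usr/share/ca-certificates/3007652.pem        # path taken from the log above
    HASH=$(openssl x509 -hash -noout -in "$CERT")       # subject hash, e.g. 3ec20f2e
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"      # .0 = first cert with this hash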
	I1205 21:30:03.513994  344606 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:30:03.518540  344606 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 21:30:03.518605  344606 kubeadm.go:392] StartCluster: {Name:calico-279893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-279893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:30:03.518717  344606 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:30:03.518823  344606 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:30:03.557145  344606 cri.go:89] found id: ""
	I1205 21:30:03.557222  344606 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:30:03.568576  344606 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:30:03.581539  344606 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:30:03.592648  344606 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:30:03.592679  344606 kubeadm.go:157] found existing configuration files:
	
	I1205 21:30:03.592741  344606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:30:03.603534  344606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:30:03.603606  344606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:30:03.614870  344606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:30:03.625808  344606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:30:03.625877  344606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:30:03.637113  344606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:30:03.648240  344606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:30:03.648332  344606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:30:03.659436  344606 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:30:03.670131  344606 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:30:03.670202  344606 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:30:03.681470  344606 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:30:03.738295  344606 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 21:30:03.738427  344606 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:30:03.870602  344606 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:30:03.870782  344606 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:30:03.870950  344606 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 21:30:03.880126  344606 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:30:00.091178  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:00.091744  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | unable to find current IP address of domain custom-flannel-279893 in network mk-custom-flannel-279893
	I1205 21:30:00.091777  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | I1205 21:30:00.091691  346758 retry.go:31] will retry after 550.029068ms: waiting for machine to come up
	I1205 21:30:00.643508  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:00.644086  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | unable to find current IP address of domain custom-flannel-279893 in network mk-custom-flannel-279893
	I1205 21:30:00.644121  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | I1205 21:30:00.644021  346758 retry.go:31] will retry after 917.998799ms: waiting for machine to come up
	I1205 21:30:01.563576  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:01.564175  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | unable to find current IP address of domain custom-flannel-279893 in network mk-custom-flannel-279893
	I1205 21:30:01.564198  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | I1205 21:30:01.564162  346758 retry.go:31] will retry after 755.280181ms: waiting for machine to come up
	I1205 21:30:02.320631  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:02.321240  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | unable to find current IP address of domain custom-flannel-279893 in network mk-custom-flannel-279893
	I1205 21:30:02.321267  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | I1205 21:30:02.321179  346758 retry.go:31] will retry after 1.461207824s: waiting for machine to come up
	I1205 21:30:03.784878  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:03.785598  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | unable to find current IP address of domain custom-flannel-279893 in network mk-custom-flannel-279893
	I1205 21:30:03.785633  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | I1205 21:30:03.785520  346758 retry.go:31] will retry after 1.655881992s: waiting for machine to come up
	I1205 21:30:03.954519  344606 out.go:235]   - Generating certificates and keys ...
	I1205 21:30:03.954659  344606 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:30:03.954750  344606 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:30:04.127424  344606 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 21:30:04.331517  344606 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 21:30:04.467122  344606 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 21:30:04.623226  344606 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 21:30:04.743804  344606 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 21:30:04.743994  344606 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-279893 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1205 21:30:04.879035  344606 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 21:30:04.879244  344606 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-279893 localhost] and IPs [192.168.39.206 127.0.0.1 ::1]
	I1205 21:30:04.979948  344606 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 21:30:05.283920  344606 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 21:30:05.464379  344606 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 21:30:05.464512  344606 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:30:05.667336  344606 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:30:05.747793  344606 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 21:30:05.873059  344606 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:30:06.031614  344606 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:30:06.125282  344606 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:30:06.125840  344606 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:30:06.128710  344606 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:30:06.130744  344606 out.go:235]   - Booting up control plane ...
	I1205 21:30:06.130890  344606 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:30:06.130991  344606 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:30:06.131088  344606 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:30:06.147523  344606 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:30:06.154713  344606 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:30:06.154784  344606 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:30:06.285570  344606 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 21:30:06.285743  344606 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 21:30:07.288356  344606 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.003337693s
	I1205 21:30:07.288493  344606 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
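kubeadm's kubelet-check and api-check phases poll well-known health endpoints until they answer. Equivalent manual probes, assuming the addresses from this log (the apiserver presents a self-signed cert, hence -k):

    curl -sf http://127.0.0.1:10248/healthz && echo 'kubelet ok'         # run on the node
    curl -skf https://192.168.39.206:8443/healthz && echo 'apiserver ok'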
	I1205 21:30:05.443535  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:05.444128  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | unable to find current IP address of domain custom-flannel-279893 in network mk-custom-flannel-279893
	I1205 21:30:05.444160  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | I1205 21:30:05.444083  346758 retry.go:31] will retry after 1.752100472s: waiting for machine to come up
	I1205 21:30:07.197866  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:07.198372  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | unable to find current IP address of domain custom-flannel-279893 in network mk-custom-flannel-279893
	I1205 21:30:07.198403  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | I1205 21:30:07.198305  346758 retry.go:31] will retry after 2.698179339s: waiting for machine to come up
	I1205 21:30:09.899074  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:09.899629  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | unable to find current IP address of domain custom-flannel-279893 in network mk-custom-flannel-279893
	I1205 21:30:09.899661  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | I1205 21:30:09.899540  346758 retry.go:31] will retry after 3.098297444s: waiting for machine to come up
	I1205 21:30:12.287025  344606 kubeadm.go:310] [api-check] The API server is healthy after 5.002643365s
	I1205 21:30:12.306669  344606 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 21:30:12.347061  344606 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 21:30:12.413373  344606 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 21:30:12.413700  344606 kubeadm.go:310] [mark-control-plane] Marking the node calico-279893 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 21:30:12.433145  344606 kubeadm.go:310] [bootstrap-token] Using token: 17054l.ccfyvamev9x067ya
	I1205 21:30:12.434802  344606 out.go:235]   - Configuring RBAC rules ...
	I1205 21:30:12.434958  344606 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 21:30:12.442752  344606 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 21:30:12.457313  344606 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 21:30:12.462136  344606 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 21:30:12.466679  344606 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 21:30:12.471782  344606 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 21:30:12.695018  344606 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 21:30:13.125896  344606 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 21:30:13.696072  344606 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 21:30:13.697578  344606 kubeadm.go:310] 
	I1205 21:30:13.697645  344606 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 21:30:13.697654  344606 kubeadm.go:310] 
	I1205 21:30:13.697796  344606 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 21:30:13.697823  344606 kubeadm.go:310] 
	I1205 21:30:13.697874  344606 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 21:30:13.697970  344606 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 21:30:13.698050  344606 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 21:30:13.698061  344606 kubeadm.go:310] 
	I1205 21:30:13.698144  344606 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 21:30:13.698154  344606 kubeadm.go:310] 
	I1205 21:30:13.698217  344606 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 21:30:13.698234  344606 kubeadm.go:310] 
	I1205 21:30:13.698321  344606 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 21:30:13.698427  344606 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 21:30:13.698519  344606 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 21:30:13.698533  344606 kubeadm.go:310] 
	I1205 21:30:13.698639  344606 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 21:30:13.698740  344606 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 21:30:13.698751  344606 kubeadm.go:310] 
	I1205 21:30:13.698849  344606 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 17054l.ccfyvamev9x067ya \
	I1205 21:30:13.699029  344606 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 \
	I1205 21:30:13.699078  344606 kubeadm.go:310] 	--control-plane 
	I1205 21:30:13.699087  344606 kubeadm.go:310] 
	I1205 21:30:13.699189  344606 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 21:30:13.699200  344606 kubeadm.go:310] 
	I1205 21:30:13.699304  344606 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 17054l.ccfyvamev9x067ya \
	I1205 21:30:13.699461  344606 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 
	I1205 21:30:13.700441  344606 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:30:13.700466  344606 cni.go:84] Creating CNI manager for "calico"
	I1205 21:30:13.701959  344606 out.go:177] * Configuring Calico (Container Networking Interface) ...
	I1205 21:30:12.999849  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:13.000593  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | unable to find current IP address of domain custom-flannel-279893 in network mk-custom-flannel-279893
	I1205 21:30:13.000625  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | I1205 21:30:13.000520  346758 retry.go:31] will retry after 3.716307549s: waiting for machine to come up
	I1205 21:30:13.703397  344606 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1205 21:30:13.703420  344606 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (323065 bytes)
	I1205 21:30:13.728596  344606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 21:30:15.296592  344606 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.567949114s)
	I1205 21:30:15.296662  344606 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:30:15.296766  344606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:30:15.296772  344606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-279893 minikube.k8s.io/updated_at=2024_12_05T21_30_15_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=calico-279893 minikube.k8s.io/primary=true
	I1205 21:30:15.422087  344606 ops.go:34] apiserver oom_adj: -16
	I1205 21:30:15.422160  344606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:30:15.922873  344606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:30:16.423293  344606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:30:16.922395  344606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:30:17.423163  344606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:30:17.922651  344606 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:30:18.049085  344606 kubeadm.go:1113] duration metric: took 2.752399999s to wait for elevateKubeSystemPrivileges
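The repeated "kubectl get sa default" runs above are a poll for the default ServiceAccount; the same wait loop, sketched in shell with the binary and kubeconfig the log uses:

    # poll roughly every half second until the default ServiceAccount exists
    until sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
        get sa default >/dev/null 2>&1; do
      sleep 0.5
    done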
	I1205 21:30:18.049140  344606 kubeadm.go:394] duration metric: took 14.530540524s to StartCluster
	I1205 21:30:18.049169  344606 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:30:18.049275  344606 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:30:18.050645  344606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:30:18.050942  344606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 21:30:18.050957  344606 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.39.206 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:30:18.051047  344606 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:30:18.051156  344606 addons.go:69] Setting storage-provisioner=true in profile "calico-279893"
	I1205 21:30:18.051179  344606 addons.go:234] Setting addon storage-provisioner=true in "calico-279893"
	I1205 21:30:18.051188  344606 addons.go:69] Setting default-storageclass=true in profile "calico-279893"
	I1205 21:30:18.051204  344606 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-279893"
	I1205 21:30:18.051245  344606 config.go:182] Loaded profile config "calico-279893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:30:18.051221  344606 host.go:66] Checking if "calico-279893" exists ...
	I1205 21:30:18.051825  344606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:30:18.051877  344606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:30:18.052190  344606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:30:18.052237  344606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:30:18.052323  344606 out.go:177] * Verifying Kubernetes components...
	I1205 21:30:18.054010  344606 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:30:18.069456  344606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39537
	I1205 21:30:18.069456  344606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35795
	I1205 21:30:18.070062  344606 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:30:18.070173  344606 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:30:18.070651  344606 main.go:141] libmachine: Using API Version  1
	I1205 21:30:18.070674  344606 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:30:18.070790  344606 main.go:141] libmachine: Using API Version  1
	I1205 21:30:18.070815  344606 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:30:18.070989  344606 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:30:18.071214  344606 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:30:18.071425  344606 main.go:141] libmachine: (calico-279893) Calling .GetState
	I1205 21:30:18.071595  344606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:30:18.071634  344606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:30:18.075246  344606 addons.go:234] Setting addon default-storageclass=true in "calico-279893"
	I1205 21:30:18.075302  344606 host.go:66] Checking if "calico-279893" exists ...
	I1205 21:30:18.075689  344606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:30:18.075744  344606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:30:18.090336  344606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36115
	I1205 21:30:18.090947  344606 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:30:18.091580  344606 main.go:141] libmachine: Using API Version  1
	I1205 21:30:18.091614  344606 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:30:18.091993  344606 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:30:18.092229  344606 main.go:141] libmachine: (calico-279893) Calling .GetState
	I1205 21:30:18.093063  344606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40467
	I1205 21:30:18.093623  344606 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:30:18.094203  344606 main.go:141] libmachine: Using API Version  1
	I1205 21:30:18.094232  344606 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:30:18.094337  344606 main.go:141] libmachine: (calico-279893) Calling .DriverName
	I1205 21:30:18.094712  344606 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:30:18.095220  344606 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:30:18.095268  344606 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:30:18.096162  344606 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:30:18.097677  344606 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:30:18.097699  344606 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 21:30:18.097725  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHHostname
	I1205 21:30:18.101303  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:30:18.101772  344606 main.go:141] libmachine: (calico-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:9d:0f", ip: ""} in network mk-calico-279893: {Iface:virbr1 ExpiryTime:2024-12-05 22:29:47 +0000 UTC Type:0 Mac:52:54:00:a8:9d:0f Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:calico-279893 Clientid:01:52:54:00:a8:9d:0f}
	I1205 21:30:18.101784  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined IP address 192.168.39.206 and MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:30:18.102009  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHPort
	I1205 21:30:18.102218  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHKeyPath
	I1205 21:30:18.102407  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHUsername
	I1205 21:30:18.102535  344606 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/calico-279893/id_rsa Username:docker}
	I1205 21:30:18.112376  344606 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40571
	I1205 21:30:18.112920  344606 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:30:18.113528  344606 main.go:141] libmachine: Using API Version  1
	I1205 21:30:18.113553  344606 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:30:18.113894  344606 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:30:18.114146  344606 main.go:141] libmachine: (calico-279893) Calling .GetState
	I1205 21:30:18.115738  344606 main.go:141] libmachine: (calico-279893) Calling .DriverName
	I1205 21:30:18.115960  344606 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 21:30:18.115981  344606 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 21:30:18.116004  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHHostname
	I1205 21:30:18.118602  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:30:18.118995  344606 main.go:141] libmachine: (calico-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:a8:9d:0f", ip: ""} in network mk-calico-279893: {Iface:virbr1 ExpiryTime:2024-12-05 22:29:47 +0000 UTC Type:0 Mac:52:54:00:a8:9d:0f Iaid: IPaddr:192.168.39.206 Prefix:24 Hostname:calico-279893 Clientid:01:52:54:00:a8:9d:0f}
	I1205 21:30:18.119024  344606 main.go:141] libmachine: (calico-279893) DBG | domain calico-279893 has defined IP address 192.168.39.206 and MAC address 52:54:00:a8:9d:0f in network mk-calico-279893
	I1205 21:30:18.119121  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHPort
	I1205 21:30:18.119282  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHKeyPath
	I1205 21:30:18.119424  344606 main.go:141] libmachine: (calico-279893) Calling .GetSSHUsername
	I1205 21:30:18.119560  344606 sshutil.go:53] new ssh client: &{IP:192.168.39.206 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/calico-279893/id_rsa Username:docker}
	I1205 21:30:18.414970  344606 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:30:18.415026  344606 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 21:30:18.419389  344606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 21:30:18.439310  344606 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:30:18.948539  344606 start.go:971] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
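The ConfigMap rewrite above inserts a hosts block mapping host.minikube.internal to 192.168.39.1 into the coredns Corefile; a quick check that it landed, using the same kubectl binary and kubeconfig (the grep pattern assumes the block is formatted as in the sed expression above):

    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'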
	I1205 21:30:18.948670  344606 main.go:141] libmachine: Making call to close driver server
	I1205 21:30:18.948697  344606 main.go:141] libmachine: (calico-279893) Calling .Close
	I1205 21:30:18.949082  344606 main.go:141] libmachine: (calico-279893) DBG | Closing plugin on server side
	I1205 21:30:18.949175  344606 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:30:18.949199  344606 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:30:18.949214  344606 main.go:141] libmachine: Making call to close driver server
	I1205 21:30:18.949222  344606 main.go:141] libmachine: (calico-279893) Calling .Close
	I1205 21:30:18.949533  344606 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:30:18.949555  344606 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:30:18.949890  344606 node_ready.go:35] waiting up to 15m0s for node "calico-279893" to be "Ready" ...
	I1205 21:30:18.967063  344606 main.go:141] libmachine: Making call to close driver server
	I1205 21:30:18.967097  344606 main.go:141] libmachine: (calico-279893) Calling .Close
	I1205 21:30:18.967418  344606 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:30:18.967438  344606 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:30:19.232719  344606 main.go:141] libmachine: Making call to close driver server
	I1205 21:30:19.232742  344606 main.go:141] libmachine: (calico-279893) Calling .Close
	I1205 21:30:19.233119  344606 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:30:19.233141  344606 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:30:19.233157  344606 main.go:141] libmachine: Making call to close driver server
	I1205 21:30:19.233165  344606 main.go:141] libmachine: (calico-279893) Calling .Close
	I1205 21:30:19.233436  344606 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:30:19.233440  344606 main.go:141] libmachine: (calico-279893) DBG | Closing plugin on server side
	I1205 21:30:19.233455  344606 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:30:19.235017  344606 out.go:177] * Enabled addons: default-storageclass, storage-provisioner
	I1205 21:30:16.717944  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:16.718426  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | unable to find current IP address of domain custom-flannel-279893 in network mk-custom-flannel-279893
	I1205 21:30:16.718458  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | I1205 21:30:16.718379  346758 retry.go:31] will retry after 3.511991715s: waiting for machine to come up
	I1205 21:30:19.236347  344606 addons.go:510] duration metric: took 1.185304468s for enable addons: enabled=[default-storageclass storage-provisioner]
	I1205 21:30:19.454497  344606 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-279893" context rescaled to 1 replicas
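The rescale reported above drops the coredns Deployment from the kubeadm default of two replicas to one; the kubectl equivalent of that programmatic change, as a sketch:

    sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system scale deployment coredns --replicas=1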
	I1205 21:30:20.954212  344606 node_ready.go:53] node "calico-279893" has status "Ready":"False"
	I1205 21:30:20.234294  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:20.234849  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has current primary IP address 192.168.61.54 and MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:20.234874  345195 main.go:141] libmachine: (custom-flannel-279893) Found IP for machine: 192.168.61.54
	I1205 21:30:20.234884  345195 main.go:141] libmachine: (custom-flannel-279893) Reserving static IP address...
	I1205 21:30:20.235246  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | unable to find host DHCP lease matching {name: "custom-flannel-279893", mac: "52:54:00:e1:94:85", ip: "192.168.61.54"} in network mk-custom-flannel-279893
	I1205 21:30:20.324657  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | Getting to WaitForSSH function...
	I1205 21:30:20.324684  345195 main.go:141] libmachine: (custom-flannel-279893) Reserved static IP address: 192.168.61.54
	I1205 21:30:20.324697  345195 main.go:141] libmachine: (custom-flannel-279893) Waiting for SSH to be available...
	I1205 21:30:20.327804  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:20.328092  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:e1:94:85", ip: ""} in network mk-custom-flannel-279893
	I1205 21:30:20.328123  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | unable to find defined IP address of network mk-custom-flannel-279893 interface with MAC address 52:54:00:e1:94:85
	I1205 21:30:20.328267  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | Using SSH client type: external
	I1205 21:30:20.328298  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/custom-flannel-279893/id_rsa (-rw-------)
	I1205 21:30:20.328323  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/custom-flannel-279893/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:30:20.328341  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | About to run SSH command:
	I1205 21:30:20.328351  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | exit 0
	I1205 21:30:20.332343  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | SSH cmd err, output: exit status 255: 
	I1205 21:30:20.332386  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I1205 21:30:20.332439  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | command : exit 0
	I1205 21:30:20.332460  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | err     : exit status 255
	I1205 21:30:20.332470  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | output  : 
	I1205 21:30:23.333932  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | Getting to WaitForSSH function...
	I1205 21:30:23.336900  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:23.337400  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:94:85", ip: ""} in network mk-custom-flannel-279893: {Iface:virbr3 ExpiryTime:2024-12-05 22:30:12 +0000 UTC Type:0 Mac:52:54:00:e1:94:85 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:custom-flannel-279893 Clientid:01:52:54:00:e1:94:85}
	I1205 21:30:23.337433  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined IP address 192.168.61.54 and MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:23.337576  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | Using SSH client type: external
	I1205 21:30:23.337608  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/custom-flannel-279893/id_rsa (-rw-------)
	I1205 21:30:23.337636  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.54 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/custom-flannel-279893/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:30:23.337646  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | About to run SSH command:
	I1205 21:30:23.337655  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | exit 0
	I1205 21:30:23.474837  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | SSH cmd err, output: <nil>: 
	I1205 21:30:23.475178  345195 main.go:141] libmachine: (custom-flannel-279893) KVM machine creation complete!
	I1205 21:30:23.475502  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetConfigRaw
	I1205 21:30:23.476158  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .DriverName
	I1205 21:30:23.476438  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .DriverName
	I1205 21:30:23.476645  345195 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 21:30:23.476658  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetState
	I1205 21:30:23.478086  345195 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 21:30:23.478105  345195 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 21:30:23.478113  345195 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 21:30:23.478121  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHHostname
	I1205 21:30:23.480586  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:23.480995  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:94:85", ip: ""} in network mk-custom-flannel-279893: {Iface:virbr3 ExpiryTime:2024-12-05 22:30:12 +0000 UTC Type:0 Mac:52:54:00:e1:94:85 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:custom-flannel-279893 Clientid:01:52:54:00:e1:94:85}
	I1205 21:30:23.481035  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined IP address 192.168.61.54 and MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:23.481273  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHPort
	I1205 21:30:23.481472  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHKeyPath
	I1205 21:30:23.481650  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHKeyPath
	I1205 21:30:23.481821  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHUsername
	I1205 21:30:23.482024  345195 main.go:141] libmachine: Using SSH client type: native
	I1205 21:30:23.482272  345195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.54 22 <nil> <nil>}
	I1205 21:30:23.482287  345195 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 21:30:23.601553  345195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:30:23.601594  345195 main.go:141] libmachine: Detecting the provisioner...
	I1205 21:30:23.601607  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHHostname
	I1205 21:30:23.604639  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:23.605056  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:94:85", ip: ""} in network mk-custom-flannel-279893: {Iface:virbr3 ExpiryTime:2024-12-05 22:30:12 +0000 UTC Type:0 Mac:52:54:00:e1:94:85 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:custom-flannel-279893 Clientid:01:52:54:00:e1:94:85}
	I1205 21:30:23.605092  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined IP address 192.168.61.54 and MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:23.605250  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHPort
	I1205 21:30:23.605487  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHKeyPath
	I1205 21:30:23.605763  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHKeyPath
	I1205 21:30:23.605943  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHUsername
	I1205 21:30:23.606103  345195 main.go:141] libmachine: Using SSH client type: native
	I1205 21:30:23.606297  345195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.54 22 <nil> <nil>}
	I1205 21:30:23.606308  345195 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 21:30:23.718844  345195 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 21:30:23.718927  345195 main.go:141] libmachine: found compatible host: buildroot
	I1205 21:30:23.718943  345195 main.go:141] libmachine: Provisioning with buildroot...
	I1205 21:30:23.718957  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetMachineName
	I1205 21:30:23.719243  345195 buildroot.go:166] provisioning hostname "custom-flannel-279893"
	I1205 21:30:23.719264  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetMachineName
	I1205 21:30:23.719492  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHHostname
	I1205 21:30:23.722271  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:23.722759  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:94:85", ip: ""} in network mk-custom-flannel-279893: {Iface:virbr3 ExpiryTime:2024-12-05 22:30:12 +0000 UTC Type:0 Mac:52:54:00:e1:94:85 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:custom-flannel-279893 Clientid:01:52:54:00:e1:94:85}
	I1205 21:30:23.722792  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined IP address 192.168.61.54 and MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:23.723021  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHPort
	I1205 21:30:23.723252  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHKeyPath
	I1205 21:30:23.723454  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHKeyPath
	I1205 21:30:23.723635  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHUsername
	I1205 21:30:23.723833  345195 main.go:141] libmachine: Using SSH client type: native
	I1205 21:30:23.724025  345195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.54 22 <nil> <nil>}
	I1205 21:30:23.724039  345195 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-279893 && echo "custom-flannel-279893" | sudo tee /etc/hostname
	I1205 21:30:23.854736  345195 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-279893
	
	I1205 21:30:23.854775  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHHostname
	I1205 21:30:23.857982  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:23.858424  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:94:85", ip: ""} in network mk-custom-flannel-279893: {Iface:virbr3 ExpiryTime:2024-12-05 22:30:12 +0000 UTC Type:0 Mac:52:54:00:e1:94:85 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:custom-flannel-279893 Clientid:01:52:54:00:e1:94:85}
	I1205 21:30:23.858463  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined IP address 192.168.61.54 and MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:23.858628  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHPort
	I1205 21:30:23.858892  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHKeyPath
	I1205 21:30:23.859100  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHKeyPath
	I1205 21:30:23.859240  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHUsername
	I1205 21:30:23.859442  345195 main.go:141] libmachine: Using SSH client type: native
	I1205 21:30:23.859620  345195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.54 22 <nil> <nil>}
	I1205 21:30:23.859636  345195 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-279893' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-279893/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-279893' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:30:23.983383  345195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:30:23.983431  345195 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:30:23.983494  345195 buildroot.go:174] setting up certificates
	I1205 21:30:23.983511  345195 provision.go:84] configureAuth start
	I1205 21:30:23.983529  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetMachineName
	I1205 21:30:23.983884  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetIP
	I1205 21:30:23.987040  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:23.987493  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:94:85", ip: ""} in network mk-custom-flannel-279893: {Iface:virbr3 ExpiryTime:2024-12-05 22:30:12 +0000 UTC Type:0 Mac:52:54:00:e1:94:85 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:custom-flannel-279893 Clientid:01:52:54:00:e1:94:85}
	I1205 21:30:23.987526  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined IP address 192.168.61.54 and MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:23.987739  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHHostname
	I1205 21:30:23.990477  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:23.990896  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:94:85", ip: ""} in network mk-custom-flannel-279893: {Iface:virbr3 ExpiryTime:2024-12-05 22:30:12 +0000 UTC Type:0 Mac:52:54:00:e1:94:85 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:custom-flannel-279893 Clientid:01:52:54:00:e1:94:85}
	I1205 21:30:23.990931  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined IP address 192.168.61.54 and MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:23.991105  345195 provision.go:143] copyHostCerts
	I1205 21:30:23.991183  345195 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:30:23.991211  345195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:30:23.991284  345195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:30:23.991412  345195 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:30:23.991425  345195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:30:23.991460  345195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:30:23.991545  345195 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:30:23.991554  345195 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:30:23.991581  345195 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:30:23.991655  345195 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-279893 san=[127.0.0.1 192.168.61.54 custom-flannel-279893 localhost minikube]
	I1205 21:30:24.364255  345195 provision.go:177] copyRemoteCerts
	I1205 21:30:24.364366  345195 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:30:24.364406  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHHostname
	I1205 21:30:24.367863  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:24.368236  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:94:85", ip: ""} in network mk-custom-flannel-279893: {Iface:virbr3 ExpiryTime:2024-12-05 22:30:12 +0000 UTC Type:0 Mac:52:54:00:e1:94:85 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:custom-flannel-279893 Clientid:01:52:54:00:e1:94:85}
	I1205 21:30:24.368275  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined IP address 192.168.61.54 and MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:24.368437  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHPort
	I1205 21:30:24.368662  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHKeyPath
	I1205 21:30:24.368787  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHUsername
	I1205 21:30:24.368894  345195 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/custom-flannel-279893/id_rsa Username:docker}
	I1205 21:30:24.457422  345195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:30:24.484639  345195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1205 21:30:24.510668  345195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:30:24.538188  345195 provision.go:87] duration metric: took 554.660211ms to configureAuth
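configureAuth above generated a server certificate with the SAN list shown earlier (127.0.0.1, 192.168.61.54, custom-flannel-279893, localhost, minikube) and copied it to /etc/docker on the guest; one way to double-check the SANs, assuming openssl is available in the guest image:

    sudo openssl x509 -in /etc/docker/server.pem -noout -text \
      | grep -A1 'Subject Alternative Name'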
	I1205 21:30:24.538221  345195 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:30:24.538446  345195 config.go:182] Loaded profile config "custom-flannel-279893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:30:24.538533  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHHostname
	I1205 21:30:24.541716  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:24.542308  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:94:85", ip: ""} in network mk-custom-flannel-279893: {Iface:virbr3 ExpiryTime:2024-12-05 22:30:12 +0000 UTC Type:0 Mac:52:54:00:e1:94:85 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:custom-flannel-279893 Clientid:01:52:54:00:e1:94:85}
	I1205 21:30:24.542340  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined IP address 192.168.61.54 and MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:24.542530  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHPort
	I1205 21:30:24.542841  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHKeyPath
	I1205 21:30:24.543068  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHKeyPath
	I1205 21:30:24.543245  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHUsername
	I1205 21:30:24.543436  345195 main.go:141] libmachine: Using SSH client type: native
	I1205 21:30:24.543681  345195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.54 22 <nil> <nil>}
	I1205 21:30:24.543703  345195 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:30:24.921456  345195 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:30:24.921496  345195 main.go:141] libmachine: Checking connection to Docker...
	I1205 21:30:24.921510  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetURL
	I1205 21:30:24.923201  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | Using libvirt version 6000000
	I1205 21:30:24.925875  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:24.926302  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:94:85", ip: ""} in network mk-custom-flannel-279893: {Iface:virbr3 ExpiryTime:2024-12-05 22:30:12 +0000 UTC Type:0 Mac:52:54:00:e1:94:85 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:custom-flannel-279893 Clientid:01:52:54:00:e1:94:85}
	I1205 21:30:24.926335  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined IP address 192.168.61.54 and MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:24.926591  345195 main.go:141] libmachine: Docker is up and running!
	I1205 21:30:24.926628  345195 main.go:141] libmachine: Reticulating splines...
	I1205 21:30:24.926648  345195 client.go:171] duration metric: took 28.273528905s to LocalClient.Create
	I1205 21:30:24.926680  345195 start.go:167] duration metric: took 28.273605699s to libmachine.API.Create "custom-flannel-279893"
	I1205 21:30:24.926695  345195 start.go:293] postStartSetup for "custom-flannel-279893" (driver="kvm2")
	I1205 21:30:24.926710  345195 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:30:24.926740  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .DriverName
	I1205 21:30:24.927420  345195 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:30:24.927474  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHHostname
	I1205 21:30:25.195008  346445 start.go:364] duration metric: took 37.798068893s to acquireMachinesLock for "kubernetes-upgrade-055769"
	I1205 21:30:25.195077  346445 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:30:25.195086  346445 fix.go:54] fixHost starting: 
	I1205 21:30:25.195508  346445 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:30:25.195561  346445 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:30:25.213692  346445 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44167
	I1205 21:30:25.214288  346445 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:30:25.214890  346445 main.go:141] libmachine: Using API Version  1
	I1205 21:30:25.214916  346445 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:30:25.215272  346445 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:30:25.215499  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .DriverName
	I1205 21:30:25.215671  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetState
	I1205 21:30:25.217446  346445 fix.go:112] recreateIfNeeded on kubernetes-upgrade-055769: state=Running err=<nil>
	W1205 21:30:25.217470  346445 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:30:25.219189  346445 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-055769" VM ...
	I1205 21:30:24.931123  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:24.931552  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:94:85", ip: ""} in network mk-custom-flannel-279893: {Iface:virbr3 ExpiryTime:2024-12-05 22:30:12 +0000 UTC Type:0 Mac:52:54:00:e1:94:85 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:custom-flannel-279893 Clientid:01:52:54:00:e1:94:85}
	I1205 21:30:24.931585  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined IP address 192.168.61.54 and MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:24.931844  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHPort
	I1205 21:30:24.932095  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHKeyPath
	I1205 21:30:24.932342  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHUsername
	I1205 21:30:24.932531  345195 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/custom-flannel-279893/id_rsa Username:docker}
	I1205 21:30:25.022908  345195 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:30:25.027540  345195 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:30:25.027578  345195 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:30:25.027671  345195 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:30:25.027797  345195 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:30:25.027906  345195 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:30:25.039232  345195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:30:25.064982  345195 start.go:296] duration metric: took 138.267157ms for postStartSetup
	I1205 21:30:25.065073  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetConfigRaw
	I1205 21:30:25.065834  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetIP
	I1205 21:30:25.069085  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:25.069493  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:94:85", ip: ""} in network mk-custom-flannel-279893: {Iface:virbr3 ExpiryTime:2024-12-05 22:30:12 +0000 UTC Type:0 Mac:52:54:00:e1:94:85 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:custom-flannel-279893 Clientid:01:52:54:00:e1:94:85}
	I1205 21:30:25.069533  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined IP address 192.168.61.54 and MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:25.069748  345195 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/config.json ...
	I1205 21:30:25.069997  345195 start.go:128] duration metric: took 28.439128504s to createHost
	I1205 21:30:25.070025  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHHostname
	I1205 21:30:25.072588  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:25.073004  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:94:85", ip: ""} in network mk-custom-flannel-279893: {Iface:virbr3 ExpiryTime:2024-12-05 22:30:12 +0000 UTC Type:0 Mac:52:54:00:e1:94:85 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:custom-flannel-279893 Clientid:01:52:54:00:e1:94:85}
	I1205 21:30:25.073039  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined IP address 192.168.61.54 and MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:25.073240  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHPort
	I1205 21:30:25.073483  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHKeyPath
	I1205 21:30:25.073661  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHKeyPath
	I1205 21:30:25.073805  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHUsername
	I1205 21:30:25.073980  345195 main.go:141] libmachine: Using SSH client type: native
	I1205 21:30:25.074157  345195 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.54 22 <nil> <nil>}
	I1205 21:30:25.074168  345195 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:30:25.194829  345195 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434225.142237043
	
	I1205 21:30:25.194861  345195 fix.go:216] guest clock: 1733434225.142237043
	I1205 21:30:25.194873  345195 fix.go:229] Guest: 2024-12-05 21:30:25.142237043 +0000 UTC Remote: 2024-12-05 21:30:25.070012416 +0000 UTC m=+50.201857414 (delta=72.224627ms)
	I1205 21:30:25.194902  345195 fix.go:200] guest clock delta is within tolerance: 72.224627ms
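The guest-clock check above runs date +%s.%N inside the VM and compares it with the host's wall clock, accepting small deltas; a rough sketch of the same comparison over SSH (key path and user as used elsewhere in this run; awk assumed available on the host):

    HOST_TS=$(date +%s.%N)
    GUEST_TS=$(ssh -o StrictHostKeyChecking=no \
        -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/custom-flannel-279893/id_rsa \
        docker@192.168.61.54 'date +%s.%N')
    awk "BEGIN { printf \"clock delta: %.6fs\n\", $GUEST_TS - $HOST_TS }"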
	I1205 21:30:25.194910  345195 start.go:83] releasing machines lock for "custom-flannel-279893", held for 28.564264128s
	I1205 21:30:25.194947  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .DriverName
	I1205 21:30:25.195261  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetIP
	I1205 21:30:25.198377  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:25.198775  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:94:85", ip: ""} in network mk-custom-flannel-279893: {Iface:virbr3 ExpiryTime:2024-12-05 22:30:12 +0000 UTC Type:0 Mac:52:54:00:e1:94:85 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:custom-flannel-279893 Clientid:01:52:54:00:e1:94:85}
	I1205 21:30:25.198805  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined IP address 192.168.61.54 and MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:25.198985  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .DriverName
	I1205 21:30:25.199584  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .DriverName
	I1205 21:30:25.199808  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .DriverName
	I1205 21:30:25.199923  345195 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:30:25.200000  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHHostname
	I1205 21:30:25.200075  345195 ssh_runner.go:195] Run: cat /version.json
	I1205 21:30:25.200109  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHHostname
	I1205 21:30:25.203022  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:25.203271  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:25.203400  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:94:85", ip: ""} in network mk-custom-flannel-279893: {Iface:virbr3 ExpiryTime:2024-12-05 22:30:12 +0000 UTC Type:0 Mac:52:54:00:e1:94:85 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:custom-flannel-279893 Clientid:01:52:54:00:e1:94:85}
	I1205 21:30:25.203428  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined IP address 192.168.61.54 and MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:25.203628  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHPort
	I1205 21:30:25.203681  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:94:85", ip: ""} in network mk-custom-flannel-279893: {Iface:virbr3 ExpiryTime:2024-12-05 22:30:12 +0000 UTC Type:0 Mac:52:54:00:e1:94:85 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:custom-flannel-279893 Clientid:01:52:54:00:e1:94:85}
	I1205 21:30:25.203709  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined IP address 192.168.61.54 and MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:25.203921  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHPort
	I1205 21:30:25.203937  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHKeyPath
	I1205 21:30:25.204100  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHKeyPath
	I1205 21:30:25.204110  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHUsername
	I1205 21:30:25.204302  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHUsername
	I1205 21:30:25.204304  345195 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/custom-flannel-279893/id_rsa Username:docker}
	I1205 21:30:25.204457  345195 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/custom-flannel-279893/id_rsa Username:docker}
	I1205 21:30:25.310671  345195 ssh_runner.go:195] Run: systemctl --version
	I1205 21:30:25.317035  345195 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:30:25.487073  345195 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:30:25.493482  345195 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:30:25.493570  345195 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:30:25.512932  345195 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:30:25.512980  345195 start.go:495] detecting cgroup driver to use...
	I1205 21:30:25.513057  345195 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:30:25.530425  345195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:30:25.546836  345195 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:30:25.546909  345195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:30:25.562441  345195 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:30:25.589815  345195 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:30:25.716717  345195 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:30:25.901277  345195 docker.go:233] disabling docker service ...
	I1205 21:30:25.901361  345195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:30:25.917478  345195 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:30:25.934259  345195 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:30:26.098523  345195 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:30:26.232116  345195 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
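The sequence above stops and masks cri-docker and docker so that CRI-O is the only container runtime left on the node; condensed into a sketch (unit names as in the log):

    sudo systemctl stop cri-docker.socket cri-docker.service docker.socket docker.service
    sudo systemctl disable cri-docker.socket docker.socket
    sudo systemctl mask cri-docker.service docker.service
    systemctl is-active docker || echo "docker is down"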
	I1205 21:30:26.246006  345195 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:30:26.266460  345195 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:30:26.266550  345195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:30:26.279123  345195 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:30:26.279211  345195 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:30:26.292204  345195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:30:26.304621  345195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:30:26.318852  345195 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:30:26.330593  345195 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:30:26.342453  345195 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:30:26.361485  345195 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:30:26.373506  345195 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:30:26.383697  345195 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:30:26.383784  345195 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:30:26.398223  345195 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
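(The sysctl failure above is expected while the br_netfilter module is not yet loaded; after the modprobe the same key should resolve. A quick re-check, using only commands already shown in this log:

    sudo modprobe br_netfilter
    sudo sysctl net.bridge.bridge-nf-call-iptables   # should now print a value instead of "No such file or directory"
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
)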
	I1205 21:30:26.408730  345195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:30:26.533070  345195 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:30:26.633786  345195 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:30:26.633883  345195 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:30:26.639674  345195 start.go:563] Will wait 60s for crictl version
	I1205 21:30:26.639749  345195 ssh_runner.go:195] Run: which crictl
	I1205 21:30:26.644079  345195 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:30:26.687750  345195 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:30:26.687869  345195 ssh_runner.go:195] Run: crio --version
	I1205 21:30:26.717159  345195 ssh_runner.go:195] Run: crio --version
	I1205 21:30:26.749152  345195 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 21:30:25.220502  346445 machine.go:93] provisionDockerMachine start ...
	I1205 21:30:25.220538  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .DriverName
	I1205 21:30:25.220811  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:30:25.224128  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:25.224644  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:30:25.224682  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:25.224879  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:30:25.225078  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:30:25.225309  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:30:25.225447  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:30:25.225597  346445 main.go:141] libmachine: Using SSH client type: native
	I1205 21:30:25.225868  346445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1205 21:30:25.225884  346445 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:30:25.335259  346445 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-055769
	
	I1205 21:30:25.335297  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetMachineName
	I1205 21:30:25.335594  346445 buildroot.go:166] provisioning hostname "kubernetes-upgrade-055769"
	I1205 21:30:25.335633  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetMachineName
	I1205 21:30:25.335880  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:30:25.339595  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:25.340160  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:30:25.340188  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:25.340423  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:30:25.340668  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:30:25.340908  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:30:25.341090  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:30:25.341330  346445 main.go:141] libmachine: Using SSH client type: native
	I1205 21:30:25.341551  346445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1205 21:30:25.341563  346445 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-055769 && echo "kubernetes-upgrade-055769" | sudo tee /etc/hostname
	I1205 21:30:25.466055  346445 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-055769
	
	I1205 21:30:25.466103  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:30:25.469112  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:25.469503  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:30:25.469545  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:25.469709  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:30:25.469939  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:30:25.470204  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:30:25.470382  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:30:25.470569  346445 main.go:141] libmachine: Using SSH client type: native
	I1205 21:30:25.470766  346445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1205 21:30:25.470783  346445 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-055769' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-055769/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-055769' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:30:25.579268  346445 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:30:25.579307  346445 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:30:25.579329  346445 buildroot.go:174] setting up certificates
	I1205 21:30:25.579350  346445 provision.go:84] configureAuth start
	I1205 21:30:25.579364  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetMachineName
	I1205 21:30:25.579695  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetIP
	I1205 21:30:25.582680  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:25.583059  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:30:25.583083  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:25.583239  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:30:25.585711  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:25.586086  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:30:25.586136  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:25.586387  346445 provision.go:143] copyHostCerts
	I1205 21:30:25.586442  346445 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:30:25.586464  346445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:30:25.586521  346445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:30:25.586620  346445 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:30:25.586630  346445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:30:25.586650  346445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:30:25.586763  346445 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:30:25.586774  346445 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:30:25.586793  346445 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:30:25.586845  346445 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-055769 san=[127.0.0.1 192.168.50.100 kubernetes-upgrade-055769 localhost minikube]
	I1205 21:30:25.695815  346445 provision.go:177] copyRemoteCerts
	I1205 21:30:25.695884  346445 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:30:25.695913  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:30:25.699091  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:25.699425  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:30:25.699457  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:25.699703  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:30:25.699955  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:30:25.700115  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:30:25.700287  346445 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769/id_rsa Username:docker}
	I1205 21:30:25.793600  346445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 21:30:25.826921  346445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 21:30:25.857092  346445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:30:25.883158  346445 provision.go:87] duration metric: took 303.788938ms to configureAuth
	I1205 21:30:25.883198  346445 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:30:25.883418  346445 config.go:182] Loaded profile config "kubernetes-upgrade-055769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:30:25.883505  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:30:25.886199  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:25.886579  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:30:25.886605  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:25.886928  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:30:25.887087  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:30:25.887277  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:30:25.887406  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:30:25.887564  346445 main.go:141] libmachine: Using SSH client type: native
	I1205 21:30:25.887792  346445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1205 21:30:25.887813  346445 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:30:23.454132  344606 node_ready.go:53] node "calico-279893" has status "Ready":"False"
	I1205 21:30:25.454729  344606 node_ready.go:53] node "calico-279893" has status "Ready":"False"
	I1205 21:30:27.455643  344606 node_ready.go:49] node "calico-279893" has status "Ready":"True"
	I1205 21:30:27.455684  344606 node_ready.go:38] duration metric: took 8.505743406s for node "calico-279893" to be "Ready" ...
	I1205 21:30:27.455699  344606 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:30:27.472267  344606 pod_ready.go:79] waiting up to 15m0s for pod "calico-kube-controllers-d4dc4cc65-7n745" in "kube-system" namespace to be "Ready" ...
	I1205 21:30:26.750455  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetIP
	I1205 21:30:26.753302  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:26.753655  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:94:85", ip: ""} in network mk-custom-flannel-279893: {Iface:virbr3 ExpiryTime:2024-12-05 22:30:12 +0000 UTC Type:0 Mac:52:54:00:e1:94:85 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:custom-flannel-279893 Clientid:01:52:54:00:e1:94:85}
	I1205 21:30:26.753676  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined IP address 192.168.61.54 and MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:26.754023  345195 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1205 21:30:26.758200  345195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:30:26.771174  345195 kubeadm.go:883] updating cluster {Name:custom-flannel-279893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.31.2 ClusterName:custom-flannel-279893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.61.54 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize
:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:30:26.771352  345195 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:30:26.771429  345195 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:30:26.805446  345195 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 21:30:26.805536  345195 ssh_runner.go:195] Run: which lz4
	I1205 21:30:26.809671  345195 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:30:26.814314  345195 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:30:26.814354  345195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 21:30:28.235608  345195 crio.go:462] duration metric: took 1.42596866s to copy over tarball
	I1205 21:30:28.235748  345195 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 21:30:30.707023  345195 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.471227316s)
	I1205 21:30:30.707066  345195 crio.go:469] duration metric: took 2.47141865s to extract the tarball
	I1205 21:30:30.707078  345195 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 21:30:30.744223  345195 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:30:30.792829  345195 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 21:30:30.792864  345195 cache_images.go:84] Images are preloaded, skipping loading
	I1205 21:30:30.792876  345195 kubeadm.go:934] updating node { 192.168.61.54 8443 v1.31.2 crio true true} ...
	I1205 21:30:30.793019  345195 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=custom-flannel-279893 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.54
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:custom-flannel-279893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I1205 21:30:30.793138  345195 ssh_runner.go:195] Run: crio config
	I1205 21:30:30.852757  345195 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1205 21:30:30.852801  345195 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:30:30.852832  345195 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.54 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-279893 NodeName:custom-flannel-279893 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.54"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.54 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:30:30.852956  345195 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.54
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-279893"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.54"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.54"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
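(One consistency point in the generated configuration above: the kubelet's cgroupDriver must match the cgroup_manager that was written into /etc/crio/crio.conf.d/02-crio.conf earlier in this run. A quick way to confirm both sides on the node once kubeadm has written /var/lib/kubelet/config.yaml — a sketch using commands and paths that already appear in this log:

    sudo crio config | grep cgroup_manager                 # expect: cgroup_manager = "cgroupfs"
    sudo grep cgroupDriver /var/lib/kubelet/config.yaml    # expect: cgroupDriver: cgroupfs
)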
	I1205 21:30:30.853024  345195 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:30:30.863959  345195 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:30:30.864041  345195 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:30:30.874509  345195 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I1205 21:30:30.895245  345195 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:30:30.915819  345195 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2298 bytes)
	I1205 21:30:30.938003  345195 ssh_runner.go:195] Run: grep 192.168.61.54	control-plane.minikube.internal$ /etc/hosts
	I1205 21:30:30.943395  345195 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.54	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:30:30.958777  345195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:30:31.095460  345195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:30:31.118503  345195 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893 for IP: 192.168.61.54
	I1205 21:30:31.118548  345195 certs.go:194] generating shared ca certs ...
	I1205 21:30:31.118571  345195 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:30:31.118799  345195 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:30:31.118870  345195 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:30:31.118885  345195 certs.go:256] generating profile certs ...
	I1205 21:30:31.118971  345195 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.key
	I1205 21:30:31.118995  345195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt with IP's: []
	I1205 21:30:31.205576  345195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt ...
	I1205 21:30:31.205612  345195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: {Name:mk18f0763b758711c57100eec8f4ea4f9d7bd710 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:30:31.214854  345195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.key ...
	I1205 21:30:31.214911  345195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.key: {Name:mk6184f99d443f525c2479c7817c83784b539d54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:30:31.215126  345195 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/apiserver.key.4b22e741
	I1205 21:30:31.215160  345195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/apiserver.crt.4b22e741 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.54]
	I1205 21:30:31.384137  345195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/apiserver.crt.4b22e741 ...
	I1205 21:30:31.384178  345195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/apiserver.crt.4b22e741: {Name:mk8780a7ff26a256b38b895022a64473d10cf0c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:30:31.384392  345195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/apiserver.key.4b22e741 ...
	I1205 21:30:31.384416  345195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/apiserver.key.4b22e741: {Name:mkfe4302554d8c254ebc7c7485f2563e38007b4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:30:31.384533  345195 certs.go:381] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/apiserver.crt.4b22e741 -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/apiserver.crt
	I1205 21:30:31.384659  345195 certs.go:385] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/apiserver.key.4b22e741 -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/apiserver.key
	I1205 21:30:31.384757  345195 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/proxy-client.key
	I1205 21:30:31.384783  345195 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/proxy-client.crt with IP's: []
	I1205 21:30:31.603408  345195 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/proxy-client.crt ...
	I1205 21:30:31.603446  345195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/proxy-client.crt: {Name:mk8ee444ac6c283e4d31d522dadc4022a5285624 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:30:31.626648  345195 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/proxy-client.key ...
	I1205 21:30:31.626706  345195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/proxy-client.key: {Name:mke81cfbc6db896b4e6f32de29024b669efd2182 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:30:31.627037  345195 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:30:31.627111  345195 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:30:31.627131  345195 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:30:31.627163  345195 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:30:31.627198  345195 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:30:31.627227  345195 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:30:31.627297  345195 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:30:31.628147  345195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:30:31.664124  345195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:30:31.692898  345195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:30:31.719844  345195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:30:31.746293  345195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 21:30:31.798295  345195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 21:30:31.839295  345195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:30:31.869029  345195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 21:30:31.929169  345195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:30:31.958871  345195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:30:31.995716  345195 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:30:32.021140  345195 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:30:32.039338  345195 ssh_runner.go:195] Run: openssl version
	I1205 21:30:32.046079  345195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:30:32.057798  345195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:30:32.064041  345195 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:30:32.064121  345195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:30:32.070311  345195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:30:32.082107  345195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:30:32.094744  345195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:30:32.099630  345195 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:30:32.099719  345195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:30:32.105988  345195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:30:32.117729  345195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:30:32.129425  345195 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:30:32.134331  345195 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:30:32.134416  345195 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:30:32.140219  345195 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
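(The openssl x509 -hash / ln -fs pairs above follow the standard OpenSSL subject-hash lookup scheme: each CA in /etc/ssl/certs gets a symlink named <subject-hash>.0 so TLS clients can locate it without scanning every file. Reproducing the step for the minikube CA by hand — the same commands as in the log, shown together for clarity:

    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"   # e.g. b5213941.0 above
)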
	I1205 21:30:32.151806  345195 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:30:32.156487  345195 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 21:30:32.156562  345195 kubeadm.go:392] StartCluster: {Name:custom-flannel-279893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.31.2 ClusterName:custom-flannel-279893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.61.54 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26
2144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:30:32.156666  345195 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:30:32.156762  345195 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:30:32.195375  345195 cri.go:89] found id: ""
	I1205 21:30:32.195461  345195 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:30:32.205856  345195 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:30:32.215900  345195 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:30:32.226092  345195 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:30:32.226114  345195 kubeadm.go:157] found existing configuration files:
	
	I1205 21:30:32.226173  345195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:30:32.236196  345195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:30:32.236266  345195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:30:32.246067  345195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:30:32.256652  345195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:30:32.256740  345195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:30:32.269218  345195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:30:32.281326  345195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:30:32.281409  345195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:30:32.293020  345195 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:30:32.304429  345195 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:30:32.304500  345195 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:30:32.316447  345195 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:30:32.377006  345195 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 21:30:32.377089  345195 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:30:32.491301  345195 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:30:32.491447  345195 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:30:32.491569  345195 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 21:30:32.500530  345195 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:30:29.482599  344606 pod_ready.go:103] pod "calico-kube-controllers-d4dc4cc65-7n745" in "kube-system" namespace has status "Ready":"False"
	I1205 21:30:33.098619  344606 pod_ready.go:103] pod "calico-kube-controllers-d4dc4cc65-7n745" in "kube-system" namespace has status "Ready":"False"
	I1205 21:30:34.309603  346676 start.go:364] duration metric: took 44.643184535s to acquireMachinesLock for "enable-default-cni-279893"
	I1205 21:30:34.309686  346676 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-279893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesC
onfig:{KubernetesVersion:v1.31.2 ClusterName:enable-default-cni-279893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:26214
4 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:30:34.309821  346676 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 21:30:34.312036  346676 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1205 21:30:34.312269  346676 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:30:34.312315  346676 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:30:34.333784  346676 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36925
	I1205 21:30:34.334366  346676 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:30:34.335073  346676 main.go:141] libmachine: Using API Version  1
	I1205 21:30:34.335100  346676 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:30:34.335578  346676 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:30:34.335875  346676 main.go:141] libmachine: (enable-default-cni-279893) Calling .GetMachineName
	I1205 21:30:34.336027  346676 main.go:141] libmachine: (enable-default-cni-279893) Calling .DriverName
	I1205 21:30:34.336245  346676 start.go:159] libmachine.API.Create for "enable-default-cni-279893" (driver="kvm2")
	I1205 21:30:34.336270  346676 client.go:168] LocalClient.Create starting
	I1205 21:30:34.336310  346676 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem
	I1205 21:30:34.336363  346676 main.go:141] libmachine: Decoding PEM data...
	I1205 21:30:34.336379  346676 main.go:141] libmachine: Parsing certificate...
	I1205 21:30:34.336461  346676 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem
	I1205 21:30:34.336488  346676 main.go:141] libmachine: Decoding PEM data...
	I1205 21:30:34.336499  346676 main.go:141] libmachine: Parsing certificate...
	I1205 21:30:34.336519  346676 main.go:141] libmachine: Running pre-create checks...
	I1205 21:30:34.336530  346676 main.go:141] libmachine: (enable-default-cni-279893) Calling .PreCreateCheck
	I1205 21:30:34.337012  346676 main.go:141] libmachine: (enable-default-cni-279893) Calling .GetConfigRaw
	I1205 21:30:34.337580  346676 main.go:141] libmachine: Creating machine...
	I1205 21:30:34.337600  346676 main.go:141] libmachine: (enable-default-cni-279893) Calling .Create
	I1205 21:30:34.337729  346676 main.go:141] libmachine: (enable-default-cni-279893) Creating KVM machine...
	I1205 21:30:34.339444  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | found existing default KVM network
	I1205 21:30:34.342069  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | I1205 21:30:34.341038  347094 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:25:d0:45} reservation:<nil>}
	I1205 21:30:34.342518  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | I1205 21:30:34.342414  347094 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:3a:33:7e} reservation:<nil>}
	I1205 21:30:34.344088  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | I1205 21:30:34.343955  347094 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:62:58:f4} reservation:<nil>}
	I1205 21:30:34.345692  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | I1205 21:30:34.345581  347094 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002a7e80}
	I1205 21:30:34.345855  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | created network xml: 
	I1205 21:30:34.345876  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | <network>
	I1205 21:30:34.345883  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG |   <name>mk-enable-default-cni-279893</name>
	I1205 21:30:34.345888  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG |   <dns enable='no'/>
	I1205 21:30:34.345894  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG |   
	I1205 21:30:34.345914  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I1205 21:30:34.345924  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG |     <dhcp>
	I1205 21:30:34.345932  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I1205 21:30:34.345940  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG |     </dhcp>
	I1205 21:30:34.345947  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG |   </ip>
	I1205 21:30:34.345955  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG |   
	I1205 21:30:34.345962  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | </network>
	I1205 21:30:34.345972  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | 
	I1205 21:30:34.351987  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | trying to create private KVM network mk-enable-default-cni-279893 192.168.72.0/24...
	I1205 21:30:34.477641  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | private KVM network mk-enable-default-cni-279893 192.168.72.0/24 created
	I1205 21:30:34.477729  346676 main.go:141] libmachine: (enable-default-cni-279893) Setting up store path in /home/jenkins/minikube-integration/20053-293485/.minikube/machines/enable-default-cni-279893 ...
	I1205 21:30:34.477864  346676 main.go:141] libmachine: (enable-default-cni-279893) Building disk image from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 21:30:34.477893  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | I1205 21:30:34.477808  347094 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:30:34.478115  346676 main.go:141] libmachine: (enable-default-cni-279893) Downloading /home/jenkins/minikube-integration/20053-293485/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
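
The network.go lines above show the driver skipping the already-claimed 192.168.39/50/61 networks and settling on 192.168.72.0/24 before writing the libvirt network XML. A minimal Go sketch of that kind of first-free-subnet scan, assuming a fixed candidate list and a taken() helper (both hypothetical here; the real code inspects existing virbr interfaces and reservations):

package main

import "fmt"

// pickFreeSubnet returns the first candidate /24 that is not already in use.
// The candidate list and taken() are illustration-only stand-ins.
func pickFreeSubnet(candidates []string, taken func(string) bool) (string, error) {
	for _, cidr := range candidates {
		if !taken(cidr) {
			return cidr, nil
		}
	}
	return "", fmt.Errorf("no free private subnet among %d candidates", len(candidates))
}

func main() {
	inUse := map[string]bool{"192.168.39.0/24": true, "192.168.50.0/24": true, "192.168.61.0/24": true}
	subnet, err := pickFreeSubnet(
		[]string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24", "192.168.72.0/24"},
		func(c string) bool { return inUse[c] },
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", subnet) // 192.168.72.0/24, as in the log
}
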
	I1205 21:30:32.709310  345195 out.go:235]   - Generating certificates and keys ...
	I1205 21:30:32.709440  345195 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:30:32.709513  345195 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:30:32.709639  345195 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 21:30:32.745713  345195 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 21:30:32.979063  345195 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 21:30:33.406500  345195 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 21:30:33.540246  345195 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 21:30:33.540450  345195 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-279893 localhost] and IPs [192.168.61.54 127.0.0.1 ::1]
	I1205 21:30:33.713636  345195 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 21:30:33.713860  345195 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-279893 localhost] and IPs [192.168.61.54 127.0.0.1 ::1]
	I1205 21:30:33.827683  345195 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 21:30:33.971398  345195 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 21:30:34.064157  345195 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 21:30:34.064271  345195 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:30:34.170847  345195 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:30:34.231319  345195 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 21:30:34.864811  345195 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:30:35.136628  345195 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:30:35.300421  345195 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:30:35.302028  345195 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:30:35.307830  345195 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:30:33.967138  346445 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:30:33.967179  346445 machine.go:96] duration metric: took 8.746652297s to provisionDockerMachine
	I1205 21:30:33.967195  346445 start.go:293] postStartSetup for "kubernetes-upgrade-055769" (driver="kvm2")
	I1205 21:30:33.967209  346445 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:30:33.967232  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .DriverName
	I1205 21:30:33.967666  346445 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:30:33.967704  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:30:33.971137  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:33.971693  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:30:33.971725  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:33.971967  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:30:33.972199  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:30:33.972425  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:30:33.972619  346445 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769/id_rsa Username:docker}
	I1205 21:30:34.061104  346445 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:30:34.065826  346445 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:30:34.065852  346445 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:30:34.065942  346445 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:30:34.066028  346445 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:30:34.066120  346445 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:30:34.077493  346445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:30:34.106743  346445 start.go:296] duration metric: took 139.52535ms for postStartSetup
	I1205 21:30:34.106812  346445 fix.go:56] duration metric: took 8.911726549s for fixHost
	I1205 21:30:34.106842  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:30:34.109939  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:34.110298  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:30:34.110338  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:34.110637  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:30:34.110881  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:30:34.111023  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:30:34.111200  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:30:34.111362  346445 main.go:141] libmachine: Using SSH client type: native
	I1205 21:30:34.111666  346445 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1205 21:30:34.111682  346445 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:30:34.309390  346445 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434234.297194252
	
	I1205 21:30:34.309428  346445 fix.go:216] guest clock: 1733434234.297194252
	I1205 21:30:34.309442  346445 fix.go:229] Guest: 2024-12-05 21:30:34.297194252 +0000 UTC Remote: 2024-12-05 21:30:34.106817483 +0000 UTC m=+46.877209744 (delta=190.376769ms)
	I1205 21:30:34.309478  346445 fix.go:200] guest clock delta is within tolerance: 190.376769ms
	I1205 21:30:34.309487  346445 start.go:83] releasing machines lock for "kubernetes-upgrade-055769", held for 9.114437201s
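
The fix.go lines above compare the guest clock (taken from `date +%s.%N` over SSH) with the host clock and accept the ~190ms skew as within tolerance. A rough Go sketch of that comparison, with both timestamps copied from the log and the tolerance value assumed for illustration (it is not necessarily minikube's actual constant):

package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Output of `date +%s.%N` on the guest, as seen in the log above.
	guestRaw := "1733434234.297194252"
	sec, err := strconv.ParseFloat(guestRaw, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(sec*float64(time.Second)))

	// Host-side timestamp taken just before the SSH call (value from the log).
	host := time.Date(2024, 12, 5, 21, 30, 34, 106817483, time.UTC)

	delta := guest.Sub(host)
	const tolerance = 2 * time.Second // assumed threshold for the sketch
	if math.Abs(float64(delta)) <= float64(tolerance) {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance, would resync\n", delta)
	}
}
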
	I1205 21:30:34.309549  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .DriverName
	I1205 21:30:34.309948  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetIP
	I1205 21:30:34.314034  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:34.314481  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:30:34.314584  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:34.315039  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .DriverName
	I1205 21:30:34.315705  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .DriverName
	I1205 21:30:34.315920  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .DriverName
	I1205 21:30:34.316005  346445 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:30:34.316060  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:30:34.316132  346445 ssh_runner.go:195] Run: cat /version.json
	I1205 21:30:34.316157  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:30:34.319257  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:34.319489  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:34.319809  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:30:34.319848  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:30:34.319912  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:34.319945  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:34.320153  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:30:34.320206  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:30:34.320364  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:30:34.320408  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:30:34.320503  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:30:34.320613  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:30:34.320685  346445 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769/id_rsa Username:docker}
	I1205 21:30:34.320783  346445 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769/id_rsa Username:docker}
	I1205 21:30:34.658918  346445 ssh_runner.go:195] Run: systemctl --version
	I1205 21:30:34.702539  346445 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:30:35.425164  346445 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:30:35.485686  346445 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:30:35.485789  346445 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:30:35.535193  346445 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 21:30:35.535299  346445 start.go:495] detecting cgroup driver to use...
	I1205 21:30:35.535406  346445 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:30:35.596014  346445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:30:35.642332  346445 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:30:35.642415  346445 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:30:35.688617  346445 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:30:35.730913  346445 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:30:35.948768  346445 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:30:36.220370  346445 docker.go:233] disabling docker service ...
	I1205 21:30:36.220474  346445 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:30:36.273788  346445 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:30:36.302943  346445 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:30:36.542291  346445 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:30:36.800689  346445 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:30:36.835463  346445 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:30:36.873798  346445 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:30:36.873884  346445 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:30:36.886991  346445 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:30:36.887069  346445 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:30:36.902422  346445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:30:36.920979  346445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:30:36.941151  346445 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:30:36.969197  346445 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:30:37.003664  346445 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:30:37.021276  346445 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:30:37.035020  346445 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:30:37.047853  346445 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:30:37.070175  346445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
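
The sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the registry.k8s.io/pause:3.10 pause image and the cgroupfs cgroup manager, then reload systemd before restarting crio. A hedged Go sketch of the same kind of in-place key rewrite; the file path and keys mirror the log, but doing it with regexp instead of sed is purely for illustration:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// rewriteKey replaces any existing `key = ...` line in a CRI-O drop-in with the
// given value, mirroring the sed commands in the log above.
func rewriteKey(conf []byte, key, value string) []byte {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAll(conf, []byte(fmt.Sprintf("%s = %q", key, value)))
}

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	conf, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	conf = rewriteKey(conf, "pause_image", "registry.k8s.io/pause:3.10")
	conf = rewriteKey(conf, "cgroup_manager", "cgroupfs")
	if err := os.WriteFile(path, conf, 0o644); err != nil {
		panic(err)
	}
	fmt.Println("updated", path, "- restart crio to apply")
}
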
	I1205 21:30:35.482372  344606 pod_ready.go:103] pod "calico-kube-controllers-d4dc4cc65-7n745" in "kube-system" namespace has status "Ready":"False"
	I1205 21:30:37.980802  344606 pod_ready.go:103] pod "calico-kube-controllers-d4dc4cc65-7n745" in "kube-system" namespace has status "Ready":"False"
	I1205 21:30:34.845482  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | I1205 21:30:34.845305  347094 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/enable-default-cni-279893/id_rsa...
	I1205 21:30:34.995415  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | I1205 21:30:34.995244  347094 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/enable-default-cni-279893/enable-default-cni-279893.rawdisk...
	I1205 21:30:34.995447  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | Writing magic tar header
	I1205 21:30:34.995468  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | Writing SSH key tar header
	I1205 21:30:34.995498  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | I1205 21:30:34.995459  347094 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/enable-default-cni-279893 ...
	I1205 21:30:34.995659  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/enable-default-cni-279893
	I1205 21:30:34.995703  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines
	I1205 21:30:34.995718  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:30:34.995729  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485
	I1205 21:30:34.995742  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 21:30:34.995751  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | Checking permissions on dir: /home/jenkins
	I1205 21:30:34.995762  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | Checking permissions on dir: /home
	I1205 21:30:34.995771  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | Skipping /home - not owner
	I1205 21:30:34.995793  346676 main.go:141] libmachine: (enable-default-cni-279893) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/enable-default-cni-279893 (perms=drwx------)
	I1205 21:30:34.995805  346676 main.go:141] libmachine: (enable-default-cni-279893) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines (perms=drwxr-xr-x)
	I1205 21:30:34.995816  346676 main.go:141] libmachine: (enable-default-cni-279893) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube (perms=drwxr-xr-x)
	I1205 21:30:34.995827  346676 main.go:141] libmachine: (enable-default-cni-279893) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485 (perms=drwxrwxr-x)
	I1205 21:30:34.995838  346676 main.go:141] libmachine: (enable-default-cni-279893) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 21:30:34.995847  346676 main.go:141] libmachine: (enable-default-cni-279893) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 21:30:34.995857  346676 main.go:141] libmachine: (enable-default-cni-279893) Creating domain...
	I1205 21:30:34.997465  346676 main.go:141] libmachine: (enable-default-cni-279893) define libvirt domain using xml: 
	I1205 21:30:34.997492  346676 main.go:141] libmachine: (enable-default-cni-279893) <domain type='kvm'>
	I1205 21:30:34.997504  346676 main.go:141] libmachine: (enable-default-cni-279893)   <name>enable-default-cni-279893</name>
	I1205 21:30:34.997511  346676 main.go:141] libmachine: (enable-default-cni-279893)   <memory unit='MiB'>3072</memory>
	I1205 21:30:34.997518  346676 main.go:141] libmachine: (enable-default-cni-279893)   <vcpu>2</vcpu>
	I1205 21:30:34.997528  346676 main.go:141] libmachine: (enable-default-cni-279893)   <features>
	I1205 21:30:34.997539  346676 main.go:141] libmachine: (enable-default-cni-279893)     <acpi/>
	I1205 21:30:34.997546  346676 main.go:141] libmachine: (enable-default-cni-279893)     <apic/>
	I1205 21:30:34.997552  346676 main.go:141] libmachine: (enable-default-cni-279893)     <pae/>
	I1205 21:30:34.997566  346676 main.go:141] libmachine: (enable-default-cni-279893)     
	I1205 21:30:34.997575  346676 main.go:141] libmachine: (enable-default-cni-279893)   </features>
	I1205 21:30:34.997582  346676 main.go:141] libmachine: (enable-default-cni-279893)   <cpu mode='host-passthrough'>
	I1205 21:30:34.997589  346676 main.go:141] libmachine: (enable-default-cni-279893)   
	I1205 21:30:34.997596  346676 main.go:141] libmachine: (enable-default-cni-279893)   </cpu>
	I1205 21:30:34.997603  346676 main.go:141] libmachine: (enable-default-cni-279893)   <os>
	I1205 21:30:34.997610  346676 main.go:141] libmachine: (enable-default-cni-279893)     <type>hvm</type>
	I1205 21:30:34.997618  346676 main.go:141] libmachine: (enable-default-cni-279893)     <boot dev='cdrom'/>
	I1205 21:30:34.997625  346676 main.go:141] libmachine: (enable-default-cni-279893)     <boot dev='hd'/>
	I1205 21:30:34.997633  346676 main.go:141] libmachine: (enable-default-cni-279893)     <bootmenu enable='no'/>
	I1205 21:30:34.997640  346676 main.go:141] libmachine: (enable-default-cni-279893)   </os>
	I1205 21:30:34.997648  346676 main.go:141] libmachine: (enable-default-cni-279893)   <devices>
	I1205 21:30:34.997656  346676 main.go:141] libmachine: (enable-default-cni-279893)     <disk type='file' device='cdrom'>
	I1205 21:30:34.997671  346676 main.go:141] libmachine: (enable-default-cni-279893)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/enable-default-cni-279893/boot2docker.iso'/>
	I1205 21:30:34.997679  346676 main.go:141] libmachine: (enable-default-cni-279893)       <target dev='hdc' bus='scsi'/>
	I1205 21:30:34.997688  346676 main.go:141] libmachine: (enable-default-cni-279893)       <readonly/>
	I1205 21:30:34.997694  346676 main.go:141] libmachine: (enable-default-cni-279893)     </disk>
	I1205 21:30:34.997705  346676 main.go:141] libmachine: (enable-default-cni-279893)     <disk type='file' device='disk'>
	I1205 21:30:34.997715  346676 main.go:141] libmachine: (enable-default-cni-279893)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 21:30:34.997729  346676 main.go:141] libmachine: (enable-default-cni-279893)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/enable-default-cni-279893/enable-default-cni-279893.rawdisk'/>
	I1205 21:30:34.997737  346676 main.go:141] libmachine: (enable-default-cni-279893)       <target dev='hda' bus='virtio'/>
	I1205 21:30:34.997745  346676 main.go:141] libmachine: (enable-default-cni-279893)     </disk>
	I1205 21:30:34.997752  346676 main.go:141] libmachine: (enable-default-cni-279893)     <interface type='network'>
	I1205 21:30:34.997764  346676 main.go:141] libmachine: (enable-default-cni-279893)       <source network='mk-enable-default-cni-279893'/>
	I1205 21:30:34.997771  346676 main.go:141] libmachine: (enable-default-cni-279893)       <model type='virtio'/>
	I1205 21:30:34.997779  346676 main.go:141] libmachine: (enable-default-cni-279893)     </interface>
	I1205 21:30:34.997787  346676 main.go:141] libmachine: (enable-default-cni-279893)     <interface type='network'>
	I1205 21:30:34.997795  346676 main.go:141] libmachine: (enable-default-cni-279893)       <source network='default'/>
	I1205 21:30:34.997807  346676 main.go:141] libmachine: (enable-default-cni-279893)       <model type='virtio'/>
	I1205 21:30:34.997816  346676 main.go:141] libmachine: (enable-default-cni-279893)     </interface>
	I1205 21:30:34.997823  346676 main.go:141] libmachine: (enable-default-cni-279893)     <serial type='pty'>
	I1205 21:30:34.997831  346676 main.go:141] libmachine: (enable-default-cni-279893)       <target port='0'/>
	I1205 21:30:34.997838  346676 main.go:141] libmachine: (enable-default-cni-279893)     </serial>
	I1205 21:30:34.997852  346676 main.go:141] libmachine: (enable-default-cni-279893)     <console type='pty'>
	I1205 21:30:34.997859  346676 main.go:141] libmachine: (enable-default-cni-279893)       <target type='serial' port='0'/>
	I1205 21:30:34.997867  346676 main.go:141] libmachine: (enable-default-cni-279893)     </console>
	I1205 21:30:34.997874  346676 main.go:141] libmachine: (enable-default-cni-279893)     <rng model='virtio'>
	I1205 21:30:34.997883  346676 main.go:141] libmachine: (enable-default-cni-279893)       <backend model='random'>/dev/random</backend>
	I1205 21:30:34.997890  346676 main.go:141] libmachine: (enable-default-cni-279893)     </rng>
	I1205 21:30:34.997897  346676 main.go:141] libmachine: (enable-default-cni-279893)     
	I1205 21:30:34.997918  346676 main.go:141] libmachine: (enable-default-cni-279893)     
	I1205 21:30:34.997926  346676 main.go:141] libmachine: (enable-default-cni-279893)   </devices>
	I1205 21:30:34.997933  346676 main.go:141] libmachine: (enable-default-cni-279893) </domain>
	I1205 21:30:34.997944  346676 main.go:141] libmachine: (enable-default-cni-279893) 
	I1205 21:30:35.007218  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | domain enable-default-cni-279893 has defined MAC address 52:54:00:c2:f9:3d in network default
	I1205 21:30:35.008110  346676 main.go:141] libmachine: (enable-default-cni-279893) Ensuring networks are active...
	I1205 21:30:35.008144  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | domain enable-default-cni-279893 has defined MAC address 52:54:00:ca:06:b8 in network mk-enable-default-cni-279893
	I1205 21:30:35.009135  346676 main.go:141] libmachine: (enable-default-cni-279893) Ensuring network default is active
	I1205 21:30:35.009595  346676 main.go:141] libmachine: (enable-default-cni-279893) Ensuring network mk-enable-default-cni-279893 is active
	I1205 21:30:35.010273  346676 main.go:141] libmachine: (enable-default-cni-279893) Getting domain xml...
	I1205 21:30:35.011212  346676 main.go:141] libmachine: (enable-default-cni-279893) Creating domain...
	I1205 21:30:36.831947  346676 main.go:141] libmachine: (enable-default-cni-279893) Waiting to get IP...
	I1205 21:30:36.832737  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | domain enable-default-cni-279893 has defined MAC address 52:54:00:ca:06:b8 in network mk-enable-default-cni-279893
	I1205 21:30:36.833298  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | unable to find current IP address of domain enable-default-cni-279893 in network mk-enable-default-cni-279893
	I1205 21:30:36.833347  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | I1205 21:30:36.833302  347094 retry.go:31] will retry after 209.088011ms: waiting for machine to come up
	I1205 21:30:37.044019  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | domain enable-default-cni-279893 has defined MAC address 52:54:00:ca:06:b8 in network mk-enable-default-cni-279893
	I1205 21:30:37.044830  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | unable to find current IP address of domain enable-default-cni-279893 in network mk-enable-default-cni-279893
	I1205 21:30:37.044860  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | I1205 21:30:37.044721  347094 retry.go:31] will retry after 325.228011ms: waiting for machine to come up
	I1205 21:30:37.371431  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | domain enable-default-cni-279893 has defined MAC address 52:54:00:ca:06:b8 in network mk-enable-default-cni-279893
	I1205 21:30:37.372292  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | unable to find current IP address of domain enable-default-cni-279893 in network mk-enable-default-cni-279893
	I1205 21:30:37.372420  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | I1205 21:30:37.372367  347094 retry.go:31] will retry after 382.978126ms: waiting for machine to come up
	I1205 21:30:37.757194  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | domain enable-default-cni-279893 has defined MAC address 52:54:00:ca:06:b8 in network mk-enable-default-cni-279893
	I1205 21:30:37.757716  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | unable to find current IP address of domain enable-default-cni-279893 in network mk-enable-default-cni-279893
	I1205 21:30:37.757745  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | I1205 21:30:37.757663  347094 retry.go:31] will retry after 564.122716ms: waiting for machine to come up
	I1205 21:30:38.323736  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | domain enable-default-cni-279893 has defined MAC address 52:54:00:ca:06:b8 in network mk-enable-default-cni-279893
	I1205 21:30:38.324263  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | unable to find current IP address of domain enable-default-cni-279893 in network mk-enable-default-cni-279893
	I1205 21:30:38.324318  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | I1205 21:30:38.324204  347094 retry.go:31] will retry after 556.676286ms: waiting for machine to come up
	I1205 21:30:38.882068  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | domain enable-default-cni-279893 has defined MAC address 52:54:00:ca:06:b8 in network mk-enable-default-cni-279893
	I1205 21:30:38.882628  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | unable to find current IP address of domain enable-default-cni-279893 in network mk-enable-default-cni-279893
	I1205 21:30:38.882672  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | I1205 21:30:38.882564  347094 retry.go:31] will retry after 785.681992ms: waiting for machine to come up
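
The retry.go lines above poll the new domain's DHCP lease for an IP address, sleeping a little longer on each attempt (209ms, 325ms, 383ms, ...). A small Go sketch of that poll-with-growing-backoff pattern; lookupIP is a stand-in for the real lease query, and the growth factor is illustrative rather than the exact schedule:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookupIP until it returns an address or the deadline passes,
// backing off between attempts, similar to the retries in the log above.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", backoff)
		time.Sleep(backoff)
		backoff += backoff / 2 // grow roughly 1.5x per attempt
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.72.2", nil // first DHCP address in the range created above
	}, 30*time.Second)
	fmt.Println(ip, err)
}
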
	I1205 21:30:35.309427  345195 out.go:235]   - Booting up control plane ...
	I1205 21:30:35.309578  345195 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:30:35.309683  345195 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:30:35.310185  345195 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:30:35.334943  345195 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:30:35.347294  345195 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:30:35.347476  345195 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:30:35.542281  345195 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 21:30:35.542525  345195 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 21:30:36.544217  345195 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001908259s
	I1205 21:30:36.544341  345195 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 21:30:37.332766  346445 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:30:42.546250  345195 kubeadm.go:310] [api-check] The API server is healthy after 6.001618286s
	I1205 21:30:42.561970  345195 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 21:30:42.584827  345195 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 21:30:42.629306  345195 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 21:30:42.629570  345195 kubeadm.go:310] [mark-control-plane] Marking the node custom-flannel-279893 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 21:30:42.642088  345195 kubeadm.go:310] [bootstrap-token] Using token: c33v1g.dz5c8xysy7geft3h
	I1205 21:30:39.986703  344606 pod_ready.go:103] pod "calico-kube-controllers-d4dc4cc65-7n745" in "kube-system" namespace has status "Ready":"False"
	I1205 21:30:42.483293  344606 pod_ready.go:103] pod "calico-kube-controllers-d4dc4cc65-7n745" in "kube-system" namespace has status "Ready":"False"
	I1205 21:30:42.643624  345195 out.go:235]   - Configuring RBAC rules ...
	I1205 21:30:42.643785  345195 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 21:30:42.648698  345195 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 21:30:42.660318  345195 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 21:30:42.664452  345195 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 21:30:42.668102  345195 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 21:30:42.671433  345195 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 21:30:42.953482  345195 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 21:30:43.395449  345195 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 21:30:43.949086  345195 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 21:30:43.949130  345195 kubeadm.go:310] 
	I1205 21:30:43.949212  345195 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 21:30:43.949223  345195 kubeadm.go:310] 
	I1205 21:30:43.949368  345195 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 21:30:43.949400  345195 kubeadm.go:310] 
	I1205 21:30:43.949441  345195 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 21:30:43.949538  345195 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 21:30:43.949610  345195 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 21:30:43.949616  345195 kubeadm.go:310] 
	I1205 21:30:43.949697  345195 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 21:30:43.949707  345195 kubeadm.go:310] 
	I1205 21:30:43.949780  345195 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 21:30:43.949790  345195 kubeadm.go:310] 
	I1205 21:30:43.949880  345195 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 21:30:43.950025  345195 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 21:30:43.950134  345195 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 21:30:43.950146  345195 kubeadm.go:310] 
	I1205 21:30:43.950263  345195 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 21:30:43.950405  345195 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 21:30:43.950417  345195 kubeadm.go:310] 
	I1205 21:30:43.950533  345195 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token c33v1g.dz5c8xysy7geft3h \
	I1205 21:30:43.950713  345195 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 \
	I1205 21:30:43.950749  345195 kubeadm.go:310] 	--control-plane 
	I1205 21:30:43.950758  345195 kubeadm.go:310] 
	I1205 21:30:43.950892  345195 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 21:30:43.950912  345195 kubeadm.go:310] 
	I1205 21:30:43.951014  345195 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token c33v1g.dz5c8xysy7geft3h \
	I1205 21:30:43.951144  345195 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 
	I1205 21:30:43.951771  345195 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:30:43.951801  345195 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1205 21:30:43.954406  345195 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I1205 21:30:39.670363  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | domain enable-default-cni-279893 has defined MAC address 52:54:00:ca:06:b8 in network mk-enable-default-cni-279893
	I1205 21:30:39.670903  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | unable to find current IP address of domain enable-default-cni-279893 in network mk-enable-default-cni-279893
	I1205 21:30:39.670926  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | I1205 21:30:39.670835  347094 retry.go:31] will retry after 987.103499ms: waiting for machine to come up
	I1205 21:30:40.659289  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | domain enable-default-cni-279893 has defined MAC address 52:54:00:ca:06:b8 in network mk-enable-default-cni-279893
	I1205 21:30:40.659936  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | unable to find current IP address of domain enable-default-cni-279893 in network mk-enable-default-cni-279893
	I1205 21:30:40.659972  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | I1205 21:30:40.659871  347094 retry.go:31] will retry after 984.912307ms: waiting for machine to come up
	I1205 21:30:41.646326  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | domain enable-default-cni-279893 has defined MAC address 52:54:00:ca:06:b8 in network mk-enable-default-cni-279893
	I1205 21:30:41.646919  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | unable to find current IP address of domain enable-default-cni-279893 in network mk-enable-default-cni-279893
	I1205 21:30:41.646955  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | I1205 21:30:41.646847  347094 retry.go:31] will retry after 1.391449467s: waiting for machine to come up
	I1205 21:30:43.040415  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | domain enable-default-cni-279893 has defined MAC address 52:54:00:ca:06:b8 in network mk-enable-default-cni-279893
	I1205 21:30:43.040914  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | unable to find current IP address of domain enable-default-cni-279893 in network mk-enable-default-cni-279893
	I1205 21:30:43.040960  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | I1205 21:30:43.040885  347094 retry.go:31] will retry after 1.482496802s: waiting for machine to come up
	I1205 21:30:44.525674  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | domain enable-default-cni-279893 has defined MAC address 52:54:00:ca:06:b8 in network mk-enable-default-cni-279893
	I1205 21:30:44.526360  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | unable to find current IP address of domain enable-default-cni-279893 in network mk-enable-default-cni-279893
	I1205 21:30:44.526389  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | I1205 21:30:44.526301  347094 retry.go:31] will retry after 2.12620105s: waiting for machine to come up
	I1205 21:30:43.955684  345195 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1205 21:30:43.955753  345195 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1205 21:30:43.962503  345195 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1205 21:30:43.962543  345195 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I1205 21:30:43.997575  345195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1205 21:30:44.431637  345195 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:30:44.431785  345195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:30:44.431785  345195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-279893 minikube.k8s.io/updated_at=2024_12_05T21_30_44_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=custom-flannel-279893 minikube.k8s.io/primary=true
	I1205 21:30:44.466774  345195 ops.go:34] apiserver oom_adj: -16
	I1205 21:30:44.679157  345195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:30:45.179757  345195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:30:45.679993  345195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:30:46.180156  345195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:30:46.680080  345195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:30:47.179720  345195 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:30:47.272673  345195 kubeadm.go:1113] duration metric: took 2.840988228s to wait for elevateKubeSystemPrivileges
	I1205 21:30:47.272722  345195 kubeadm.go:394] duration metric: took 15.116165855s to StartCluster
	I1205 21:30:47.272749  345195 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:30:47.272857  345195 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:30:47.274835  345195 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:30:47.275168  345195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1205 21:30:47.275168  345195 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.61.54 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:30:47.275234  345195 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:30:47.275370  345195 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-279893"
	I1205 21:30:47.275391  345195 addons.go:234] Setting addon storage-provisioner=true in "custom-flannel-279893"
	I1205 21:30:47.275417  345195 config.go:182] Loaded profile config "custom-flannel-279893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:30:47.275442  345195 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-279893"
	I1205 21:30:47.275464  345195 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-279893"
	I1205 21:30:47.275958  345195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:30:47.276006  345195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:30:47.276292  345195 host.go:66] Checking if "custom-flannel-279893" exists ...
	I1205 21:30:47.276783  345195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:30:47.276837  345195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:30:47.277056  345195 out.go:177] * Verifying Kubernetes components...
	I1205 21:30:47.278385  345195 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:30:47.296603  345195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37575
	I1205 21:30:47.297078  345195 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:30:47.297634  345195 main.go:141] libmachine: Using API Version  1
	I1205 21:30:47.297659  345195 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:30:47.298230  345195 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:30:47.298888  345195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:30:47.298944  345195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:30:47.299788  345195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41327
	I1205 21:30:47.300206  345195 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:30:47.300746  345195 main.go:141] libmachine: Using API Version  1
	I1205 21:30:47.300765  345195 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:30:47.301151  345195 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:30:47.301372  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetState
	I1205 21:30:47.305637  345195 addons.go:234] Setting addon default-storageclass=true in "custom-flannel-279893"
	I1205 21:30:47.305694  345195 host.go:66] Checking if "custom-flannel-279893" exists ...
	I1205 21:30:47.306131  345195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:30:47.306181  345195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:30:47.321335  345195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41615
	I1205 21:30:47.322328  345195 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:30:47.323091  345195 main.go:141] libmachine: Using API Version  1
	I1205 21:30:47.323121  345195 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:30:47.323627  345195 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:30:47.327025  345195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41475
	I1205 21:30:47.327526  345195 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:30:47.328130  345195 main.go:141] libmachine: Using API Version  1
	I1205 21:30:47.328144  345195 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:30:47.328514  345195 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:30:47.328951  345195 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:30:47.328977  345195 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:30:47.330230  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetState
	I1205 21:30:47.332497  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .DriverName
	I1205 21:30:47.334903  345195 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:30:44.483411  344606 pod_ready.go:103] pod "calico-kube-controllers-d4dc4cc65-7n745" in "kube-system" namespace has status "Ready":"False"
	I1205 21:30:45.484231  344606 pod_ready.go:93] pod "calico-kube-controllers-d4dc4cc65-7n745" in "kube-system" namespace has status "Ready":"True"
	I1205 21:30:45.484260  344606 pod_ready.go:82] duration metric: took 18.011952665s for pod "calico-kube-controllers-d4dc4cc65-7n745" in "kube-system" namespace to be "Ready" ...
	I1205 21:30:45.484275  344606 pod_ready.go:79] waiting up to 15m0s for pod "calico-node-w6rw8" in "kube-system" namespace to be "Ready" ...
	I1205 21:30:45.491602  344606 pod_ready.go:93] pod "calico-node-w6rw8" in "kube-system" namespace has status "Ready":"True"
	I1205 21:30:45.491635  344606 pod_ready.go:82] duration metric: took 7.352287ms for pod "calico-node-w6rw8" in "kube-system" namespace to be "Ready" ...
	I1205 21:30:45.491651  344606 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-6tj29" in "kube-system" namespace to be "Ready" ...
	I1205 21:30:45.497803  344606 pod_ready.go:93] pod "coredns-7c65d6cfc9-6tj29" in "kube-system" namespace has status "Ready":"True"
	I1205 21:30:45.497832  344606 pod_ready.go:82] duration metric: took 6.172024ms for pod "coredns-7c65d6cfc9-6tj29" in "kube-system" namespace to be "Ready" ...
	I1205 21:30:45.497846  344606 pod_ready.go:79] waiting up to 15m0s for pod "etcd-calico-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:30:45.503460  344606 pod_ready.go:93] pod "etcd-calico-279893" in "kube-system" namespace has status "Ready":"True"
	I1205 21:30:45.503488  344606 pod_ready.go:82] duration metric: took 5.632872ms for pod "etcd-calico-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:30:45.503501  344606 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-calico-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:30:45.509188  344606 pod_ready.go:93] pod "kube-apiserver-calico-279893" in "kube-system" namespace has status "Ready":"True"
	I1205 21:30:45.509217  344606 pod_ready.go:82] duration metric: took 5.706792ms for pod "kube-apiserver-calico-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:30:45.509231  344606 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-calico-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:30:45.877306  344606 pod_ready.go:93] pod "kube-controller-manager-calico-279893" in "kube-system" namespace has status "Ready":"True"
	I1205 21:30:45.877337  344606 pod_ready.go:82] duration metric: took 368.096928ms for pod "kube-controller-manager-calico-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:30:45.877353  344606 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-c6srw" in "kube-system" namespace to be "Ready" ...
	I1205 21:30:46.278792  344606 pod_ready.go:93] pod "kube-proxy-c6srw" in "kube-system" namespace has status "Ready":"True"
	I1205 21:30:46.278820  344606 pod_ready.go:82] duration metric: took 401.459145ms for pod "kube-proxy-c6srw" in "kube-system" namespace to be "Ready" ...
	I1205 21:30:46.278830  344606 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-calico-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:30:46.677129  344606 pod_ready.go:93] pod "kube-scheduler-calico-279893" in "kube-system" namespace has status "Ready":"True"
	I1205 21:30:46.677162  344606 pod_ready.go:82] duration metric: took 398.323184ms for pod "kube-scheduler-calico-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:30:46.677177  344606 pod_ready.go:39] duration metric: took 19.221462547s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
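
The pod_ready.go loop above waits for each control-plane pod's Ready condition to flip to True before moving on. A minimal stand-in for that check which shells out to kubectl's JSONPath support; the context, namespace, and pod name are copied from the log, and the sketch assumes kubectl is on PATH with that context configured:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// podReady reads the pod's Ready condition, the same signal pod_ready.go
// is waiting on in the log above.
func podReady(context, namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
		"get", "pod", name, "-o",
		`jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	for {
		ready, err := podReady("calico-279893", "kube-system", "calico-kube-controllers-d4dc4cc65-7n745")
		if err == nil && ready {
			fmt.Println("pod is Ready")
			return
		}
		fmt.Println("pod not Ready yet, polling again in 2s")
		time.Sleep(2 * time.Second)
	}
}
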
	I1205 21:30:46.677198  344606 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:30:46.677276  344606 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:30:46.700779  344606 api_server.go:72] duration metric: took 28.649780886s to wait for apiserver process to appear ...
	I1205 21:30:46.700818  344606 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:30:46.700847  344606 api_server.go:253] Checking apiserver healthz at https://192.168.39.206:8443/healthz ...
	I1205 21:30:46.706494  344606 api_server.go:279] https://192.168.39.206:8443/healthz returned 200:
	ok
	I1205 21:30:46.707871  344606 api_server.go:141] control plane version: v1.31.2
	I1205 21:30:46.707902  344606 api_server.go:131] duration metric: took 7.076604ms to wait for apiserver health ...
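
The api_server.go lines above probe https://192.168.39.206:8443/healthz until it answers 200 with "ok". A bare-bones Go version of that probe; skipping TLS verification is an illustration-only shortcut so the sketch runs without the cluster CA (minikube authenticates with the cluster's client certificates instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// InsecureSkipVerify only so this sketch works without the cluster CA;
		// do not use it outside throwaway test probes.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.206:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body) // expect 200: ok
}
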
	I1205 21:30:46.707913  344606 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:30:46.882088  344606 system_pods.go:59] 9 kube-system pods found
	I1205 21:30:46.882137  344606 system_pods.go:61] "calico-kube-controllers-d4dc4cc65-7n745" [9f3f7bde-ccfe-40d7-b056-63fa88de0ed4] Running
	I1205 21:30:46.882148  344606 system_pods.go:61] "calico-node-w6rw8" [abadbfb2-8173-4e2b-a5ba-8876045a0b76] Running
	I1205 21:30:46.882155  344606 system_pods.go:61] "coredns-7c65d6cfc9-6tj29" [608ac447-eed7-4cd1-928d-99de2e12d29b] Running
	I1205 21:30:46.882160  344606 system_pods.go:61] "etcd-calico-279893" [ff5ca2c3-60b4-4795-8cef-5fa5f362fb29] Running
	I1205 21:30:46.882166  344606 system_pods.go:61] "kube-apiserver-calico-279893" [ed96a6ca-2a09-4fae-957c-0ff9450196f8] Running
	I1205 21:30:46.882171  344606 system_pods.go:61] "kube-controller-manager-calico-279893" [46fd18b0-38f8-4c2d-96e9-700c69327d83] Running
	I1205 21:30:46.882176  344606 system_pods.go:61] "kube-proxy-c6srw" [e770e73d-72d7-4e39-a9b2-14baf99de0e8] Running
	I1205 21:30:46.882181  344606 system_pods.go:61] "kube-scheduler-calico-279893" [c93a06c9-970e-40e4-b95b-0fcd13db44eb] Running
	I1205 21:30:46.882187  344606 system_pods.go:61] "storage-provisioner" [851981ab-8f2d-4eca-9844-14eac749e456] Running
	I1205 21:30:46.882200  344606 system_pods.go:74] duration metric: took 174.278535ms to wait for pod list to return data ...
	I1205 21:30:46.882211  344606 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:30:47.076909  344606 default_sa.go:45] found service account: "default"
	I1205 21:30:47.076941  344606 default_sa.go:55] duration metric: took 194.722836ms for default service account to be created ...
	I1205 21:30:47.076950  344606 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:30:47.286693  344606 system_pods.go:86] 9 kube-system pods found
	I1205 21:30:47.286737  344606 system_pods.go:89] "calico-kube-controllers-d4dc4cc65-7n745" [9f3f7bde-ccfe-40d7-b056-63fa88de0ed4] Running
	I1205 21:30:47.286748  344606 system_pods.go:89] "calico-node-w6rw8" [abadbfb2-8173-4e2b-a5ba-8876045a0b76] Running
	I1205 21:30:47.286753  344606 system_pods.go:89] "coredns-7c65d6cfc9-6tj29" [608ac447-eed7-4cd1-928d-99de2e12d29b] Running
	I1205 21:30:47.286759  344606 system_pods.go:89] "etcd-calico-279893" [ff5ca2c3-60b4-4795-8cef-5fa5f362fb29] Running
	I1205 21:30:47.286765  344606 system_pods.go:89] "kube-apiserver-calico-279893" [ed96a6ca-2a09-4fae-957c-0ff9450196f8] Running
	I1205 21:30:47.286771  344606 system_pods.go:89] "kube-controller-manager-calico-279893" [46fd18b0-38f8-4c2d-96e9-700c69327d83] Running
	I1205 21:30:47.286776  344606 system_pods.go:89] "kube-proxy-c6srw" [e770e73d-72d7-4e39-a9b2-14baf99de0e8] Running
	I1205 21:30:47.286782  344606 system_pods.go:89] "kube-scheduler-calico-279893" [c93a06c9-970e-40e4-b95b-0fcd13db44eb] Running
	I1205 21:30:47.286788  344606 system_pods.go:89] "storage-provisioner" [851981ab-8f2d-4eca-9844-14eac749e456] Running
	I1205 21:30:47.286798  344606 system_pods.go:126] duration metric: took 209.841755ms to wait for k8s-apps to be running ...
	I1205 21:30:47.286813  344606 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:30:47.286887  344606 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:30:47.323844  344606 system_svc.go:56] duration metric: took 37.020356ms WaitForService to wait for kubelet
	I1205 21:30:47.323871  344606 kubeadm.go:582] duration metric: took 29.27288012s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:30:47.323893  344606 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:30:47.480253  344606 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:30:47.480331  344606 node_conditions.go:123] node cpu capacity is 2
	I1205 21:30:47.480349  344606 node_conditions.go:105] duration metric: took 156.449604ms to run NodePressure ...
	I1205 21:30:47.480366  344606 start.go:241] waiting for startup goroutines ...
	I1205 21:30:47.480376  344606 start.go:246] waiting for cluster config update ...
	I1205 21:30:47.480391  344606 start.go:255] writing updated cluster config ...
	I1205 21:30:47.480781  344606 ssh_runner.go:195] Run: rm -f paused
	I1205 21:30:47.576295  344606 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:30:47.578179  344606 out.go:177] * Done! kubectl is now configured to use "calico-279893" cluster and "default" namespace by default
	I1205 21:30:48.007041  346445 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.674228412s)
	I1205 21:30:48.007078  346445 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:30:48.007130  346445 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:30:48.013684  346445 start.go:563] Will wait 60s for crictl version
	I1205 21:30:48.013770  346445 ssh_runner.go:195] Run: which crictl
	I1205 21:30:48.019154  346445 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:30:48.060731  346445 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:30:48.060825  346445 ssh_runner.go:195] Run: crio --version
	I1205 21:30:48.095333  346445 ssh_runner.go:195] Run: crio --version
	I1205 21:30:48.129575  346445 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 21:30:47.336504  345195 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:30:47.336537  345195 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 21:30:47.336565  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHHostname
	I1205 21:30:47.340314  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:47.341264  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:94:85", ip: ""} in network mk-custom-flannel-279893: {Iface:virbr3 ExpiryTime:2024-12-05 22:30:12 +0000 UTC Type:0 Mac:52:54:00:e1:94:85 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:custom-flannel-279893 Clientid:01:52:54:00:e1:94:85}
	I1205 21:30:47.341288  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined IP address 192.168.61.54 and MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:47.341517  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHPort
	I1205 21:30:47.341736  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHKeyPath
	I1205 21:30:47.341877  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHUsername
	I1205 21:30:47.342029  345195 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/custom-flannel-279893/id_rsa Username:docker}
	I1205 21:30:47.349417  345195 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34865
	I1205 21:30:47.351181  345195 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:30:47.351836  345195 main.go:141] libmachine: Using API Version  1
	I1205 21:30:47.351858  345195 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:30:47.352336  345195 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:30:47.352832  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetState
	I1205 21:30:47.358079  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .DriverName
	I1205 21:30:47.359328  345195 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 21:30:47.359351  345195 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 21:30:47.359379  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHHostname
	I1205 21:30:47.364264  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:47.368751  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e1:94:85", ip: ""} in network mk-custom-flannel-279893: {Iface:virbr3 ExpiryTime:2024-12-05 22:30:12 +0000 UTC Type:0 Mac:52:54:00:e1:94:85 Iaid: IPaddr:192.168.61.54 Prefix:24 Hostname:custom-flannel-279893 Clientid:01:52:54:00:e1:94:85}
	I1205 21:30:47.368784  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | domain custom-flannel-279893 has defined IP address 192.168.61.54 and MAC address 52:54:00:e1:94:85 in network mk-custom-flannel-279893
	I1205 21:30:47.369113  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHPort
	I1205 21:30:47.369830  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHKeyPath
	I1205 21:30:47.373513  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .GetSSHUsername
	I1205 21:30:47.373765  345195 sshutil.go:53] new ssh client: &{IP:192.168.61.54 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/custom-flannel-279893/id_rsa Username:docker}
	I1205 21:30:47.579166  345195 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1205 21:30:47.616615  345195 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:30:47.820360  345195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:30:47.869206  345195 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 21:30:48.216033  345195 start.go:971] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
	I1205 21:30:48.217259  345195 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-279893" to be "Ready" ...
	I1205 21:30:48.621446  345195 main.go:141] libmachine: Making call to close driver server
	I1205 21:30:48.621487  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .Close
	I1205 21:30:48.621570  345195 main.go:141] libmachine: Making call to close driver server
	I1205 21:30:48.621605  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .Close
	I1205 21:30:48.621848  345195 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:30:48.621869  345195 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:30:48.621879  345195 main.go:141] libmachine: Making call to close driver server
	I1205 21:30:48.621887  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .Close
	I1205 21:30:48.622081  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | Closing plugin on server side
	I1205 21:30:48.622127  345195 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:30:48.622139  345195 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:30:48.622146  345195 main.go:141] libmachine: Making call to close driver server
	I1205 21:30:48.622154  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .Close
	I1205 21:30:48.622284  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | Closing plugin on server side
	I1205 21:30:48.622331  345195 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:30:48.622344  345195 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:30:48.622437  345195 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:30:48.622448  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | Closing plugin on server side
	I1205 21:30:48.622451  345195 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:30:48.651293  345195 main.go:141] libmachine: Making call to close driver server
	I1205 21:30:48.651330  345195 main.go:141] libmachine: (custom-flannel-279893) Calling .Close
	I1205 21:30:48.651773  345195 main.go:141] libmachine: (custom-flannel-279893) DBG | Closing plugin on server side
	I1205 21:30:48.651813  345195 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:30:48.651825  345195 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:30:48.654926  345195 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I1205 21:30:46.655237  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | domain enable-default-cni-279893 has defined MAC address 52:54:00:ca:06:b8 in network mk-enable-default-cni-279893
	I1205 21:30:46.655796  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | unable to find current IP address of domain enable-default-cni-279893 in network mk-enable-default-cni-279893
	I1205 21:30:46.655848  346676 main.go:141] libmachine: (enable-default-cni-279893) DBG | I1205 21:30:46.655748  347094 retry.go:31] will retry after 2.974487567s: waiting for machine to come up
	I1205 21:30:48.656364  345195 addons.go:510] duration metric: took 1.381137046s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1205 21:30:48.722066  345195 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-279893" context rescaled to 1 replicas
	I1205 21:30:48.131045  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetIP
	I1205 21:30:48.134326  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:48.134755  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:30:48.134794  346445 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:30:48.134999  346445 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1205 21:30:48.140011  346445 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-055769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.31.2 ClusterName:kubernetes-upgrade-055769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.100 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:30:48.140176  346445 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:30:48.140250  346445 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:30:48.189058  346445 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 21:30:48.189088  346445 crio.go:433] Images already preloaded, skipping extraction
	I1205 21:30:48.189151  346445 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:30:48.238707  346445 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 21:30:48.238741  346445 cache_images.go:84] Images are preloaded, skipping loading
	I1205 21:30:48.238752  346445 kubeadm.go:934] updating node { 192.168.50.100 8443 v1.31.2 crio true true} ...
	I1205 21:30:48.238882  346445 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-055769 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.100
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:kubernetes-upgrade-055769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:30:48.238974  346445 ssh_runner.go:195] Run: crio config
	I1205 21:30:48.298530  346445 cni.go:84] Creating CNI manager for ""
	I1205 21:30:48.298559  346445 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:30:48.298572  346445 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:30:48.298603  346445 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.100 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-055769 NodeName:kubernetes-upgrade-055769 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.100"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.100 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:30:48.298796  346445 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.100
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-055769"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.100"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.100"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:30:48.298877  346445 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:30:48.309973  346445 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:30:48.310076  346445 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:30:48.321337  346445 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I1205 21:30:48.339835  346445 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:30:48.357855  346445 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2305 bytes)
	I1205 21:30:48.376343  346445 ssh_runner.go:195] Run: grep 192.168.50.100	control-plane.minikube.internal$ /etc/hosts
	I1205 21:30:48.381394  346445 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:30:48.541675  346445 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:30:48.558606  346445 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769 for IP: 192.168.50.100
	I1205 21:30:48.558639  346445 certs.go:194] generating shared ca certs ...
	I1205 21:30:48.558669  346445 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:30:48.558905  346445 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:30:48.558963  346445 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:30:48.558976  346445 certs.go:256] generating profile certs ...
	I1205 21:30:48.559084  346445 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/client.key
	I1205 21:30:48.559215  346445 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/apiserver.key.e9e33142
	I1205 21:30:48.559306  346445 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/proxy-client.key
	I1205 21:30:48.559470  346445 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:30:48.559511  346445 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:30:48.559525  346445 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:30:48.559558  346445 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:30:48.559590  346445 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:30:48.559618  346445 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:30:48.559673  346445 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:30:48.560497  346445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:30:48.586791  346445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:30:48.614558  346445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:30:48.643902  346445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:30:48.677035  346445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1205 21:30:48.703977  346445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 21:30:48.732218  346445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:30:48.761571  346445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 21:30:48.790795  346445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:30:48.819082  346445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:30:48.847193  346445 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:30:48.873482  346445 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:30:48.893876  346445 ssh_runner.go:195] Run: openssl version
	I1205 21:30:48.903996  346445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:30:48.917456  346445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:30:48.922484  346445 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:30:48.922570  346445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:30:48.928485  346445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:30:48.938850  346445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:30:48.950837  346445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:30:48.955839  346445 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:30:48.955925  346445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:30:48.961782  346445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:30:48.972758  346445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:30:48.985525  346445 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:30:48.992273  346445 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:30:48.992346  346445 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:30:49.000628  346445 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:30:49.015852  346445 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:30:49.022056  346445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:30:49.029705  346445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:30:49.037764  346445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:30:49.044830  346445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:30:49.052947  346445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:30:49.061413  346445 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 21:30:49.068086  346445 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-055769 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.31.2 ClusterName:kubernetes-upgrade-055769 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.100 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:30:49.068224  346445 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:30:49.068325  346445 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:30:49.113338  346445 cri.go:89] found id: "20eae25617140264f4f1a24d4e18699568489615474d13b6dc04e28ad4774885"
	I1205 21:30:49.113378  346445 cri.go:89] found id: "96ae052c2497a37a690ecd2a92b4d7b983febbd9a85e14354cf45b2861711764"
	I1205 21:30:49.113390  346445 cri.go:89] found id: "ea31e2e390dfe918af8e5d77e73dd7bcdd69e726a3b0a70756449a514cf36656"
	I1205 21:30:49.113396  346445 cri.go:89] found id: "5387f9e16f4dc81c9fb1cf50d78da860155dec64caea40ed79506f917933a044"
	I1205 21:30:49.113400  346445 cri.go:89] found id: "c5d8acad7ffd53e60d2f60ca8a6e9d3252f29717599a4a4d76a3aa6c57293ae9"
	I1205 21:30:49.113404  346445 cri.go:89] found id: "149bf273f37c06a44ce5b049211ad787812647a0e250952063d0e62ff18c0dfb"
	I1205 21:30:49.113408  346445 cri.go:89] found id: "6fef784f0d85e9be07d54a44c176320d8de4e05dfab7d3f1c3ad4db01d569d66"
	I1205 21:30:49.113412  346445 cri.go:89] found id: "f7d9f1875cddf0f5ce770356e6e8ea360f2884b12bfd52ee28facbbd542b60cf"
	I1205 21:30:49.113416  346445 cri.go:89] found id: "9cbbe6fc42fe6a267dc1d13067c845812428a0bd72124d38b9eeed112436c3a5"
	I1205 21:30:49.113424  346445 cri.go:89] found id: "7c17724227e1f451ba395a878a9e9f3758d868790ffd1e59c9b247e9d97f4727"
	I1205 21:30:49.113432  346445 cri.go:89] found id: "b7bfbeec9a6b5227a109d8cb9cded7cec0360553bc4c47ed1549ea4446317433"
	I1205 21:30:49.113436  346445 cri.go:89] found id: "fed79c15ee11b2e608e99e69ccece1bf98e261f203098370701be697b426504b"
	I1205 21:30:49.113441  346445 cri.go:89] found id: "0511bfc82fb323da715880032a830c213ad9bb0fe749ebfc8874fb5f7af36cb1"
	I1205 21:30:49.113445  346445 cri.go:89] found id: "0009e38f70c04cbeb9dc3f3f869734ea52cbc782aa28b083db1d0e40cf4b86c0"
	I1205 21:30:49.113453  346445 cri.go:89] found id: "6bd03f8bf8ed5cceb50e631c7f23ae38b2ce419e078dcce95d05a10b10c74a25"
	I1205 21:30:49.113458  346445 cri.go:89] found id: ""
	I1205 21:30:49.113508  346445 ssh_runner.go:195] Run: sudo runc list -f json

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-055769 -n kubernetes-upgrade-055769
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-055769 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-055769" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-055769
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-055769: (1.272774674s)
--- FAIL: TestKubernetesUpgrade (395.61s)

TestPause/serial/SecondStartNoReconfiguration (77.23s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-068873 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-068873 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m12.649213714s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-068873] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20053
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting "pause-068873" primary control-plane node in "pause-068873" cluster
	* Updating the running kvm2 "pause-068873" VM ...
	* Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	* Done! kubectl is now configured to use "pause-068873" cluster and "default" namespace by default

-- /stdout --
** stderr ** 
	I1205 21:28:16.478317  342771 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:28:16.478486  342771 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:28:16.478497  342771 out.go:358] Setting ErrFile to fd 2...
	I1205 21:28:16.478511  342771 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:28:16.478827  342771 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 21:28:16.479493  342771 out.go:352] Setting JSON to false
	I1205 21:28:16.480586  342771 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":15044,"bootTime":1733419052,"procs":213,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 21:28:16.480715  342771 start.go:139] virtualization: kvm guest
	I1205 21:28:16.542205  342771 out.go:177] * [pause-068873] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 21:28:16.625142  342771 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 21:28:16.625179  342771 notify.go:220] Checking for updates...
	I1205 21:28:16.718624  342771 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 21:28:16.876715  342771 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:28:16.998016  342771 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:28:17.036399  342771 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 21:28:17.056986  342771 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 21:28:17.140798  342771 config.go:182] Loaded profile config "pause-068873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:28:17.141401  342771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:28:17.141463  342771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:28:17.158053  342771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44205
	I1205 21:28:17.158676  342771 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:28:17.159306  342771 main.go:141] libmachine: Using API Version  1
	I1205 21:28:17.159335  342771 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:28:17.159753  342771 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:28:17.159982  342771 main.go:141] libmachine: (pause-068873) Calling .DriverName
	I1205 21:28:17.160364  342771 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 21:28:17.160734  342771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:28:17.160790  342771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:28:17.177841  342771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41313
	I1205 21:28:17.178439  342771 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:28:17.178976  342771 main.go:141] libmachine: Using API Version  1
	I1205 21:28:17.179002  342771 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:28:17.179369  342771 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:28:17.179598  342771 main.go:141] libmachine: (pause-068873) Calling .DriverName
	I1205 21:28:17.364664  342771 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 21:28:17.445306  342771 start.go:297] selected driver: kvm2
	I1205 21:28:17.445383  342771 start.go:901] validating driver "kvm2" against &{Name:pause-068873 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernetes
Version:v1.31.2 ClusterName:pause-068873 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.229 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvid
ia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:28:17.445598  342771 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 21:28:17.446174  342771 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:28:17.446275  342771 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 21:28:17.469079  342771 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 21:28:17.470022  342771 cni.go:84] Creating CNI manager for ""
	I1205 21:28:17.470096  342771 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:28:17.470172  342771 start.go:340] cluster config:
	{Name:pause-068873 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:pause-068873 Namespace:default APIServerHAVIP: API
ServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.229 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false
registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:28:17.470384  342771 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:28:17.625718  342771 out.go:177] * Starting "pause-068873" primary control-plane node in "pause-068873" cluster
	I1205 21:28:17.751731  342771 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:28:17.751821  342771 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 21:28:17.751833  342771 cache.go:56] Caching tarball of preloaded images
	I1205 21:28:17.751996  342771 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 21:28:17.752014  342771 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 21:28:17.752209  342771 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/pause-068873/config.json ...
	I1205 21:28:17.752498  342771 start.go:360] acquireMachinesLock for pause-068873: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:28:35.824791  342771 start.go:364] duration metric: took 18.072236831s to acquireMachinesLock for "pause-068873"
	I1205 21:28:35.824872  342771 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:28:35.824894  342771 fix.go:54] fixHost starting: 
	I1205 21:28:35.825365  342771 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:28:35.825426  342771 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:28:35.846704  342771 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41117
	I1205 21:28:35.847325  342771 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:28:35.848021  342771 main.go:141] libmachine: Using API Version  1
	I1205 21:28:35.848057  342771 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:28:35.848495  342771 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:28:35.848712  342771 main.go:141] libmachine: (pause-068873) Calling .DriverName
	I1205 21:28:35.848890  342771 main.go:141] libmachine: (pause-068873) Calling .GetState
	I1205 21:28:35.850954  342771 fix.go:112] recreateIfNeeded on pause-068873: state=Running err=<nil>
	W1205 21:28:35.850983  342771 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:28:35.852601  342771 out.go:177] * Updating the running kvm2 "pause-068873" VM ...
	I1205 21:28:35.854080  342771 machine.go:93] provisionDockerMachine start ...
	I1205 21:28:35.854118  342771 main.go:141] libmachine: (pause-068873) Calling .DriverName
	I1205 21:28:35.854414  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHHostname
	I1205 21:28:35.857894  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:35.858384  342771 main.go:141] libmachine: (pause-068873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:b3:fd", ip: ""} in network mk-pause-068873: {Iface:virbr4 ExpiryTime:2024-12-05 22:27:36 +0000 UTC Type:0 Mac:52:54:00:97:b3:fd Iaid: IPaddr:192.168.72.229 Prefix:24 Hostname:pause-068873 Clientid:01:52:54:00:97:b3:fd}
	I1205 21:28:35.858422  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined IP address 192.168.72.229 and MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:35.858667  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHPort
	I1205 21:28:35.858893  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHKeyPath
	I1205 21:28:35.859149  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHKeyPath
	I1205 21:28:35.859338  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHUsername
	I1205 21:28:35.859590  342771 main.go:141] libmachine: Using SSH client type: native
	I1205 21:28:35.859855  342771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.229 22 <nil> <nil>}
	I1205 21:28:35.859875  342771 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:28:35.974242  342771 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-068873
	
	I1205 21:28:35.974295  342771 main.go:141] libmachine: (pause-068873) Calling .GetMachineName
	I1205 21:28:35.974650  342771 buildroot.go:166] provisioning hostname "pause-068873"
	I1205 21:28:35.974685  342771 main.go:141] libmachine: (pause-068873) Calling .GetMachineName
	I1205 21:28:35.974902  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHHostname
	I1205 21:28:35.978237  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:35.980249  342771 main.go:141] libmachine: (pause-068873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:b3:fd", ip: ""} in network mk-pause-068873: {Iface:virbr4 ExpiryTime:2024-12-05 22:27:36 +0000 UTC Type:0 Mac:52:54:00:97:b3:fd Iaid: IPaddr:192.168.72.229 Prefix:24 Hostname:pause-068873 Clientid:01:52:54:00:97:b3:fd}
	I1205 21:28:35.980285  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined IP address 192.168.72.229 and MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:35.980315  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHPort
	I1205 21:28:35.980512  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHKeyPath
	I1205 21:28:35.980708  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHKeyPath
	I1205 21:28:35.980861  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHUsername
	I1205 21:28:35.981053  342771 main.go:141] libmachine: Using SSH client type: native
	I1205 21:28:35.981281  342771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.229 22 <nil> <nil>}
	I1205 21:28:35.981292  342771 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-068873 && echo "pause-068873" | sudo tee /etc/hostname
	I1205 21:28:36.115052  342771 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-068873
	
	I1205 21:28:36.115085  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHHostname
	I1205 21:28:36.118610  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:36.119062  342771 main.go:141] libmachine: (pause-068873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:b3:fd", ip: ""} in network mk-pause-068873: {Iface:virbr4 ExpiryTime:2024-12-05 22:27:36 +0000 UTC Type:0 Mac:52:54:00:97:b3:fd Iaid: IPaddr:192.168.72.229 Prefix:24 Hostname:pause-068873 Clientid:01:52:54:00:97:b3:fd}
	I1205 21:28:36.119097  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined IP address 192.168.72.229 and MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:36.119352  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHPort
	I1205 21:28:36.119562  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHKeyPath
	I1205 21:28:36.119790  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHKeyPath
	I1205 21:28:36.119956  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHUsername
	I1205 21:28:36.120178  342771 main.go:141] libmachine: Using SSH client type: native
	I1205 21:28:36.120392  342771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.229 22 <nil> <nil>}
	I1205 21:28:36.120415  342771 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-068873' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-068873/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-068873' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:28:36.235999  342771 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:28:36.236034  342771 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:28:36.236058  342771 buildroot.go:174] setting up certificates
	I1205 21:28:36.236070  342771 provision.go:84] configureAuth start
	I1205 21:28:36.236079  342771 main.go:141] libmachine: (pause-068873) Calling .GetMachineName
	I1205 21:28:36.236397  342771 main.go:141] libmachine: (pause-068873) Calling .GetIP
	I1205 21:28:36.239335  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:36.239760  342771 main.go:141] libmachine: (pause-068873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:b3:fd", ip: ""} in network mk-pause-068873: {Iface:virbr4 ExpiryTime:2024-12-05 22:27:36 +0000 UTC Type:0 Mac:52:54:00:97:b3:fd Iaid: IPaddr:192.168.72.229 Prefix:24 Hostname:pause-068873 Clientid:01:52:54:00:97:b3:fd}
	I1205 21:28:36.239789  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined IP address 192.168.72.229 and MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:36.239936  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHHostname
	I1205 21:28:36.242561  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:36.243005  342771 main.go:141] libmachine: (pause-068873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:b3:fd", ip: ""} in network mk-pause-068873: {Iface:virbr4 ExpiryTime:2024-12-05 22:27:36 +0000 UTC Type:0 Mac:52:54:00:97:b3:fd Iaid: IPaddr:192.168.72.229 Prefix:24 Hostname:pause-068873 Clientid:01:52:54:00:97:b3:fd}
	I1205 21:28:36.243051  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined IP address 192.168.72.229 and MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:36.243244  342771 provision.go:143] copyHostCerts
	I1205 21:28:36.243317  342771 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:28:36.243337  342771 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:28:36.243396  342771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:28:36.243481  342771 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:28:36.243489  342771 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:28:36.243511  342771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:28:36.243597  342771 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:28:36.243609  342771 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:28:36.243631  342771 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:28:36.243687  342771 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.pause-068873 san=[127.0.0.1 192.168.72.229 localhost minikube pause-068873]
	I1205 21:28:36.314832  342771 provision.go:177] copyRemoteCerts
	I1205 21:28:36.314893  342771 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:28:36.314921  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHHostname
	I1205 21:28:36.318224  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:36.318627  342771 main.go:141] libmachine: (pause-068873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:b3:fd", ip: ""} in network mk-pause-068873: {Iface:virbr4 ExpiryTime:2024-12-05 22:27:36 +0000 UTC Type:0 Mac:52:54:00:97:b3:fd Iaid: IPaddr:192.168.72.229 Prefix:24 Hostname:pause-068873 Clientid:01:52:54:00:97:b3:fd}
	I1205 21:28:36.318667  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined IP address 192.168.72.229 and MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:36.318836  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHPort
	I1205 21:28:36.319024  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHKeyPath
	I1205 21:28:36.319241  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHUsername
	I1205 21:28:36.319424  342771 sshutil.go:53] new ssh client: &{IP:192.168.72.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/pause-068873/id_rsa Username:docker}
	I1205 21:28:36.413443  342771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:28:36.444583  342771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1205 21:28:36.475549  342771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 21:28:36.508574  342771 provision.go:87] duration metric: took 272.486631ms to configureAuth
	I1205 21:28:36.508618  342771 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:28:36.508915  342771 config.go:182] Loaded profile config "pause-068873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:28:36.509041  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHHostname
	I1205 21:28:36.511986  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:36.512367  342771 main.go:141] libmachine: (pause-068873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:b3:fd", ip: ""} in network mk-pause-068873: {Iface:virbr4 ExpiryTime:2024-12-05 22:27:36 +0000 UTC Type:0 Mac:52:54:00:97:b3:fd Iaid: IPaddr:192.168.72.229 Prefix:24 Hostname:pause-068873 Clientid:01:52:54:00:97:b3:fd}
	I1205 21:28:36.512402  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined IP address 192.168.72.229 and MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:36.512717  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHPort
	I1205 21:28:36.512913  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHKeyPath
	I1205 21:28:36.513118  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHKeyPath
	I1205 21:28:36.513277  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHUsername
	I1205 21:28:36.513471  342771 main.go:141] libmachine: Using SSH client type: native
	I1205 21:28:36.513653  342771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.229 22 <nil> <nil>}
	I1205 21:28:36.513667  342771 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:28:42.036596  342771 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:28:42.036627  342771 machine.go:96] duration metric: took 6.182522528s to provisionDockerMachine
	I1205 21:28:42.036641  342771 start.go:293] postStartSetup for "pause-068873" (driver="kvm2")
	I1205 21:28:42.036652  342771 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:28:42.036678  342771 main.go:141] libmachine: (pause-068873) Calling .DriverName
	I1205 21:28:42.037094  342771 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:28:42.037126  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHHostname
	I1205 21:28:42.040322  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:42.040797  342771 main.go:141] libmachine: (pause-068873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:b3:fd", ip: ""} in network mk-pause-068873: {Iface:virbr4 ExpiryTime:2024-12-05 22:27:36 +0000 UTC Type:0 Mac:52:54:00:97:b3:fd Iaid: IPaddr:192.168.72.229 Prefix:24 Hostname:pause-068873 Clientid:01:52:54:00:97:b3:fd}
	I1205 21:28:42.040836  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined IP address 192.168.72.229 and MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:42.041063  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHPort
	I1205 21:28:42.041337  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHKeyPath
	I1205 21:28:42.041516  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHUsername
	I1205 21:28:42.041713  342771 sshutil.go:53] new ssh client: &{IP:192.168.72.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/pause-068873/id_rsa Username:docker}
	I1205 21:28:42.133852  342771 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:28:42.138696  342771 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:28:42.138741  342771 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:28:42.138861  342771 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:28:42.138978  342771 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:28:42.139120  342771 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:28:42.151063  342771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:28:42.179212  342771 start.go:296] duration metric: took 142.554072ms for postStartSetup
	I1205 21:28:42.179271  342771 fix.go:56] duration metric: took 6.354384429s for fixHost
	I1205 21:28:42.179301  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHHostname
	I1205 21:28:42.182481  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:42.182911  342771 main.go:141] libmachine: (pause-068873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:b3:fd", ip: ""} in network mk-pause-068873: {Iface:virbr4 ExpiryTime:2024-12-05 22:27:36 +0000 UTC Type:0 Mac:52:54:00:97:b3:fd Iaid: IPaddr:192.168.72.229 Prefix:24 Hostname:pause-068873 Clientid:01:52:54:00:97:b3:fd}
	I1205 21:28:42.182943  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined IP address 192.168.72.229 and MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:42.183143  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHPort
	I1205 21:28:42.183348  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHKeyPath
	I1205 21:28:42.183606  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHKeyPath
	I1205 21:28:42.183751  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHUsername
	I1205 21:28:42.183909  342771 main.go:141] libmachine: Using SSH client type: native
	I1205 21:28:42.184122  342771 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.229 22 <nil> <nil>}
	I1205 21:28:42.184135  342771 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:28:42.308475  342771 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434122.269713370
	
	I1205 21:28:42.308509  342771 fix.go:216] guest clock: 1733434122.269713370
	I1205 21:28:42.308520  342771 fix.go:229] Guest: 2024-12-05 21:28:42.26971337 +0000 UTC Remote: 2024-12-05 21:28:42.179277283 +0000 UTC m=+25.748388012 (delta=90.436087ms)
	I1205 21:28:42.308550  342771 fix.go:200] guest clock delta is within tolerance: 90.436087ms
	I1205 21:28:42.308558  342771 start.go:83] releasing machines lock for "pause-068873", held for 6.483713253s
	I1205 21:28:42.308583  342771 main.go:141] libmachine: (pause-068873) Calling .DriverName
	I1205 21:28:42.308855  342771 main.go:141] libmachine: (pause-068873) Calling .GetIP
	I1205 21:28:42.312477  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:42.312903  342771 main.go:141] libmachine: (pause-068873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:b3:fd", ip: ""} in network mk-pause-068873: {Iface:virbr4 ExpiryTime:2024-12-05 22:27:36 +0000 UTC Type:0 Mac:52:54:00:97:b3:fd Iaid: IPaddr:192.168.72.229 Prefix:24 Hostname:pause-068873 Clientid:01:52:54:00:97:b3:fd}
	I1205 21:28:42.312938  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined IP address 192.168.72.229 and MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:42.313144  342771 main.go:141] libmachine: (pause-068873) Calling .DriverName
	I1205 21:28:42.313838  342771 main.go:141] libmachine: (pause-068873) Calling .DriverName
	I1205 21:28:42.314095  342771 main.go:141] libmachine: (pause-068873) Calling .DriverName
	I1205 21:28:42.314220  342771 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:28:42.314281  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHHostname
	I1205 21:28:42.315288  342771 ssh_runner.go:195] Run: cat /version.json
	I1205 21:28:42.315332  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHHostname
	I1205 21:28:42.318103  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:42.318483  342771 main.go:141] libmachine: (pause-068873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:b3:fd", ip: ""} in network mk-pause-068873: {Iface:virbr4 ExpiryTime:2024-12-05 22:27:36 +0000 UTC Type:0 Mac:52:54:00:97:b3:fd Iaid: IPaddr:192.168.72.229 Prefix:24 Hostname:pause-068873 Clientid:01:52:54:00:97:b3:fd}
	I1205 21:28:42.318524  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined IP address 192.168.72.229 and MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:42.318622  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:42.318682  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHPort
	I1205 21:28:42.318876  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHKeyPath
	I1205 21:28:42.319032  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHUsername
	I1205 21:28:42.319140  342771 main.go:141] libmachine: (pause-068873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:b3:fd", ip: ""} in network mk-pause-068873: {Iface:virbr4 ExpiryTime:2024-12-05 22:27:36 +0000 UTC Type:0 Mac:52:54:00:97:b3:fd Iaid: IPaddr:192.168.72.229 Prefix:24 Hostname:pause-068873 Clientid:01:52:54:00:97:b3:fd}
	I1205 21:28:42.319167  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined IP address 192.168.72.229 and MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:42.319168  342771 sshutil.go:53] new ssh client: &{IP:192.168.72.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/pause-068873/id_rsa Username:docker}
	I1205 21:28:42.319367  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHPort
	I1205 21:28:42.319510  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHKeyPath
	I1205 21:28:42.319681  342771 main.go:141] libmachine: (pause-068873) Calling .GetSSHUsername
	I1205 21:28:42.319849  342771 sshutil.go:53] new ssh client: &{IP:192.168.72.229 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/pause-068873/id_rsa Username:docker}
	I1205 21:28:42.422089  342771 ssh_runner.go:195] Run: systemctl --version
	I1205 21:28:42.428796  342771 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:28:42.582839  342771 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:28:42.588625  342771 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:28:42.588712  342771 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:28:42.599222  342771 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1205 21:28:42.599249  342771 start.go:495] detecting cgroup driver to use...
	I1205 21:28:42.599328  342771 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:28:42.618170  342771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:28:42.632967  342771 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:28:42.633038  342771 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:28:42.647545  342771 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:28:42.662101  342771 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:28:42.794650  342771 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:28:42.925396  342771 docker.go:233] disabling docker service ...
	I1205 21:28:42.925472  342771 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:28:42.942400  342771 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:28:42.956713  342771 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:28:43.091171  342771 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:28:43.220074  342771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:28:43.234279  342771 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:28:43.254494  342771 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:28:43.254578  342771 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:28:43.264992  342771 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:28:43.265082  342771 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:28:43.275542  342771 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:28:43.290691  342771 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:28:43.301967  342771 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:28:43.316272  342771 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:28:43.327508  342771 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:28:43.341766  342771 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:28:43.352857  342771 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:28:43.363099  342771 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:28:43.373185  342771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:28:43.508650  342771 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:28:51.320880  342771 ssh_runner.go:235] Completed: sudo systemctl restart crio: (7.812174755s)
	I1205 21:28:51.320927  342771 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:28:51.320999  342771 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:28:51.326113  342771 start.go:563] Will wait 60s for crictl version
	I1205 21:28:51.326201  342771 ssh_runner.go:195] Run: which crictl
	I1205 21:28:51.330123  342771 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:28:51.363113  342771 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:28:51.363239  342771 ssh_runner.go:195] Run: crio --version
	I1205 21:28:51.392346  342771 ssh_runner.go:195] Run: crio --version
	I1205 21:28:51.423225  342771 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 21:28:51.424594  342771 main.go:141] libmachine: (pause-068873) Calling .GetIP
	I1205 21:28:51.427485  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:51.427739  342771 main.go:141] libmachine: (pause-068873) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:97:b3:fd", ip: ""} in network mk-pause-068873: {Iface:virbr4 ExpiryTime:2024-12-05 22:27:36 +0000 UTC Type:0 Mac:52:54:00:97:b3:fd Iaid: IPaddr:192.168.72.229 Prefix:24 Hostname:pause-068873 Clientid:01:52:54:00:97:b3:fd}
	I1205 21:28:51.427765  342771 main.go:141] libmachine: (pause-068873) DBG | domain pause-068873 has defined IP address 192.168.72.229 and MAC address 52:54:00:97:b3:fd in network mk-pause-068873
	I1205 21:28:51.428023  342771 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 21:28:51.432962  342771 kubeadm.go:883] updating cluster {Name:pause-068873 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:pause-068873 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.229 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:28:51.433121  342771 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:28:51.433178  342771 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:28:51.484582  342771 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 21:28:51.484608  342771 crio.go:433] Images already preloaded, skipping extraction
	I1205 21:28:51.484674  342771 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:28:51.523944  342771 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 21:28:51.523971  342771 cache_images.go:84] Images are preloaded, skipping loading
	I1205 21:28:51.523979  342771 kubeadm.go:934] updating node { 192.168.72.229 8443 v1.31.2 crio true true} ...
	I1205 21:28:51.524109  342771 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=pause-068873 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.229
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:pause-068873 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:28:51.524203  342771 ssh_runner.go:195] Run: crio config
	I1205 21:28:51.581646  342771 cni.go:84] Creating CNI manager for ""
	I1205 21:28:51.581673  342771 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:28:51.581683  342771 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:28:51.581707  342771 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.229 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-068873 NodeName:pause-068873 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.229"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.229 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:28:51.581844  342771 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.229
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-068873"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.229"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.229"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:28:51.581943  342771 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:28:51.596221  342771 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:28:51.596334  342771 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:28:51.609442  342771 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I1205 21:28:51.628095  342771 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:28:51.649262  342771 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I1205 21:28:51.668094  342771 ssh_runner.go:195] Run: grep 192.168.72.229	control-plane.minikube.internal$ /etc/hosts
	I1205 21:28:51.672339  342771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:28:51.813805  342771 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:28:51.829038  342771 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/pause-068873 for IP: 192.168.72.229
	I1205 21:28:51.829064  342771 certs.go:194] generating shared ca certs ...
	I1205 21:28:51.829080  342771 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:28:51.829252  342771 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:28:51.829298  342771 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:28:51.829309  342771 certs.go:256] generating profile certs ...
	I1205 21:28:51.829384  342771 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/pause-068873/client.key
	I1205 21:28:51.829448  342771 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/pause-068873/apiserver.key.2aeb6737
	I1205 21:28:51.829482  342771 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/pause-068873/proxy-client.key
	I1205 21:28:51.829598  342771 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:28:51.829628  342771 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:28:51.829638  342771 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:28:51.829659  342771 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:28:51.829684  342771 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:28:51.829705  342771 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:28:51.829741  342771 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:28:51.830666  342771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:28:51.858826  342771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:28:51.887710  342771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:28:51.915414  342771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:28:51.948006  342771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/pause-068873/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1205 21:28:51.978594  342771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/pause-068873/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 21:28:52.004139  342771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/pause-068873/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:28:52.034637  342771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/pause-068873/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 21:28:52.064372  342771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:28:52.089656  342771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:28:52.116999  342771 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:28:52.147896  342771 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:28:52.172186  342771 ssh_runner.go:195] Run: openssl version
	I1205 21:28:52.180336  342771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:28:52.195492  342771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:28:52.201029  342771 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:28:52.201126  342771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:28:52.207976  342771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:28:52.217883  342771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:28:52.232861  342771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:28:52.238982  342771 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:28:52.239067  342771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:28:52.245655  342771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:28:52.256770  342771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:28:52.268593  342771 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:28:52.275056  342771 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:28:52.275146  342771 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:28:52.283578  342771 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:28:52.297925  342771 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:28:52.318693  342771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:28:52.375595  342771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:28:52.423259  342771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:28:52.440229  342771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:28:52.501633  342771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:28:52.562810  342771 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 21:28:52.594277  342771 kubeadm.go:392] StartCluster: {Name:pause-068873 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2048 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:pause-068873 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.229 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:28:52.594494  342771 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:28:52.594572  342771 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:28:52.809539  342771 cri.go:89] found id: "47b85bb23ec9b1c69ca73efd49c36774e0b016015008fba6e884c4bc0eea3ebc"
	I1205 21:28:52.809573  342771 cri.go:89] found id: "3892d8954e0a332b3308223476fb5fd6d532234078eaeaf648642e6f90186146"
	I1205 21:28:52.809579  342771 cri.go:89] found id: "50723417a352ccc734226324a6fb0a86955746b09371c86152269d21195d3c72"
	I1205 21:28:52.809583  342771 cri.go:89] found id: "7840b15b71bc4fad4766f07db6b85e2ff1e23a4b5ebe153a2f05a41eac9be438"
	I1205 21:28:52.809588  342771 cri.go:89] found id: "33598dc3a27d61d45fef380a7941d9016fb0719242596bc89baa5e012caf640b"
	I1205 21:28:52.809592  342771 cri.go:89] found id: "abc320fb5084fbc8f5b33426aae524bc7f8d9c3db6b40c6f32a071948d72f477"
	I1205 21:28:52.809596  342771 cri.go:89] found id: ""
	I1205 21:28:52.809664  342771 ssh_runner.go:195] Run: sudo runc list -f json

                                                
                                                
** /stderr **
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-068873 -n pause-068873
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-068873 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-068873 logs -n 25: (1.38055739s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-279893 sudo cat                              | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo cat                              | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo systemctl                        | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC |                     |
	|         | status docker --all --full                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo systemctl                        | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | cat docker --no-pager                                |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo cat                              | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo docker                           | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo systemctl                        | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC |                     |
	|         | status cri-docker --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo systemctl                        | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | cat cri-docker --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo cat                              | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo cat                              | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-055769                         | kubernetes-upgrade-055769 | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	| ssh     | -p auto-279893 sudo                                  | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo systemctl                        | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC |                     |
	|         | status containerd --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo systemctl                        | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | cat containerd --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo cat                              | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo cat                              | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo containerd                       | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | config dump                                          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-055769                         | kubernetes-upgrade-055769 | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo systemctl                        | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | status crio --all --full                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo systemctl                        | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | cat crio --no-pager                                  |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo find                             | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo crio                             | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p auto-279893                                       | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	| start   | -p calico-279893 --memory=3072                       | calico-279893             | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p kindnet-279893 pgrep -a                           | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 21:29:13
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 21:29:13.346989  344606 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:29:13.347166  344606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:29:13.347184  344606 out.go:358] Setting ErrFile to fd 2...
	I1205 21:29:13.347192  344606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:29:13.347480  344606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 21:29:13.348258  344606 out.go:352] Setting JSON to false
	I1205 21:29:13.349478  344606 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":15101,"bootTime":1733419052,"procs":308,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 21:29:13.349611  344606 start.go:139] virtualization: kvm guest
	I1205 21:29:13.351815  344606 out.go:177] * [calico-279893] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 21:29:13.353637  344606 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 21:29:13.353680  344606 notify.go:220] Checking for updates...
	I1205 21:29:13.355200  344606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 21:29:13.356646  344606 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:29:13.357998  344606 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:29:13.359512  344606 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 21:29:13.360868  344606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 21:29:13.363007  344606 config.go:182] Loaded profile config "kindnet-279893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:29:13.363206  344606 config.go:182] Loaded profile config "kubernetes-upgrade-055769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:29:13.363415  344606 config.go:182] Loaded profile config "pause-068873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:29:13.363597  344606 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 21:29:13.410235  344606 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 21:29:13.411520  344606 start.go:297] selected driver: kvm2
	I1205 21:29:13.411545  344606 start.go:901] validating driver "kvm2" against <nil>
	I1205 21:29:13.411566  344606 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 21:29:13.412423  344606 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:29:13.412529  344606 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 21:29:13.432563  344606 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 21:29:13.432623  344606 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 21:29:13.432840  344606 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:29:13.432875  344606 cni.go:84] Creating CNI manager for "calico"
	I1205 21:29:13.432880  344606 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1205 21:29:13.432924  344606 start.go:340] cluster config:
	{Name:calico-279893 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-279893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:29:13.433031  344606 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:29:13.438298  344606 out.go:177] * Starting "calico-279893" primary control-plane node in "calico-279893" cluster
	I1205 21:29:11.282348  342291 node_ready.go:53] node "kindnet-279893" has status "Ready":"False"
	I1205 21:29:12.286344  342291 node_ready.go:49] node "kindnet-279893" has status "Ready":"True"
	I1205 21:29:12.286375  342291 node_ready.go:38] duration metric: took 14.508260441s for node "kindnet-279893" to be "Ready" ...
	I1205 21:29:12.286389  342291 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:29:12.306219  342291 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-6w5gs" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:13.814329  342291 pod_ready.go:93] pod "coredns-7c65d6cfc9-6w5gs" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:13.814370  342291 pod_ready.go:82] duration metric: took 1.508109102s for pod "coredns-7c65d6cfc9-6w5gs" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:13.814386  342291 pod_ready.go:79] waiting up to 15m0s for pod "etcd-kindnet-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:13.820608  342291 pod_ready.go:93] pod "etcd-kindnet-279893" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:13.820648  342291 pod_ready.go:82] duration metric: took 6.251844ms for pod "etcd-kindnet-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:13.820667  342291 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-kindnet-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:13.826894  342291 pod_ready.go:93] pod "kube-apiserver-kindnet-279893" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:13.826931  342291 pod_ready.go:82] duration metric: took 6.251271ms for pod "kube-apiserver-kindnet-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:13.826948  342291 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-kindnet-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:13.833177  342291 pod_ready.go:93] pod "kube-controller-manager-kindnet-279893" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:13.833210  342291 pod_ready.go:82] duration metric: took 6.253076ms for pod "kube-controller-manager-kindnet-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:13.833223  342291 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-bpf8v" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:13.883951  342291 pod_ready.go:93] pod "kube-proxy-bpf8v" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:13.883984  342291 pod_ready.go:82] duration metric: took 50.752128ms for pod "kube-proxy-bpf8v" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:13.884001  342291 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-kindnet-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:14.281848  342291 pod_ready.go:93] pod "kube-scheduler-kindnet-279893" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:14.281876  342291 pod_ready.go:82] duration metric: took 397.868078ms for pod "kube-scheduler-kindnet-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:14.281894  342291 pod_ready.go:39] duration metric: took 1.995484315s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:29:14.281941  342291 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:29:14.282005  342291 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:29:14.297888  342291 api_server.go:72] duration metric: took 16.809082703s to wait for apiserver process to appear ...
	I1205 21:29:14.297945  342291 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:29:14.297976  342291 api_server.go:253] Checking apiserver healthz at https://192.168.61.132:8443/healthz ...
	I1205 21:29:14.303633  342291 api_server.go:279] https://192.168.61.132:8443/healthz returned 200:
	ok
	I1205 21:29:14.304788  342291 api_server.go:141] control plane version: v1.31.2
	I1205 21:29:14.304814  342291 api_server.go:131] duration metric: took 6.86096ms to wait for apiserver health ...
	I1205 21:29:14.304823  342291 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:29:14.485746  342291 system_pods.go:59] 8 kube-system pods found
	I1205 21:29:14.485781  342291 system_pods.go:61] "coredns-7c65d6cfc9-6w5gs" [81c4cdd2-0091-49da-a8b3-dd618c2f87d5] Running
	I1205 21:29:14.485787  342291 system_pods.go:61] "etcd-kindnet-279893" [dc826fbf-9a3f-42cd-9fe0-e8a0bc7ffc39] Running
	I1205 21:29:14.485790  342291 system_pods.go:61] "kindnet-jrhgp" [b726cba5-e0b6-4787-9c47-e0d3b5a92ff5] Running
	I1205 21:29:14.485794  342291 system_pods.go:61] "kube-apiserver-kindnet-279893" [e3732181-c6bd-44ba-860e-441030f93961] Running
	I1205 21:29:14.485798  342291 system_pods.go:61] "kube-controller-manager-kindnet-279893" [25586fef-1f1e-416f-8b71-567855c665fb] Running
	I1205 21:29:14.485802  342291 system_pods.go:61] "kube-proxy-bpf8v" [d4b2a289-2449-43bc-92a3-1cd3c5b44693] Running
	I1205 21:29:14.485805  342291 system_pods.go:61] "kube-scheduler-kindnet-279893" [e9ceeccc-ad5b-4c63-8389-ac5bc94f20f6] Running
	I1205 21:29:14.485808  342291 system_pods.go:61] "storage-provisioner" [a15924b6-a2fd-4ecc-8e9a-10a3b15f8b54] Running
	I1205 21:29:14.485815  342291 system_pods.go:74] duration metric: took 180.985521ms to wait for pod list to return data ...
	I1205 21:29:14.485823  342291 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:29:14.682961  342291 default_sa.go:45] found service account: "default"
	I1205 21:29:14.682994  342291 default_sa.go:55] duration metric: took 197.164ms for default service account to be created ...
	I1205 21:29:14.683005  342291 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:29:14.883824  342291 system_pods.go:86] 8 kube-system pods found
	I1205 21:29:14.883858  342291 system_pods.go:89] "coredns-7c65d6cfc9-6w5gs" [81c4cdd2-0091-49da-a8b3-dd618c2f87d5] Running
	I1205 21:29:14.883864  342291 system_pods.go:89] "etcd-kindnet-279893" [dc826fbf-9a3f-42cd-9fe0-e8a0bc7ffc39] Running
	I1205 21:29:14.883868  342291 system_pods.go:89] "kindnet-jrhgp" [b726cba5-e0b6-4787-9c47-e0d3b5a92ff5] Running
	I1205 21:29:14.883875  342291 system_pods.go:89] "kube-apiserver-kindnet-279893" [e3732181-c6bd-44ba-860e-441030f93961] Running
	I1205 21:29:14.883880  342291 system_pods.go:89] "kube-controller-manager-kindnet-279893" [25586fef-1f1e-416f-8b71-567855c665fb] Running
	I1205 21:29:14.883885  342291 system_pods.go:89] "kube-proxy-bpf8v" [d4b2a289-2449-43bc-92a3-1cd3c5b44693] Running
	I1205 21:29:14.883890  342291 system_pods.go:89] "kube-scheduler-kindnet-279893" [e9ceeccc-ad5b-4c63-8389-ac5bc94f20f6] Running
	I1205 21:29:14.883895  342291 system_pods.go:89] "storage-provisioner" [a15924b6-a2fd-4ecc-8e9a-10a3b15f8b54] Running
	I1205 21:29:14.883905  342291 system_pods.go:126] duration metric: took 200.892911ms to wait for k8s-apps to be running ...
	I1205 21:29:14.883918  342291 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:29:14.883981  342291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:29:14.905337  342291 system_svc.go:56] duration metric: took 21.407039ms WaitForService to wait for kubelet
	I1205 21:29:14.905371  342291 kubeadm.go:582] duration metric: took 17.416576405s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:29:14.905392  342291 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:29:15.083144  342291 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:29:15.083175  342291 node_conditions.go:123] node cpu capacity is 2
	I1205 21:29:15.083187  342291 node_conditions.go:105] duration metric: took 177.789262ms to run NodePressure ...
	I1205 21:29:15.083200  342291 start.go:241] waiting for startup goroutines ...
	I1205 21:29:15.083208  342291 start.go:246] waiting for cluster config update ...
	I1205 21:29:15.083221  342291 start.go:255] writing updated cluster config ...
	I1205 21:29:15.083549  342291 ssh_runner.go:195] Run: rm -f paused
	I1205 21:29:15.140099  342291 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:29:15.142153  342291 out.go:177] * Done! kubectl is now configured to use "kindnet-279893" cluster and "default" namespace by default
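	[editor note] The kindnet-279893 thread above only reports "Done!" after probing the apiserver's /healthz endpoint over HTTPS and seeing an HTTP 200 response with the body "ok". Below is a minimal Go sketch of that kind of poll; it is not minikube's implementation, and the hard-coded URL and the use of InsecureSkipVerify (instead of loading the cluster CA) are illustrative assumptions.

	// healthzprobe.go: sketch of polling an apiserver /healthz endpoint until healthy.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// The apiserver serves a cluster-internal certificate; a real client
			// would load the cluster CA instead of skipping verification.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // healthy: /healthz returned 200 "ok"
				}
			}
			time.Sleep(500 * time.Millisecond) // brief pause between attempts
		}
		return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
	}

	func main() {
		// Endpoint taken from the log above; adjust for your cluster.
		if err := waitForHealthz("https://192.168.61.132:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("control plane is healthy")
	}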
	I1205 21:29:11.135922  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .Start
	I1205 21:29:11.137174  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Ensuring networks are active...
	I1205 21:29:11.137193  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Ensuring network default is active
	I1205 21:29:11.137583  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Ensuring network mk-kubernetes-upgrade-055769 is active
	I1205 21:29:11.138154  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Getting domain xml...
	I1205 21:29:11.138939  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Creating domain...
	I1205 21:29:13.175451  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Waiting to get IP...
	I1205 21:29:13.176326  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:13.176808  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:13.176867  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:13.176783  344370 retry.go:31] will retry after 242.423143ms: waiting for machine to come up
	I1205 21:29:13.421441  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:13.422097  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:13.422142  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:13.422084  344370 retry.go:31] will retry after 259.314158ms: waiting for machine to come up
	I1205 21:29:13.683390  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:13.684339  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:13.684374  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:13.684278  344370 retry.go:31] will retry after 367.110434ms: waiting for machine to come up
	I1205 21:29:14.053029  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:14.053632  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:14.053661  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:14.053553  344370 retry.go:31] will retry after 389.382342ms: waiting for machine to come up
	I1205 21:29:14.444074  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:14.444585  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:14.444616  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:14.444533  344370 retry.go:31] will retry after 468.986078ms: waiting for machine to come up
	I1205 21:29:14.915044  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:14.915640  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:14.915663  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:14.915607  344370 retry.go:31] will retry after 637.563189ms: waiting for machine to come up
	I1205 21:29:15.554622  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:15.555092  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:15.555120  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:15.555034  344370 retry.go:31] will retry after 746.63641ms: waiting for machine to come up
	I1205 21:29:13.096041  342771 pod_ready.go:103] pod "coredns-7c65d6cfc9-m89x5" in "kube-system" namespace has status "Ready":"False"
	I1205 21:29:15.596211  342771 pod_ready.go:103] pod "coredns-7c65d6cfc9-m89x5" in "kube-system" namespace has status "Ready":"False"
	I1205 21:29:13.439553  344606 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:29:13.439614  344606 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 21:29:13.439624  344606 cache.go:56] Caching tarball of preloaded images
	I1205 21:29:13.439751  344606 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 21:29:13.439766  344606 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 21:29:13.439861  344606 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/config.json ...
	I1205 21:29:13.439877  344606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/config.json: {Name:mkdd4eb8bb0e43c9d03e1afaa7e64a727b7bf7aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:29:13.440043  344606 start.go:360] acquireMachinesLock for calico-279893: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
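	[editor note] Before creating the calico-279893 machine, the thread above checks whether the preloaded image tarball for v1.31.2 on crio already exists in the local cache and skips the download when it does. The Go sketch below illustrates that "found local preload, skipping download" decision under assumed cache paths and naming; it is not minikube's preload package, and the download step is deliberately left out.

	// preloadcache.go: sketch of a check-cache-before-download decision.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
	)

	// preloadTarball builds the expected cache location, mirroring the
	// version/runtime naming visible in the log above (layout is assumed).
	func preloadTarball(cacheDir, k8sVersion, runtime string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay-amd64.tar.lz4", k8sVersion, runtime)
		return filepath.Join(cacheDir, "preloaded-tarball", name)
	}

	func ensurePreload(cacheDir, k8sVersion, runtime string) (string, error) {
		path := preloadTarball(cacheDir, k8sVersion, runtime)
		if _, err := os.Stat(path); err == nil {
			return path, nil // already cached: skip the download entirely
		} else if !os.IsNotExist(err) {
			return "", err
		}
		// Placeholder: a real implementation would fetch the tarball here.
		return "", fmt.Errorf("preload %s not cached; downloading is out of scope in this sketch", path)
	}

	func main() {
		p, err := ensurePreload(os.ExpandEnv("$HOME/.minikube/cache"), "v1.31.2", "cri-o")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("using cached preload:", p)
	}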
	I1205 21:29:16.303746  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:16.304213  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:16.304242  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:16.304149  344370 retry.go:31] will retry after 1.031493653s: waiting for machine to come up
	I1205 21:29:17.337498  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:17.338099  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:17.338134  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:17.338027  344370 retry.go:31] will retry after 1.804164493s: waiting for machine to come up
	I1205 21:29:19.144284  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:19.144779  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:19.144802  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:19.144751  344370 retry.go:31] will retry after 1.851829535s: waiting for machine to come up
	I1205 21:29:17.599796  342771 pod_ready.go:103] pod "coredns-7c65d6cfc9-m89x5" in "kube-system" namespace has status "Ready":"False"
	I1205 21:29:20.096418  342771 pod_ready.go:103] pod "coredns-7c65d6cfc9-m89x5" in "kube-system" namespace has status "Ready":"False"
	I1205 21:29:21.097335  342771 pod_ready.go:93] pod "coredns-7c65d6cfc9-m89x5" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:21.097368  342771 pod_ready.go:82] duration metric: took 10.008912289s for pod "coredns-7c65d6cfc9-m89x5" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:21.097382  342771 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:23.105519  342771 pod_ready.go:103] pod "etcd-pause-068873" in "kube-system" namespace has status "Ready":"False"
	I1205 21:29:24.606197  342771 pod_ready.go:93] pod "etcd-pause-068873" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:24.606223  342771 pod_ready.go:82] duration metric: took 3.508834127s for pod "etcd-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:24.606235  342771 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:25.613479  342771 pod_ready.go:93] pod "kube-apiserver-pause-068873" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:25.613507  342771 pod_ready.go:82] duration metric: took 1.007263198s for pod "kube-apiserver-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:25.613530  342771 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:25.619033  342771 pod_ready.go:93] pod "kube-controller-manager-pause-068873" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:25.619074  342771 pod_ready.go:82] duration metric: took 5.522028ms for pod "kube-controller-manager-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:25.619084  342771 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-h8984" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:25.625431  342771 pod_ready.go:93] pod "kube-proxy-h8984" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:25.625457  342771 pod_ready.go:82] duration metric: took 6.366736ms for pod "kube-proxy-h8984" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:25.625467  342771 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:25.630379  342771 pod_ready.go:93] pod "kube-scheduler-pause-068873" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:25.630405  342771 pod_ready.go:82] duration metric: took 4.931949ms for pod "kube-scheduler-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:25.630414  342771 pod_ready.go:39] duration metric: took 14.547574761s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:29:25.630434  342771 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:29:25.643159  342771 ops.go:34] apiserver oom_adj: -16
	I1205 21:29:25.643207  342771 kubeadm.go:597] duration metric: took 32.568333804s to restartPrimaryControlPlane
	I1205 21:29:25.643232  342771 kubeadm.go:394] duration metric: took 33.048970815s to StartCluster
	I1205 21:29:25.643256  342771 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:29:25.643381  342771 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:29:25.644743  342771 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:29:25.645077  342771 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.229 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:29:25.645213  342771 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:29:25.645411  342771 config.go:182] Loaded profile config "pause-068873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:29:25.646620  342771 out.go:177] * Verifying Kubernetes components...
	I1205 21:29:25.646620  342771 out.go:177] * Enabled addons: 
	I1205 21:29:20.998849  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:20.999306  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:20.999335  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:20.999245  344370 retry.go:31] will retry after 2.816150427s: waiting for machine to come up
	I1205 21:29:23.818788  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:23.819349  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:23.819384  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:23.819322  344370 retry.go:31] will retry after 2.432839332s: waiting for machine to come up
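	[editor note] The kubernetes-upgrade-055769 thread above repeatedly asks libvirt for the domain's IP and, as the "will retry after …" lines show, sleeps a growing, slightly jittered delay between attempts. The Go sketch below is a generic retry loop in that spirit; the lookup function, delay schedule, and addresses are illustrative assumptions, not minikube's retry package.

	// machinewait.go: sketch of waiting for a VM's IP with growing, jittered retries.
	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	var errNoLease = errors.New("no DHCP lease yet")

	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 250 * time.Millisecond
		for attempt := 1; time.Now().Before(deadline); attempt++ {
			ip, err := lookup()
			if err == nil {
				return ip, nil
			}
			// Grow the wait with a little jitter, as the logged delays do.
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("attempt %d: %v; will retry after %s\n", attempt, err, wait)
			time.Sleep(wait)
			delay = delay * 3 / 2
		}
		return "", fmt.Errorf("machine did not get an IP within %s", timeout)
	}

	func main() {
		start := time.Now()
		// The closure stands in for querying libvirt for the domain's lease.
		ip, err := waitForIP(func() (string, error) {
			if time.Since(start) < 3*time.Second {
				return "", errNoLease
			}
			return "192.168.50.10", nil // stand-in address once the lease appears
		}, 30*time.Second)
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("machine is up at", ip)
	}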
	I1205 21:29:25.648118  342771 addons.go:510] duration metric: took 2.923113ms for enable addons: enabled=[]
	I1205 21:29:25.648170  342771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:29:25.804448  342771 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:29:25.820232  342771 node_ready.go:35] waiting up to 6m0s for node "pause-068873" to be "Ready" ...
	I1205 21:29:25.824092  342771 node_ready.go:49] node "pause-068873" has status "Ready":"True"
	I1205 21:29:25.824116  342771 node_ready.go:38] duration metric: took 3.83704ms for node "pause-068873" to be "Ready" ...
	I1205 21:29:25.824127  342771 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:29:25.829736  342771 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m89x5" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:26.209556  342771 pod_ready.go:93] pod "coredns-7c65d6cfc9-m89x5" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:26.209585  342771 pod_ready.go:82] duration metric: took 379.819812ms for pod "coredns-7c65d6cfc9-m89x5" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:26.209597  342771 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:26.602993  342771 pod_ready.go:93] pod "etcd-pause-068873" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:26.603021  342771 pod_ready.go:82] duration metric: took 393.417802ms for pod "etcd-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:26.603032  342771 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:27.003490  342771 pod_ready.go:93] pod "kube-apiserver-pause-068873" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:27.003517  342771 pod_ready.go:82] duration metric: took 400.478382ms for pod "kube-apiserver-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:27.003529  342771 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:27.402100  342771 pod_ready.go:93] pod "kube-controller-manager-pause-068873" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:27.402130  342771 pod_ready.go:82] duration metric: took 398.594388ms for pod "kube-controller-manager-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:27.402144  342771 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h8984" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:27.802503  342771 pod_ready.go:93] pod "kube-proxy-h8984" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:27.802543  342771 pod_ready.go:82] duration metric: took 400.390733ms for pod "kube-proxy-h8984" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:27.802566  342771 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:28.202435  342771 pod_ready.go:93] pod "kube-scheduler-pause-068873" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:28.202468  342771 pod_ready.go:82] duration metric: took 399.894419ms for pod "kube-scheduler-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:28.202476  342771 pod_ready.go:39] duration metric: took 2.378340445s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:29:28.202496  342771 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:29:28.202580  342771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:29:28.216817  342771 api_server.go:72] duration metric: took 2.571692582s to wait for apiserver process to appear ...
	I1205 21:29:28.216850  342771 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:29:28.216874  342771 api_server.go:253] Checking apiserver healthz at https://192.168.72.229:8443/healthz ...
	I1205 21:29:28.222417  342771 api_server.go:279] https://192.168.72.229:8443/healthz returned 200:
	ok
	I1205 21:29:28.223507  342771 api_server.go:141] control plane version: v1.31.2
	I1205 21:29:28.223532  342771 api_server.go:131] duration metric: took 6.674111ms to wait for apiserver health ...
	I1205 21:29:28.223543  342771 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:29:28.404100  342771 system_pods.go:59] 6 kube-system pods found
	I1205 21:29:28.404137  342771 system_pods.go:61] "coredns-7c65d6cfc9-m89x5" [6ab80f4e-1848-432c-894c-213567ce8fe3] Running
	I1205 21:29:28.404142  342771 system_pods.go:61] "etcd-pause-068873" [ef399741-5a7b-476c-9466-c348716eed83] Running
	I1205 21:29:28.404146  342771 system_pods.go:61] "kube-apiserver-pause-068873" [955b5726-2210-4c67-ada5-795e2a94e3f9] Running
	I1205 21:29:28.404150  342771 system_pods.go:61] "kube-controller-manager-pause-068873" [a702f959-f593-4357-906e-430904da248d] Running
	I1205 21:29:28.404155  342771 system_pods.go:61] "kube-proxy-h8984" [49532404-faec-41e0-8b53-c750a91316a2] Running
	I1205 21:29:28.404158  342771 system_pods.go:61] "kube-scheduler-pause-068873" [dca4fc15-543a-4050-8084-0a9aa96aa4ea] Running
	I1205 21:29:28.404165  342771 system_pods.go:74] duration metric: took 180.613954ms to wait for pod list to return data ...
	I1205 21:29:28.404174  342771 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:29:28.602033  342771 default_sa.go:45] found service account: "default"
	I1205 21:29:28.602063  342771 default_sa.go:55] duration metric: took 197.882866ms for default service account to be created ...
	I1205 21:29:28.602074  342771 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:29:28.805250  342771 system_pods.go:86] 6 kube-system pods found
	I1205 21:29:28.805286  342771 system_pods.go:89] "coredns-7c65d6cfc9-m89x5" [6ab80f4e-1848-432c-894c-213567ce8fe3] Running
	I1205 21:29:28.805292  342771 system_pods.go:89] "etcd-pause-068873" [ef399741-5a7b-476c-9466-c348716eed83] Running
	I1205 21:29:28.805296  342771 system_pods.go:89] "kube-apiserver-pause-068873" [955b5726-2210-4c67-ada5-795e2a94e3f9] Running
	I1205 21:29:28.805300  342771 system_pods.go:89] "kube-controller-manager-pause-068873" [a702f959-f593-4357-906e-430904da248d] Running
	I1205 21:29:28.805304  342771 system_pods.go:89] "kube-proxy-h8984" [49532404-faec-41e0-8b53-c750a91316a2] Running
	I1205 21:29:28.805308  342771 system_pods.go:89] "kube-scheduler-pause-068873" [dca4fc15-543a-4050-8084-0a9aa96aa4ea] Running
	I1205 21:29:28.805316  342771 system_pods.go:126] duration metric: took 203.235929ms to wait for k8s-apps to be running ...
	I1205 21:29:28.805323  342771 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:29:28.805378  342771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:29:28.820125  342771 system_svc.go:56] duration metric: took 14.78763ms WaitForService to wait for kubelet
	I1205 21:29:28.820161  342771 kubeadm.go:582] duration metric: took 3.17504431s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:29:28.820181  342771 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:29:29.002446  342771 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:29:29.002474  342771 node_conditions.go:123] node cpu capacity is 2
	I1205 21:29:29.002488  342771 node_conditions.go:105] duration metric: took 182.302209ms to run NodePressure ...
	I1205 21:29:29.002502  342771 start.go:241] waiting for startup goroutines ...
	I1205 21:29:29.002509  342771 start.go:246] waiting for cluster config update ...
	I1205 21:29:29.002515  342771 start.go:255] writing updated cluster config ...
	I1205 21:29:29.002814  342771 ssh_runner.go:195] Run: rm -f paused
	I1205 21:29:29.056772  342771 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:29:29.058746  342771 out.go:177] * Done! kubectl is now configured to use "pause-068873" cluster and "default" namespace by default
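	[editor note] The pause-068873 restart above completes only after every control-plane pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) reports a Ready condition. An equivalent manual check can be expressed with kubectl; the Go sketch below simply shells out to `kubectl wait` for each pod and is an illustrative simplification, not how minikube itself performs the wait.

	// podwait.go: sketch of waiting for named kube-system pods to become Ready via kubectl.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func waitReady(context, namespace, pod, timeout string) error {
		cmd := exec.Command("kubectl", "--context", context,
			"wait", "--for=condition=Ready", "pod/"+pod,
			"-n", namespace, "--timeout="+timeout)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		return cmd.Run()
	}

	func main() {
		// Pod names taken from the log above.
		pods := []string{
			"etcd-pause-068873",
			"kube-apiserver-pause-068873",
			"kube-controller-manager-pause-068873",
			"kube-scheduler-pause-068873",
		}
		for _, p := range pods {
			if err := waitReady("pause-068873", "kube-system", p, "6m"); err != nil {
				fmt.Println("pod not ready:", p, err)
				os.Exit(1)
			}
		}
		fmt.Println("all control-plane pods are Ready")
	}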
	
	
	==> CRI-O <==
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.722690246Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733434169722668550,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae022946-d32c-419f-844d-2bc04a837922 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.723227603Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=697dfe0f-a51f-4f46-8518-d4cee4ed088b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.723292626Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=697dfe0f-a51f-4f46-8518-d4cee4ed088b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.723531861Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:900d051beb4b3a11d2c7411b4584d9835ed1d70e9dad87c0fc4c63772fd7c952,PodSandboxId:8f6a566611e8e4e29d688480f14d61140e5f2c61b5763fd3081fcd42d68ae43e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733434146547623998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d370492f1798b8ef95dcf4d25f3b7822,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3185656b82ff3ab93b94ab9f18a3e35d93c8126b9947c8545d8597d3e195287c,PodSandboxId:a9a7b1e38000ab90e23050564aa27bc937cf5433f17ed36390409d059df9e876,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733434146530948019,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a91f119bc5b1d26a4eebc093c893c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6529c21096477198df6ea8c87227fc1058f6dbd903460837c80b85096c8de375,PodSandboxId:b3e1dc6d7a1bd19e1f2bec85519d33b147313a40eae884c48c03b31252700e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733434146506426131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50860d75af521a7befd511acd7d7a982,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ff7b87acb7e28b3b08f6a068e5b2ad715e84b8e5299cf646493a6679127570,PodSandboxId:742c682aba1f3afa0eb6c9bd517699292e943661b79811caf15685dd300293a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733434146490207927,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf514bd8f9b21a7f8098a2aa18e8cb14,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a264dbde443d01f8aac6dbf1b5a5bc8be55f24bc7b620d83df568162568d87e3,PodSandboxId:235cd1baa68378fdc3b42b8732accd478cf9be77908f67bea7fc2f7f7b797864,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733434133541697951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m89x5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab80f4e-1848-432c-894c-213567ce8fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0c79898bdc6f355ad527160dd5503e0036465e0d840b040c0b4bb8d0700811d,PodSandboxId:78eda860655e08e7aec83c8fb60fe9ac39346afc651d9d2c6423eb94bbfdc0f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733434132839377072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49532404-faec-41e0-8b53-c750a91316a2,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e5d1d60f086d32798fa3df1e5c9927d206565ed86871e1751c8692cd4952dc8,PodSandboxId:742c682aba1f3afa0eb6c9bd517699292e943661b79811caf15685dd300293a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733434132763087760,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf514bd8f9b21a7f8098a2aa18e8cb14,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dea54f3f077c4172cb2b439c4b5ad262b0b9c13ef0822feb2a281f680a745a27,PodSandboxId:8f6a566611e8e4e29d688480f14d61140e5f2c61b5763fd3081fcd42d68ae43e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733434132760325518,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d370492f1798b8ef95dcf4d25f3b7822,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85815ac26a7aed256a15fd57f62ca745ed7ef46f0f47afcde087af82f2abad8d,PodSandboxId:a9a7b1e38000ab90e23050564aa27bc937cf5433f17ed36390409d059df9e876,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733434132653591617,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a91f119bc5b1d26a4eebc093c893c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dc087a55ad29c8cd45df03a1adf6319c36abaada5002a4c7d26ab651bb65860,PodSandboxId:b3e1dc6d7a1bd19e1f2bec85519d33b147313a40eae884c48c03b31252700e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733434132614498298,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50860d75af521a7befd511acd7d7a982,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b85bb23ec9b1c69ca73efd49c36774e0b016015008fba6e884c4bc0eea3ebc,PodSandboxId:7ffff40b1fb85807a5f596156ff2ec6ea1ad4a8dfd704fd7a3949d0ea30e9084,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733434087232651883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m89x5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab80f4e-1848-432c-894c-213567ce8fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3892d8954e0a332b3308223476fb5fd6d532234078eaeaf648642e6f90186146,PodSandboxId:de24ba68f22e4f767cc25b3429d039ebefceadaf41f56b01c4e4e79f30721f80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733434087033610961,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8984,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 49532404-faec-41e0-8b53-c750a91316a2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=697dfe0f-a51f-4f46-8518-d4cee4ed088b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.763527477Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e1f1533a-e9f1-4fc9-9dbc-8ec9efa15c0b name=/runtime.v1.RuntimeService/Version
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.763604337Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e1f1533a-e9f1-4fc9-9dbc-8ec9efa15c0b name=/runtime.v1.RuntimeService/Version
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.764827497Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=de255faf-7726-4f21-b50b-ea16e6557fc8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.765336155Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733434169765309716,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de255faf-7726-4f21-b50b-ea16e6557fc8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.766123048Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=23eb36f0-8c63-4459-8157-a1c612d54c20 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.766203459Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=23eb36f0-8c63-4459-8157-a1c612d54c20 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.766465651Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:900d051beb4b3a11d2c7411b4584d9835ed1d70e9dad87c0fc4c63772fd7c952,PodSandboxId:8f6a566611e8e4e29d688480f14d61140e5f2c61b5763fd3081fcd42d68ae43e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733434146547623998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d370492f1798b8ef95dcf4d25f3b7822,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3185656b82ff3ab93b94ab9f18a3e35d93c8126b9947c8545d8597d3e195287c,PodSandboxId:a9a7b1e38000ab90e23050564aa27bc937cf5433f17ed36390409d059df9e876,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733434146530948019,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a91f119bc5b1d26a4eebc093c893c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6529c21096477198df6ea8c87227fc1058f6dbd903460837c80b85096c8de375,PodSandboxId:b3e1dc6d7a1bd19e1f2bec85519d33b147313a40eae884c48c03b31252700e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733434146506426131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50860d75af521a7befd511acd7d7a982,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ff7b87acb7e28b3b08f6a068e5b2ad715e84b8e5299cf646493a6679127570,PodSandboxId:742c682aba1f3afa0eb6c9bd517699292e943661b79811caf15685dd300293a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733434146490207927,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf514bd8f9b21a7f8098a2aa18e8cb14,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a264dbde443d01f8aac6dbf1b5a5bc8be55f24bc7b620d83df568162568d87e3,PodSandboxId:235cd1baa68378fdc3b42b8732accd478cf9be77908f67bea7fc2f7f7b797864,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733434133541697951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m89x5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab80f4e-1848-432c-894c-213567ce8fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0c79898bdc6f355ad527160dd5503e0036465e0d840b040c0b4bb8d0700811d,PodSandboxId:78eda860655e08e7aec83c8fb60fe9ac39346afc651d9d2c6423eb94bbfdc0f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733434132839377072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49532404-faec-41e0-8b53-c750a91316a2,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e5d1d60f086d32798fa3df1e5c9927d206565ed86871e1751c8692cd4952dc8,PodSandboxId:742c682aba1f3afa0eb6c9bd517699292e943661b79811caf15685dd300293a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733434132763087760,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf514bd8f9b21a7f8098a2aa18e8cb14,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dea54f3f077c4172cb2b439c4b5ad262b0b9c13ef0822feb2a281f680a745a27,PodSandboxId:8f6a566611e8e4e29d688480f14d61140e5f2c61b5763fd3081fcd42d68ae43e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733434132760325518,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d370492f1798b8ef95dcf4d25f3b7822,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85815ac26a7aed256a15fd57f62ca745ed7ef46f0f47afcde087af82f2abad8d,PodSandboxId:a9a7b1e38000ab90e23050564aa27bc937cf5433f17ed36390409d059df9e876,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733434132653591617,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a91f119bc5b1d26a4eebc093c893c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dc087a55ad29c8cd45df03a1adf6319c36abaada5002a4c7d26ab651bb65860,PodSandboxId:b3e1dc6d7a1bd19e1f2bec85519d33b147313a40eae884c48c03b31252700e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733434132614498298,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50860d75af521a7befd511acd7d7a982,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b85bb23ec9b1c69ca73efd49c36774e0b016015008fba6e884c4bc0eea3ebc,PodSandboxId:7ffff40b1fb85807a5f596156ff2ec6ea1ad4a8dfd704fd7a3949d0ea30e9084,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733434087232651883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m89x5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab80f4e-1848-432c-894c-213567ce8fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3892d8954e0a332b3308223476fb5fd6d532234078eaeaf648642e6f90186146,PodSandboxId:de24ba68f22e4f767cc25b3429d039ebefceadaf41f56b01c4e4e79f30721f80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733434087033610961,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8984,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 49532404-faec-41e0-8b53-c750a91316a2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=23eb36f0-8c63-4459-8157-a1c612d54c20 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.816500140Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3eb2815d-7046-4ac6-b45f-5a4d58ac6218 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.816593283Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3eb2815d-7046-4ac6-b45f-5a4d58ac6218 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.818357726Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=db7c3bf6-8f07-49bb-8a82-fb17f5ae85f6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.818951435Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733434169818925092,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=db7c3bf6-8f07-49bb-8a82-fb17f5ae85f6 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.819701223Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5e343760-8a0f-4421-8350-d63e3d0a6ba8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.819782868Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5e343760-8a0f-4421-8350-d63e3d0a6ba8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.820079908Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:900d051beb4b3a11d2c7411b4584d9835ed1d70e9dad87c0fc4c63772fd7c952,PodSandboxId:8f6a566611e8e4e29d688480f14d61140e5f2c61b5763fd3081fcd42d68ae43e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733434146547623998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d370492f1798b8ef95dcf4d25f3b7822,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3185656b82ff3ab93b94ab9f18a3e35d93c8126b9947c8545d8597d3e195287c,PodSandboxId:a9a7b1e38000ab90e23050564aa27bc937cf5433f17ed36390409d059df9e876,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733434146530948019,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a91f119bc5b1d26a4eebc093c893c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6529c21096477198df6ea8c87227fc1058f6dbd903460837c80b85096c8de375,PodSandboxId:b3e1dc6d7a1bd19e1f2bec85519d33b147313a40eae884c48c03b31252700e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733434146506426131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50860d75af521a7befd511acd7d7a982,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ff7b87acb7e28b3b08f6a068e5b2ad715e84b8e5299cf646493a6679127570,PodSandboxId:742c682aba1f3afa0eb6c9bd517699292e943661b79811caf15685dd300293a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733434146490207927,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf514bd8f9b21a7f8098a2aa18e8cb14,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a264dbde443d01f8aac6dbf1b5a5bc8be55f24bc7b620d83df568162568d87e3,PodSandboxId:235cd1baa68378fdc3b42b8732accd478cf9be77908f67bea7fc2f7f7b797864,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733434133541697951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m89x5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab80f4e-1848-432c-894c-213567ce8fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0c79898bdc6f355ad527160dd5503e0036465e0d840b040c0b4bb8d0700811d,PodSandboxId:78eda860655e08e7aec83c8fb60fe9ac39346afc651d9d2c6423eb94bbfdc0f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733434132839377072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49532404-faec-41e0-8b53-c750a91316a2,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e5d1d60f086d32798fa3df1e5c9927d206565ed86871e1751c8692cd4952dc8,PodSandboxId:742c682aba1f3afa0eb6c9bd517699292e943661b79811caf15685dd300293a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733434132763087760,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf514bd8f9b21a7f8098a2aa18e8cb14,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dea54f3f077c4172cb2b439c4b5ad262b0b9c13ef0822feb2a281f680a745a27,PodSandboxId:8f6a566611e8e4e29d688480f14d61140e5f2c61b5763fd3081fcd42d68ae43e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733434132760325518,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d370492f1798b8ef95dcf4d25f3b7822,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85815ac26a7aed256a15fd57f62ca745ed7ef46f0f47afcde087af82f2abad8d,PodSandboxId:a9a7b1e38000ab90e23050564aa27bc937cf5433f17ed36390409d059df9e876,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733434132653591617,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a91f119bc5b1d26a4eebc093c893c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dc087a55ad29c8cd45df03a1adf6319c36abaada5002a4c7d26ab651bb65860,PodSandboxId:b3e1dc6d7a1bd19e1f2bec85519d33b147313a40eae884c48c03b31252700e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733434132614498298,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50860d75af521a7befd511acd7d7a982,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b85bb23ec9b1c69ca73efd49c36774e0b016015008fba6e884c4bc0eea3ebc,PodSandboxId:7ffff40b1fb85807a5f596156ff2ec6ea1ad4a8dfd704fd7a3949d0ea30e9084,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733434087232651883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m89x5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab80f4e-1848-432c-894c-213567ce8fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3892d8954e0a332b3308223476fb5fd6d532234078eaeaf648642e6f90186146,PodSandboxId:de24ba68f22e4f767cc25b3429d039ebefceadaf41f56b01c4e4e79f30721f80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733434087033610961,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8984,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 49532404-faec-41e0-8b53-c750a91316a2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5e343760-8a0f-4421-8350-d63e3d0a6ba8 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.872330397Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1240ecec-2729-4830-b36d-c6b10c527393 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.872430987Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1240ecec-2729-4830-b36d-c6b10c527393 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.873998084Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=12aabfcb-82dd-421c-810d-fbff80a8892b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.874529302Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733434169874504902,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=12aabfcb-82dd-421c-810d-fbff80a8892b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.875183597Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=179cfe4b-c803-46d2-91e1-03adf9944ecb name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.875259438Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=179cfe4b-c803-46d2-91e1-03adf9944ecb name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:29:29 pause-068873 crio[2298]: time="2024-12-05 21:29:29.875614165Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:900d051beb4b3a11d2c7411b4584d9835ed1d70e9dad87c0fc4c63772fd7c952,PodSandboxId:8f6a566611e8e4e29d688480f14d61140e5f2c61b5763fd3081fcd42d68ae43e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733434146547623998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d370492f1798b8ef95dcf4d25f3b7822,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3185656b82ff3ab93b94ab9f18a3e35d93c8126b9947c8545d8597d3e195287c,PodSandboxId:a9a7b1e38000ab90e23050564aa27bc937cf5433f17ed36390409d059df9e876,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733434146530948019,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a91f119bc5b1d26a4eebc093c893c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6529c21096477198df6ea8c87227fc1058f6dbd903460837c80b85096c8de375,PodSandboxId:b3e1dc6d7a1bd19e1f2bec85519d33b147313a40eae884c48c03b31252700e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733434146506426131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50860d75af521a7befd511acd7d7a982,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ff7b87acb7e28b3b08f6a068e5b2ad715e84b8e5299cf646493a6679127570,PodSandboxId:742c682aba1f3afa0eb6c9bd517699292e943661b79811caf15685dd300293a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733434146490207927,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf514bd8f9b21a7f8098a2aa18e8cb14,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a264dbde443d01f8aac6dbf1b5a5bc8be55f24bc7b620d83df568162568d87e3,PodSandboxId:235cd1baa68378fdc3b42b8732accd478cf9be77908f67bea7fc2f7f7b797864,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733434133541697951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m89x5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab80f4e-1848-432c-894c-213567ce8fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0c79898bdc6f355ad527160dd5503e0036465e0d840b040c0b4bb8d0700811d,PodSandboxId:78eda860655e08e7aec83c8fb60fe9ac39346afc651d9d2c6423eb94bbfdc0f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733434132839377072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49532404-faec-41e0-8b53-c750a91316a2,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e5d1d60f086d32798fa3df1e5c9927d206565ed86871e1751c8692cd4952dc8,PodSandboxId:742c682aba1f3afa0eb6c9bd517699292e943661b79811caf15685dd300293a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733434132763087760,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf514bd8f9b21a7f8098a2aa18e8cb14,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dea54f3f077c4172cb2b439c4b5ad262b0b9c13ef0822feb2a281f680a745a27,PodSandboxId:8f6a566611e8e4e29d688480f14d61140e5f2c61b5763fd3081fcd42d68ae43e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733434132760325518,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d370492f1798b8ef95dcf4d25f3b7822,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85815ac26a7aed256a15fd57f62ca745ed7ef46f0f47afcde087af82f2abad8d,PodSandboxId:a9a7b1e38000ab90e23050564aa27bc937cf5433f17ed36390409d059df9e876,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733434132653591617,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a91f119bc5b1d26a4eebc093c893c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dc087a55ad29c8cd45df03a1adf6319c36abaada5002a4c7d26ab651bb65860,PodSandboxId:b3e1dc6d7a1bd19e1f2bec85519d33b147313a40eae884c48c03b31252700e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733434132614498298,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50860d75af521a7befd511acd7d7a982,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b85bb23ec9b1c69ca73efd49c36774e0b016015008fba6e884c4bc0eea3ebc,PodSandboxId:7ffff40b1fb85807a5f596156ff2ec6ea1ad4a8dfd704fd7a3949d0ea30e9084,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733434087232651883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m89x5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab80f4e-1848-432c-894c-213567ce8fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3892d8954e0a332b3308223476fb5fd6d532234078eaeaf648642e6f90186146,PodSandboxId:de24ba68f22e4f767cc25b3429d039ebefceadaf41f56b01c4e4e79f30721f80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733434087033610961,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8984,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 49532404-faec-41e0-8b53-c750a91316a2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=179cfe4b-c803-46d2-91e1-03adf9944ecb name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	900d051beb4b3       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   23 seconds ago       Running             kube-apiserver            2                   8f6a566611e8e       kube-apiserver-pause-068873
	3185656b82ff3       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   23 seconds ago       Running             kube-scheduler            2                   a9a7b1e38000a       kube-scheduler-pause-068873
	6529c21096477       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   23 seconds ago       Running             kube-controller-manager   2                   b3e1dc6d7a1bd       kube-controller-manager-pause-068873
	88ff7b87acb7e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   23 seconds ago       Running             etcd                      2                   742c682aba1f3       etcd-pause-068873
	a264dbde443d0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   36 seconds ago       Running             coredns                   1                   235cd1baa6837       coredns-7c65d6cfc9-m89x5
	b0c79898bdc6f       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   37 seconds ago       Running             kube-proxy                1                   78eda860655e0       kube-proxy-h8984
	9e5d1d60f086d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   37 seconds ago       Exited              etcd                      1                   742c682aba1f3       etcd-pause-068873
	dea54f3f077c4       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   37 seconds ago       Exited              kube-apiserver            1                   8f6a566611e8e       kube-apiserver-pause-068873
	85815ac26a7ae       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   37 seconds ago       Exited              kube-scheduler            1                   a9a7b1e38000a       kube-scheduler-pause-068873
	5dc087a55ad29       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   37 seconds ago       Exited              kube-controller-manager   1                   b3e1dc6d7a1bd       kube-controller-manager-pause-068873
	47b85bb23ec9b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   7ffff40b1fb85       coredns-7c65d6cfc9-m89x5
	3892d8954e0a3       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   About a minute ago   Exited              kube-proxy                0                   de24ba68f22e4       kube-proxy-h8984
	
	
	==> coredns [47b85bb23ec9b1c69ca73efd49c36774e0b016015008fba6e884c4bc0eea3ebc] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51231 - 52195 "HINFO IN 9055947614407638998.2670647647563321197. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024052667s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a264dbde443d01f8aac6dbf1b5a5bc8be55f24bc7b620d83df568162568d87e3] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:52838->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:52850->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[2014967982]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Dec-2024 21:28:53.955) (total time: 10548ms):
	Trace[2014967982]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:52850->10.96.0.1:443: read: connection reset by peer 10548ms (21:29:04.503)
	Trace[2014967982]: [10.548089105s] [10.548089105s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:52850->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:52828->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[433478169]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Dec-2024 21:28:53.953) (total time: 10550ms):
	Trace[433478169]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:52828->10.96.0.1:443: read: connection reset by peer 10550ms (21:29:04.503)
	Trace[433478169]: [10.550556975s] [10.550556975s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:52828->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               pause-068873
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-068873
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=pause-068873
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T21_28_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 21:27:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-068873
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 21:29:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 21:29:09 +0000   Thu, 05 Dec 2024 21:27:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 21:29:09 +0000   Thu, 05 Dec 2024 21:27:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 21:29:09 +0000   Thu, 05 Dec 2024 21:27:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 21:29:09 +0000   Thu, 05 Dec 2024 21:28:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.229
	  Hostname:    pause-068873
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 7a5c3137534249e6a0c2bf6edb14181c
	  System UUID:                7a5c3137-5342-49e6-a0c2-bf6edb14181c
	  Boot ID:                    ebff1754-a929-4f7a-845b-5c160559166e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-m89x5                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     84s
	  kube-system                 etcd-pause-068873                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         89s
	  kube-system                 kube-apiserver-pause-068873             250m (12%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-pause-068873    200m (10%)    0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-h8984                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-pause-068873             100m (5%)     0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 82s                kube-proxy       
	  Normal  Starting                 20s                kube-proxy       
	  Normal  Starting                 90s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     89s                kubelet          Node pause-068873 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  89s                kubelet          Node pause-068873 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    89s                kubelet          Node pause-068873 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                89s                kubelet          Node pause-068873 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  89s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           85s                node-controller  Node pause-068873 event: Registered Node pause-068873 in Controller
	  Normal  Starting                 24s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  24s (x8 over 24s)  kubelet          Node pause-068873 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24s (x8 over 24s)  kubelet          Node pause-068873 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24s (x7 over 24s)  kubelet          Node pause-068873 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18s                node-controller  Node pause-068873 event: Registered Node pause-068873 in Controller
	
	
	==> dmesg <==
	[  +0.056850] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064884] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.192166] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.116968] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.297655] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +4.206175] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +4.777100] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.067964] kauditd_printk_skb: 158 callbacks suppressed
	[Dec 5 21:28] systemd-fstab-generator[1214]: Ignoring "noauto" option for root device
	[  +0.074308] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.343707] systemd-fstab-generator[1337]: Ignoring "noauto" option for root device
	[  +0.125897] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.581825] kauditd_printk_skb: 103 callbacks suppressed
	[ +30.771423] systemd-fstab-generator[2222]: Ignoring "noauto" option for root device
	[  +0.131526] systemd-fstab-generator[2234]: Ignoring "noauto" option for root device
	[  +0.161634] systemd-fstab-generator[2248]: Ignoring "noauto" option for root device
	[  +0.137428] systemd-fstab-generator[2260]: Ignoring "noauto" option for root device
	[  +0.276580] systemd-fstab-generator[2288]: Ignoring "noauto" option for root device
	[  +8.309436] systemd-fstab-generator[2410]: Ignoring "noauto" option for root device
	[  +0.074645] kauditd_printk_skb: 100 callbacks suppressed
	[Dec 5 21:29] kauditd_printk_skb: 86 callbacks suppressed
	[  +4.693453] systemd-fstab-generator[3183]: Ignoring "noauto" option for root device
	[  +0.340258] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.043578] kauditd_printk_skb: 16 callbacks suppressed
	[ +12.491115] systemd-fstab-generator[3525]: Ignoring "noauto" option for root device
	
	
	==> etcd [88ff7b87acb7e28b3b08f6a068e5b2ad715e84b8e5299cf646493a6679127570] <==
	{"level":"info","ts":"2024-12-05T21:29:06.884179Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T21:29:06.894933Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-05T21:29:06.892178Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"82811d29f3e953c3","local-member-id":"c8f87299e6c07be2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T21:29:06.896172Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.229:2380"}
	{"level":"info","ts":"2024-12-05T21:29:06.910290Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.229:2380"}
	{"level":"info","ts":"2024-12-05T21:29:06.910482Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T21:29:06.910805Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"c8f87299e6c07be2","initial-advertise-peer-urls":["https://192.168.72.229:2380"],"listen-peer-urls":["https://192.168.72.229:2380"],"advertise-client-urls":["https://192.168.72.229:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.229:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-05T21:29:06.910979Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-05T21:29:07.919185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c8f87299e6c07be2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-12-05T21:29:07.919296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c8f87299e6c07be2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-05T21:29:07.919343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c8f87299e6c07be2 received MsgPreVoteResp from c8f87299e6c07be2 at term 2"}
	{"level":"info","ts":"2024-12-05T21:29:07.919385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c8f87299e6c07be2 became candidate at term 3"}
	{"level":"info","ts":"2024-12-05T21:29:07.919409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c8f87299e6c07be2 received MsgVoteResp from c8f87299e6c07be2 at term 3"}
	{"level":"info","ts":"2024-12-05T21:29:07.919443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c8f87299e6c07be2 became leader at term 3"}
	{"level":"info","ts":"2024-12-05T21:29:07.919468Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c8f87299e6c07be2 elected leader c8f87299e6c07be2 at term 3"}
	{"level":"info","ts":"2024-12-05T21:29:07.924976Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c8f87299e6c07be2","local-member-attributes":"{Name:pause-068873 ClientURLs:[https://192.168.72.229:2379]}","request-path":"/0/members/c8f87299e6c07be2/attributes","cluster-id":"82811d29f3e953c3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T21:29:07.925232Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T21:29:07.926185Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T21:29:07.926938Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T21:29:07.927087Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T21:29:07.927293Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T21:29:07.927327Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-05T21:29:07.927801Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T21:29:07.928575Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.229:2379"}
	{"level":"info","ts":"2024-12-05T21:29:10.267220Z","caller":"traceutil/trace.go:171","msg":"trace[305759639] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"108.379247ms","start":"2024-12-05T21:29:10.158811Z","end":"2024-12-05T21:29:10.267190Z","steps":["trace[305759639] 'process raft request'  (duration: 103.740479ms)"],"step_count":1}
	
	
	==> etcd [9e5d1d60f086d32798fa3df1e5c9927d206565ed86871e1751c8692cd4952dc8] <==
	
	
	==> kernel <==
	 21:29:30 up 2 min,  0 users,  load average: 0.46, 0.21, 0.08
	Linux pause-068873 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [900d051beb4b3a11d2c7411b4584d9835ed1d70e9dad87c0fc4c63772fd7c952] <==
	I1205 21:29:09.453383       1 aggregator.go:171] initial CRD sync complete...
	I1205 21:29:09.453523       1 autoregister_controller.go:144] Starting autoregister controller
	I1205 21:29:09.453576       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 21:29:09.454204       1 shared_informer.go:320] Caches are synced for configmaps
	I1205 21:29:09.491095       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1205 21:29:09.510629       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1205 21:29:09.511872       1 policy_source.go:224] refreshing policies
	I1205 21:29:09.552651       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1205 21:29:09.552697       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1205 21:29:09.553207       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1205 21:29:09.553681       1 cache.go:39] Caches are synced for autoregister controller
	I1205 21:29:09.553917       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1205 21:29:09.554007       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1205 21:29:09.555162       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 21:29:09.565789       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1205 21:29:09.571515       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E1205 21:29:09.587332       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1205 21:29:10.348176       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 21:29:10.895763       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1205 21:29:10.916842       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1205 21:29:10.964804       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1205 21:29:11.008839       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 21:29:11.022309       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 21:29:13.116495       1 controller.go:615] quota admission added evaluator for: endpoints
	I1205 21:29:13.164582       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [dea54f3f077c4172cb2b439c4b5ad262b0b9c13ef0822feb2a281f680a745a27] <==
	I1205 21:28:53.306207       1 options.go:228] external host was not specified, using 192.168.72.229
	I1205 21:28:53.312317       1 server.go:142] Version: v1.31.2
	I1205 21:28:53.312377       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 21:28:54.143140       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W1205 21:28:54.143871       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:28:54.143955       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1205 21:28:54.154095       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1205 21:28:54.159149       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1205 21:28:54.159234       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1205 21:28:54.159497       1 instance.go:232] Using reconciler: lease
	W1205 21:28:54.162592       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:28:55.144486       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:28:55.144554       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:28:55.163970       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:28:56.580452       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:28:56.747206       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:28:56.918496       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:28:58.979782       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:28:59.649343       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:28:59.734875       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:29:03.197852       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [5dc087a55ad29c8cd45df03a1adf6319c36abaada5002a4c7d26ab651bb65860] <==
	I1205 21:28:53.803703       1 serving.go:386] Generated self-signed cert in-memory
	I1205 21:28:54.611369       1 controllermanager.go:197] "Starting" version="v1.31.2"
	I1205 21:28:54.611409       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 21:28:54.612745       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1205 21:28:54.612939       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1205 21:28:54.612947       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1205 21:28:54.612962       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [6529c21096477198df6ea8c87227fc1058f6dbd903460837c80b85096c8de375] <==
	I1205 21:29:12.813077       1 shared_informer.go:320] Caches are synced for endpoint
	I1205 21:29:12.819506       1 shared_informer.go:320] Caches are synced for PV protection
	I1205 21:29:12.822844       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1205 21:29:12.825398       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1205 21:29:12.830250       1 shared_informer.go:320] Caches are synced for TTL
	I1205 21:29:12.830323       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1205 21:29:12.830632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="67.983014ms"
	I1205 21:29:12.831236       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="110.233µs"
	I1205 21:29:12.836003       1 shared_informer.go:320] Caches are synced for taint
	I1205 21:29:12.836159       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1205 21:29:12.836303       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-068873"
	I1205 21:29:12.836374       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1205 21:29:12.839319       1 shared_informer.go:320] Caches are synced for GC
	I1205 21:29:12.842877       1 shared_informer.go:320] Caches are synced for daemon sets
	I1205 21:29:12.890315       1 shared_informer.go:320] Caches are synced for resource quota
	I1205 21:29:12.894494       1 shared_informer.go:320] Caches are synced for deployment
	I1205 21:29:12.918649       1 shared_informer.go:320] Caches are synced for resource quota
	I1205 21:29:12.947632       1 shared_informer.go:320] Caches are synced for disruption
	I1205 21:29:13.000319       1 shared_informer.go:320] Caches are synced for attach detach
	I1205 21:29:13.016158       1 shared_informer.go:320] Caches are synced for persistent volume
	I1205 21:29:13.447210       1 shared_informer.go:320] Caches are synced for garbage collector
	I1205 21:29:13.463196       1 shared_informer.go:320] Caches are synced for garbage collector
	I1205 21:29:13.463225       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1205 21:29:20.725689       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="19.105952ms"
	I1205 21:29:20.725849       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.034µs"
	
	
	==> kube-proxy [3892d8954e0a332b3308223476fb5fd6d532234078eaeaf648642e6f90186146] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 21:28:07.474809       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 21:28:07.514363       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.229"]
	E1205 21:28:07.514608       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 21:28:07.546295       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 21:28:07.546343       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 21:28:07.546369       1 server_linux.go:169] "Using iptables Proxier"
	I1205 21:28:07.550378       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 21:28:07.551352       1 server.go:483] "Version info" version="v1.31.2"
	I1205 21:28:07.551381       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 21:28:07.553624       1 config.go:105] "Starting endpoint slice config controller"
	I1205 21:28:07.554123       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 21:28:07.554190       1 config.go:199] "Starting service config controller"
	I1205 21:28:07.554209       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 21:28:07.555412       1 config.go:328] "Starting node config controller"
	I1205 21:28:07.555447       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 21:28:07.654675       1 shared_informer.go:320] Caches are synced for service config
	I1205 21:28:07.654747       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 21:28:07.656093       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [b0c79898bdc6f355ad527160dd5503e0036465e0d840b040c0b4bb8d0700811d] <==
	 >
	E1205 21:28:54.309209       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 21:29:04.502355       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-068873\": dial tcp 192.168.72.229:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.72.229:38798->192.168.72.229:8443: read: connection reset by peer"
	E1205 21:29:05.646377       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-068873\": dial tcp 192.168.72.229:8443: connect: connection refused"
	I1205 21:29:09.501314       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.229"]
	E1205 21:29:09.501536       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 21:29:09.583253       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 21:29:09.583373       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 21:29:09.583412       1 server_linux.go:169] "Using iptables Proxier"
	I1205 21:29:09.586444       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 21:29:09.586702       1 server.go:483] "Version info" version="v1.31.2"
	I1205 21:29:09.586726       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 21:29:09.588265       1 config.go:199] "Starting service config controller"
	I1205 21:29:09.588299       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 21:29:09.588330       1 config.go:105] "Starting endpoint slice config controller"
	I1205 21:29:09.588334       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 21:29:09.588770       1 config.go:328] "Starting node config controller"
	I1205 21:29:09.588795       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 21:29:09.688422       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 21:29:09.688488       1 shared_informer.go:320] Caches are synced for service config
	I1205 21:29:09.689139       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3185656b82ff3ab93b94ab9f18a3e35d93c8126b9947c8545d8597d3e195287c] <==
	I1205 21:29:07.540492       1 serving.go:386] Generated self-signed cert in-memory
	W1205 21:29:09.480530       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 21:29:09.480716       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 21:29:09.480825       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 21:29:09.480866       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 21:29:09.525003       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1205 21:29:09.528104       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 21:29:09.530803       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 21:29:09.530888       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 21:29:09.531597       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1205 21:29:09.531700       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 21:29:09.631619       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [85815ac26a7aed256a15fd57f62ca745ed7ef46f0f47afcde087af82f2abad8d] <==
	I1205 21:28:53.946204       1 serving.go:386] Generated self-signed cert in-memory
	W1205 21:29:04.501301       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.168.72.229:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.72.229:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.72.229:38782->192.168.72.229:8443: read: connection reset by peer
	W1205 21:29:04.501339       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 21:29:04.501348       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 21:29:04.514848       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1205 21:29:04.514892       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1205 21:29:04.514912       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1205 21:29:04.517001       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E1205 21:29:04.517260       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	E1205 21:29:04.517354       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 05 21:29:06 pause-068873 kubelet[3190]: I1205 21:29:06.238785    3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50860d75af521a7befd511acd7d7a982-k8s-certs\") pod \"kube-controller-manager-pause-068873\" (UID: \"50860d75af521a7befd511acd7d7a982\") " pod="kube-system/kube-controller-manager-pause-068873"
	Dec 05 21:29:06 pause-068873 kubelet[3190]: I1205 21:29:06.424467    3190 kubelet_node_status.go:72] "Attempting to register node" node="pause-068873"
	Dec 05 21:29:06 pause-068873 kubelet[3190]: E1205 21:29:06.425541    3190 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.229:8443: connect: connection refused" node="pause-068873"
	Dec 05 21:29:06 pause-068873 kubelet[3190]: I1205 21:29:06.469194    3190 scope.go:117] "RemoveContainer" containerID="9e5d1d60f086d32798fa3df1e5c9927d206565ed86871e1751c8692cd4952dc8"
	Dec 05 21:29:06 pause-068873 kubelet[3190]: I1205 21:29:06.471671    3190 scope.go:117] "RemoveContainer" containerID="dea54f3f077c4172cb2b439c4b5ad262b0b9c13ef0822feb2a281f680a745a27"
	Dec 05 21:29:06 pause-068873 kubelet[3190]: I1205 21:29:06.472099    3190 scope.go:117] "RemoveContainer" containerID="5dc087a55ad29c8cd45df03a1adf6319c36abaada5002a4c7d26ab651bb65860"
	Dec 05 21:29:06 pause-068873 kubelet[3190]: I1205 21:29:06.475154    3190 scope.go:117] "RemoveContainer" containerID="85815ac26a7aed256a15fd57f62ca745ed7ef46f0f47afcde087af82f2abad8d"
	Dec 05 21:29:06 pause-068873 kubelet[3190]: E1205 21:29:06.636597    3190 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-068873?timeout=10s\": dial tcp 192.168.72.229:8443: connect: connection refused" interval="800ms"
	Dec 05 21:29:06 pause-068873 kubelet[3190]: I1205 21:29:06.827306    3190 kubelet_node_status.go:72] "Attempting to register node" node="pause-068873"
	Dec 05 21:29:06 pause-068873 kubelet[3190]: E1205 21:29:06.828413    3190 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.229:8443: connect: connection refused" node="pause-068873"
	Dec 05 21:29:06 pause-068873 kubelet[3190]: W1205 21:29:06.890704    3190 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.72.229:8443: connect: connection refused
	Dec 05 21:29:06 pause-068873 kubelet[3190]: E1205 21:29:06.890837    3190 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.72.229:8443: connect: connection refused" logger="UnhandledError"
	Dec 05 21:29:07 pause-068873 kubelet[3190]: I1205 21:29:07.630080    3190 kubelet_node_status.go:72] "Attempting to register node" node="pause-068873"
	Dec 05 21:29:09 pause-068873 kubelet[3190]: I1205 21:29:09.598948    3190 kubelet_node_status.go:111] "Node was previously registered" node="pause-068873"
	Dec 05 21:29:09 pause-068873 kubelet[3190]: I1205 21:29:09.599111    3190 kubelet_node_status.go:75] "Successfully registered node" node="pause-068873"
	Dec 05 21:29:09 pause-068873 kubelet[3190]: I1205 21:29:09.599142    3190 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 05 21:29:09 pause-068873 kubelet[3190]: I1205 21:29:09.600062    3190 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 05 21:29:10 pause-068873 kubelet[3190]: I1205 21:29:10.010633    3190 apiserver.go:52] "Watching apiserver"
	Dec 05 21:29:10 pause-068873 kubelet[3190]: I1205 21:29:10.033396    3190 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 05 21:29:10 pause-068873 kubelet[3190]: I1205 21:29:10.045290    3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49532404-faec-41e0-8b53-c750a91316a2-lib-modules\") pod \"kube-proxy-h8984\" (UID: \"49532404-faec-41e0-8b53-c750a91316a2\") " pod="kube-system/kube-proxy-h8984"
	Dec 05 21:29:10 pause-068873 kubelet[3190]: I1205 21:29:10.045408    3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49532404-faec-41e0-8b53-c750a91316a2-xtables-lock\") pod \"kube-proxy-h8984\" (UID: \"49532404-faec-41e0-8b53-c750a91316a2\") " pod="kube-system/kube-proxy-h8984"
	Dec 05 21:29:16 pause-068873 kubelet[3190]: E1205 21:29:16.127532    3190 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733434156126743726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:29:16 pause-068873 kubelet[3190]: E1205 21:29:16.127961    3190 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733434156126743726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:29:26 pause-068873 kubelet[3190]: E1205 21:29:26.132162    3190 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733434166129527911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:29:26 pause-068873 kubelet[3190]: E1205 21:29:26.132230    3190 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733434166129527911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-068873 -n pause-068873
helpers_test.go:261: (dbg) Run:  kubectl --context pause-068873 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-068873 -n pause-068873
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-068873 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-068873 logs -n 25: (1.654615439s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-279893 sudo cat                              | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo cat                              | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | /var/lib/kubelet/config.yaml                         |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo systemctl                        | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC |                     |
	|         | status docker --all --full                           |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo systemctl                        | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | cat docker --no-pager                                |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo cat                              | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | /etc/docker/daemon.json                              |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo docker                           | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC |                     |
	|         | system info                                          |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo systemctl                        | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC |                     |
	|         | status cri-docker --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo systemctl                        | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | cat cri-docker --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo cat                              | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo cat                              | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-055769                         | kubernetes-upgrade-055769 | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	| ssh     | -p auto-279893 sudo                                  | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | cri-dockerd --version                                |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo systemctl                        | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC |                     |
	|         | status containerd --all --full                       |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo systemctl                        | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | cat containerd --no-pager                            |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo cat                              | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | /lib/systemd/system/containerd.service               |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo cat                              | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | /etc/containerd/config.toml                          |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo containerd                       | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | config dump                                          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-055769                         | kubernetes-upgrade-055769 | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC |                     |
	|         | --memory=2200                                        |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                                    |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                                   |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo systemctl                        | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | status crio --all --full                             |                           |         |         |                     |                     |
	|         | --no-pager                                           |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo systemctl                        | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | cat crio --no-pager                                  |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo find                             | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | /etc/crio -type f -exec sh -c                        |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |                           |         |         |                     |                     |
	| ssh     | -p auto-279893 sudo crio                             | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | config                                               |                           |         |         |                     |                     |
	| delete  | -p auto-279893                                       | auto-279893               | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	| start   | -p calico-279893 --memory=3072                       | calico-279893             | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC |                     |
	|         | --alsologtostderr --wait=true                        |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |                           |         |         |                     |                     |
	|         | --cni=calico --driver=kvm2                           |                           |         |         |                     |                     |
	|         | --container-runtime=crio                             |                           |         |         |                     |                     |
	| ssh     | -p kindnet-279893 pgrep -a                           | kindnet-279893            | jenkins | v1.34.0 | 05 Dec 24 21:29 UTC | 05 Dec 24 21:29 UTC |
	|         | kubelet                                              |                           |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 21:29:13
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 21:29:13.346989  344606 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:29:13.347166  344606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:29:13.347184  344606 out.go:358] Setting ErrFile to fd 2...
	I1205 21:29:13.347192  344606 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:29:13.347480  344606 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 21:29:13.348258  344606 out.go:352] Setting JSON to false
	I1205 21:29:13.349478  344606 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":15101,"bootTime":1733419052,"procs":308,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 21:29:13.349611  344606 start.go:139] virtualization: kvm guest
	I1205 21:29:13.351815  344606 out.go:177] * [calico-279893] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 21:29:13.353637  344606 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 21:29:13.353680  344606 notify.go:220] Checking for updates...
	I1205 21:29:13.355200  344606 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 21:29:13.356646  344606 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:29:13.357998  344606 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:29:13.359512  344606 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 21:29:13.360868  344606 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 21:29:13.363007  344606 config.go:182] Loaded profile config "kindnet-279893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:29:13.363206  344606 config.go:182] Loaded profile config "kubernetes-upgrade-055769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:29:13.363415  344606 config.go:182] Loaded profile config "pause-068873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:29:13.363597  344606 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 21:29:13.410235  344606 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 21:29:13.411520  344606 start.go:297] selected driver: kvm2
	I1205 21:29:13.411545  344606 start.go:901] validating driver "kvm2" against <nil>
	I1205 21:29:13.411566  344606 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 21:29:13.412423  344606 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:29:13.412529  344606 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 21:29:13.432563  344606 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 21:29:13.432623  344606 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 21:29:13.432840  344606 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:29:13.432875  344606 cni.go:84] Creating CNI manager for "calico"
	I1205 21:29:13.432880  344606 start_flags.go:319] Found "Calico" CNI - setting NetworkPlugin=cni
	I1205 21:29:13.432924  344606 start.go:340] cluster config:
	{Name:calico-279893 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-279893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:29:13.433031  344606 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:29:13.438298  344606 out.go:177] * Starting "calico-279893" primary control-plane node in "calico-279893" cluster
	I1205 21:29:11.282348  342291 node_ready.go:53] node "kindnet-279893" has status "Ready":"False"
	I1205 21:29:12.286344  342291 node_ready.go:49] node "kindnet-279893" has status "Ready":"True"
	I1205 21:29:12.286375  342291 node_ready.go:38] duration metric: took 14.508260441s for node "kindnet-279893" to be "Ready" ...
	I1205 21:29:12.286389  342291 pod_ready.go:36] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:29:12.306219  342291 pod_ready.go:79] waiting up to 15m0s for pod "coredns-7c65d6cfc9-6w5gs" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:13.814329  342291 pod_ready.go:93] pod "coredns-7c65d6cfc9-6w5gs" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:13.814370  342291 pod_ready.go:82] duration metric: took 1.508109102s for pod "coredns-7c65d6cfc9-6w5gs" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:13.814386  342291 pod_ready.go:79] waiting up to 15m0s for pod "etcd-kindnet-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:13.820608  342291 pod_ready.go:93] pod "etcd-kindnet-279893" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:13.820648  342291 pod_ready.go:82] duration metric: took 6.251844ms for pod "etcd-kindnet-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:13.820667  342291 pod_ready.go:79] waiting up to 15m0s for pod "kube-apiserver-kindnet-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:13.826894  342291 pod_ready.go:93] pod "kube-apiserver-kindnet-279893" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:13.826931  342291 pod_ready.go:82] duration metric: took 6.251271ms for pod "kube-apiserver-kindnet-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:13.826948  342291 pod_ready.go:79] waiting up to 15m0s for pod "kube-controller-manager-kindnet-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:13.833177  342291 pod_ready.go:93] pod "kube-controller-manager-kindnet-279893" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:13.833210  342291 pod_ready.go:82] duration metric: took 6.253076ms for pod "kube-controller-manager-kindnet-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:13.833223  342291 pod_ready.go:79] waiting up to 15m0s for pod "kube-proxy-bpf8v" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:13.883951  342291 pod_ready.go:93] pod "kube-proxy-bpf8v" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:13.883984  342291 pod_ready.go:82] duration metric: took 50.752128ms for pod "kube-proxy-bpf8v" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:13.884001  342291 pod_ready.go:79] waiting up to 15m0s for pod "kube-scheduler-kindnet-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:14.281848  342291 pod_ready.go:93] pod "kube-scheduler-kindnet-279893" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:14.281876  342291 pod_ready.go:82] duration metric: took 397.868078ms for pod "kube-scheduler-kindnet-279893" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:14.281894  342291 pod_ready.go:39] duration metric: took 1.995484315s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:29:14.281941  342291 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:29:14.282005  342291 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:29:14.297888  342291 api_server.go:72] duration metric: took 16.809082703s to wait for apiserver process to appear ...
	I1205 21:29:14.297945  342291 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:29:14.297976  342291 api_server.go:253] Checking apiserver healthz at https://192.168.61.132:8443/healthz ...
	I1205 21:29:14.303633  342291 api_server.go:279] https://192.168.61.132:8443/healthz returned 200:
	ok
	I1205 21:29:14.304788  342291 api_server.go:141] control plane version: v1.31.2
	I1205 21:29:14.304814  342291 api_server.go:131] duration metric: took 6.86096ms to wait for apiserver health ...
	I1205 21:29:14.304823  342291 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:29:14.485746  342291 system_pods.go:59] 8 kube-system pods found
	I1205 21:29:14.485781  342291 system_pods.go:61] "coredns-7c65d6cfc9-6w5gs" [81c4cdd2-0091-49da-a8b3-dd618c2f87d5] Running
	I1205 21:29:14.485787  342291 system_pods.go:61] "etcd-kindnet-279893" [dc826fbf-9a3f-42cd-9fe0-e8a0bc7ffc39] Running
	I1205 21:29:14.485790  342291 system_pods.go:61] "kindnet-jrhgp" [b726cba5-e0b6-4787-9c47-e0d3b5a92ff5] Running
	I1205 21:29:14.485794  342291 system_pods.go:61] "kube-apiserver-kindnet-279893" [e3732181-c6bd-44ba-860e-441030f93961] Running
	I1205 21:29:14.485798  342291 system_pods.go:61] "kube-controller-manager-kindnet-279893" [25586fef-1f1e-416f-8b71-567855c665fb] Running
	I1205 21:29:14.485802  342291 system_pods.go:61] "kube-proxy-bpf8v" [d4b2a289-2449-43bc-92a3-1cd3c5b44693] Running
	I1205 21:29:14.485805  342291 system_pods.go:61] "kube-scheduler-kindnet-279893" [e9ceeccc-ad5b-4c63-8389-ac5bc94f20f6] Running
	I1205 21:29:14.485808  342291 system_pods.go:61] "storage-provisioner" [a15924b6-a2fd-4ecc-8e9a-10a3b15f8b54] Running
	I1205 21:29:14.485815  342291 system_pods.go:74] duration metric: took 180.985521ms to wait for pod list to return data ...
	I1205 21:29:14.485823  342291 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:29:14.682961  342291 default_sa.go:45] found service account: "default"
	I1205 21:29:14.682994  342291 default_sa.go:55] duration metric: took 197.164ms for default service account to be created ...
	I1205 21:29:14.683005  342291 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:29:14.883824  342291 system_pods.go:86] 8 kube-system pods found
	I1205 21:29:14.883858  342291 system_pods.go:89] "coredns-7c65d6cfc9-6w5gs" [81c4cdd2-0091-49da-a8b3-dd618c2f87d5] Running
	I1205 21:29:14.883864  342291 system_pods.go:89] "etcd-kindnet-279893" [dc826fbf-9a3f-42cd-9fe0-e8a0bc7ffc39] Running
	I1205 21:29:14.883868  342291 system_pods.go:89] "kindnet-jrhgp" [b726cba5-e0b6-4787-9c47-e0d3b5a92ff5] Running
	I1205 21:29:14.883875  342291 system_pods.go:89] "kube-apiserver-kindnet-279893" [e3732181-c6bd-44ba-860e-441030f93961] Running
	I1205 21:29:14.883880  342291 system_pods.go:89] "kube-controller-manager-kindnet-279893" [25586fef-1f1e-416f-8b71-567855c665fb] Running
	I1205 21:29:14.883885  342291 system_pods.go:89] "kube-proxy-bpf8v" [d4b2a289-2449-43bc-92a3-1cd3c5b44693] Running
	I1205 21:29:14.883890  342291 system_pods.go:89] "kube-scheduler-kindnet-279893" [e9ceeccc-ad5b-4c63-8389-ac5bc94f20f6] Running
	I1205 21:29:14.883895  342291 system_pods.go:89] "storage-provisioner" [a15924b6-a2fd-4ecc-8e9a-10a3b15f8b54] Running
	I1205 21:29:14.883905  342291 system_pods.go:126] duration metric: took 200.892911ms to wait for k8s-apps to be running ...
	I1205 21:29:14.883918  342291 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:29:14.883981  342291 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:29:14.905337  342291 system_svc.go:56] duration metric: took 21.407039ms WaitForService to wait for kubelet
	I1205 21:29:14.905371  342291 kubeadm.go:582] duration metric: took 17.416576405s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:29:14.905392  342291 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:29:15.083144  342291 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:29:15.083175  342291 node_conditions.go:123] node cpu capacity is 2
	I1205 21:29:15.083187  342291 node_conditions.go:105] duration metric: took 177.789262ms to run NodePressure ...
	I1205 21:29:15.083200  342291 start.go:241] waiting for startup goroutines ...
	I1205 21:29:15.083208  342291 start.go:246] waiting for cluster config update ...
	I1205 21:29:15.083221  342291 start.go:255] writing updated cluster config ...
	I1205 21:29:15.083549  342291 ssh_runner.go:195] Run: rm -f paused
	I1205 21:29:15.140099  342291 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:29:15.142153  342291 out.go:177] * Done! kubectl is now configured to use "kindnet-279893" cluster and "default" namespace by default
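
Aside: the api_server.go lines above poll https://<node-ip>:8443/healthz until it answers 200 with body "ok" before the bring-up continues. A minimal, purely illustrative sketch of that kind of probe (not minikube's code; TLS verification is skipped here only to keep it short):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitHealthz polls the apiserver healthz endpoint until it answers
    // 200 "ok" or the timeout expires. Skipping TLS verification is only to
    // keep the sketch short; a real client would trust the cluster CA.
    func waitHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
    }

    func main() {
        if err := waitHealthz("https://192.168.61.132:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
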
	I1205 21:29:11.135922  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .Start
	I1205 21:29:11.137174  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Ensuring networks are active...
	I1205 21:29:11.137193  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Ensuring network default is active
	I1205 21:29:11.137583  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Ensuring network mk-kubernetes-upgrade-055769 is active
	I1205 21:29:11.138154  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Getting domain xml...
	I1205 21:29:11.138939  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Creating domain...
	I1205 21:29:13.175451  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Waiting to get IP...
	I1205 21:29:13.176326  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:13.176808  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:13.176867  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:13.176783  344370 retry.go:31] will retry after 242.423143ms: waiting for machine to come up
	I1205 21:29:13.421441  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:13.422097  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:13.422142  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:13.422084  344370 retry.go:31] will retry after 259.314158ms: waiting for machine to come up
	I1205 21:29:13.683390  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:13.684339  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:13.684374  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:13.684278  344370 retry.go:31] will retry after 367.110434ms: waiting for machine to come up
	I1205 21:29:14.053029  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:14.053632  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:14.053661  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:14.053553  344370 retry.go:31] will retry after 389.382342ms: waiting for machine to come up
	I1205 21:29:14.444074  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:14.444585  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:14.444616  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:14.444533  344370 retry.go:31] will retry after 468.986078ms: waiting for machine to come up
	I1205 21:29:14.915044  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:14.915640  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:14.915663  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:14.915607  344370 retry.go:31] will retry after 637.563189ms: waiting for machine to come up
	I1205 21:29:15.554622  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:15.555092  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:15.555120  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:15.555034  344370 retry.go:31] will retry after 746.63641ms: waiting for machine to come up
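
Aside: the repeated "will retry after ..." lines come from retry.go, which re-checks the libvirt domain for an IP address with a growing, jittered delay between attempts. A rough sketch of that pattern (the backoff constants below are assumptions for illustration, not minikube's actual values):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // retryWithBackoff keeps calling check until it succeeds or attempts run
    // out, sleeping a jittered, growing delay in between. The constants are
    // placeholders chosen for the sketch.
    func retryWithBackoff(attempts int, base time.Duration, check func() error) error {
        delay := base
        for i := 0; i < attempts; i++ {
            if err := check(); err == nil {
                return nil
            }
            wait := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            delay = delay * 3 / 2 // grow roughly 1.5x per attempt
        }
        return errors.New("machine never reported an IP address")
    }

    func main() {
        _ = retryWithBackoff(10, 200*time.Millisecond, func() error {
            // placeholder for "look up the domain's DHCP lease and return nil once an IP exists"
            return errors.New("unable to find current IP address")
        })
    }
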
	I1205 21:29:13.096041  342771 pod_ready.go:103] pod "coredns-7c65d6cfc9-m89x5" in "kube-system" namespace has status "Ready":"False"
	I1205 21:29:15.596211  342771 pod_ready.go:103] pod "coredns-7c65d6cfc9-m89x5" in "kube-system" namespace has status "Ready":"False"
	I1205 21:29:13.439553  344606 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:29:13.439614  344606 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 21:29:13.439624  344606 cache.go:56] Caching tarball of preloaded images
	I1205 21:29:13.439751  344606 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 21:29:13.439766  344606 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 21:29:13.439861  344606 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/config.json ...
	I1205 21:29:13.439877  344606 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/config.json: {Name:mkdd4eb8bb0e43c9d03e1afaa7e64a727b7bf7aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:29:13.440043  344606 start.go:360] acquireMachinesLock for calico-279893: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:29:16.303746  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:16.304213  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:16.304242  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:16.304149  344370 retry.go:31] will retry after 1.031493653s: waiting for machine to come up
	I1205 21:29:17.337498  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:17.338099  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:17.338134  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:17.338027  344370 retry.go:31] will retry after 1.804164493s: waiting for machine to come up
	I1205 21:29:19.144284  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:19.144779  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:19.144802  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:19.144751  344370 retry.go:31] will retry after 1.851829535s: waiting for machine to come up
	I1205 21:29:17.599796  342771 pod_ready.go:103] pod "coredns-7c65d6cfc9-m89x5" in "kube-system" namespace has status "Ready":"False"
	I1205 21:29:20.096418  342771 pod_ready.go:103] pod "coredns-7c65d6cfc9-m89x5" in "kube-system" namespace has status "Ready":"False"
	I1205 21:29:21.097335  342771 pod_ready.go:93] pod "coredns-7c65d6cfc9-m89x5" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:21.097368  342771 pod_ready.go:82] duration metric: took 10.008912289s for pod "coredns-7c65d6cfc9-m89x5" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:21.097382  342771 pod_ready.go:79] waiting up to 4m0s for pod "etcd-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:23.105519  342771 pod_ready.go:103] pod "etcd-pause-068873" in "kube-system" namespace has status "Ready":"False"
	I1205 21:29:24.606197  342771 pod_ready.go:93] pod "etcd-pause-068873" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:24.606223  342771 pod_ready.go:82] duration metric: took 3.508834127s for pod "etcd-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:24.606235  342771 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:25.613479  342771 pod_ready.go:93] pod "kube-apiserver-pause-068873" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:25.613507  342771 pod_ready.go:82] duration metric: took 1.007263198s for pod "kube-apiserver-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:25.613530  342771 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:25.619033  342771 pod_ready.go:93] pod "kube-controller-manager-pause-068873" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:25.619074  342771 pod_ready.go:82] duration metric: took 5.522028ms for pod "kube-controller-manager-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:25.619084  342771 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-h8984" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:25.625431  342771 pod_ready.go:93] pod "kube-proxy-h8984" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:25.625457  342771 pod_ready.go:82] duration metric: took 6.366736ms for pod "kube-proxy-h8984" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:25.625467  342771 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:25.630379  342771 pod_ready.go:93] pod "kube-scheduler-pause-068873" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:25.630405  342771 pod_ready.go:82] duration metric: took 4.931949ms for pod "kube-scheduler-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:25.630414  342771 pod_ready.go:39] duration metric: took 14.547574761s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:29:25.630434  342771 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:29:25.643159  342771 ops.go:34] apiserver oom_adj: -16
	I1205 21:29:25.643207  342771 kubeadm.go:597] duration metric: took 32.568333804s to restartPrimaryControlPlane
	I1205 21:29:25.643232  342771 kubeadm.go:394] duration metric: took 33.048970815s to StartCluster
	I1205 21:29:25.643256  342771 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:29:25.643381  342771 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:29:25.644743  342771 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:29:25.645077  342771 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.229 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:29:25.645213  342771 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:29:25.645411  342771 config.go:182] Loaded profile config "pause-068873": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:29:25.646620  342771 out.go:177] * Verifying Kubernetes components...
	I1205 21:29:25.646620  342771 out.go:177] * Enabled addons: 
	I1205 21:29:20.998849  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:20.999306  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:20.999335  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:20.999245  344370 retry.go:31] will retry after 2.816150427s: waiting for machine to come up
	I1205 21:29:23.818788  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:23.819349  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:23.819384  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:23.819322  344370 retry.go:31] will retry after 2.432839332s: waiting for machine to come up
	I1205 21:29:25.648118  342771 addons.go:510] duration metric: took 2.923113ms for enable addons: enabled=[]
	I1205 21:29:25.648170  342771 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:29:25.804448  342771 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:29:25.820232  342771 node_ready.go:35] waiting up to 6m0s for node "pause-068873" to be "Ready" ...
	I1205 21:29:25.824092  342771 node_ready.go:49] node "pause-068873" has status "Ready":"True"
	I1205 21:29:25.824116  342771 node_ready.go:38] duration metric: took 3.83704ms for node "pause-068873" to be "Ready" ...
	I1205 21:29:25.824127  342771 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:29:25.829736  342771 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-m89x5" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:26.209556  342771 pod_ready.go:93] pod "coredns-7c65d6cfc9-m89x5" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:26.209585  342771 pod_ready.go:82] duration metric: took 379.819812ms for pod "coredns-7c65d6cfc9-m89x5" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:26.209597  342771 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:26.602993  342771 pod_ready.go:93] pod "etcd-pause-068873" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:26.603021  342771 pod_ready.go:82] duration metric: took 393.417802ms for pod "etcd-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:26.603032  342771 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:27.003490  342771 pod_ready.go:93] pod "kube-apiserver-pause-068873" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:27.003517  342771 pod_ready.go:82] duration metric: took 400.478382ms for pod "kube-apiserver-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:27.003529  342771 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:27.402100  342771 pod_ready.go:93] pod "kube-controller-manager-pause-068873" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:27.402130  342771 pod_ready.go:82] duration metric: took 398.594388ms for pod "kube-controller-manager-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:27.402144  342771 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-h8984" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:27.802503  342771 pod_ready.go:93] pod "kube-proxy-h8984" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:27.802543  342771 pod_ready.go:82] duration metric: took 400.390733ms for pod "kube-proxy-h8984" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:27.802566  342771 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:28.202435  342771 pod_ready.go:93] pod "kube-scheduler-pause-068873" in "kube-system" namespace has status "Ready":"True"
	I1205 21:29:28.202468  342771 pod_ready.go:82] duration metric: took 399.894419ms for pod "kube-scheduler-pause-068873" in "kube-system" namespace to be "Ready" ...
	I1205 21:29:28.202476  342771 pod_ready.go:39] duration metric: took 2.378340445s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:29:28.202496  342771 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:29:28.202580  342771 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:29:28.216817  342771 api_server.go:72] duration metric: took 2.571692582s to wait for apiserver process to appear ...
	I1205 21:29:28.216850  342771 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:29:28.216874  342771 api_server.go:253] Checking apiserver healthz at https://192.168.72.229:8443/healthz ...
	I1205 21:29:28.222417  342771 api_server.go:279] https://192.168.72.229:8443/healthz returned 200:
	ok
	I1205 21:29:28.223507  342771 api_server.go:141] control plane version: v1.31.2
	I1205 21:29:28.223532  342771 api_server.go:131] duration metric: took 6.674111ms to wait for apiserver health ...
	I1205 21:29:28.223543  342771 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:29:28.404100  342771 system_pods.go:59] 6 kube-system pods found
	I1205 21:29:28.404137  342771 system_pods.go:61] "coredns-7c65d6cfc9-m89x5" [6ab80f4e-1848-432c-894c-213567ce8fe3] Running
	I1205 21:29:28.404142  342771 system_pods.go:61] "etcd-pause-068873" [ef399741-5a7b-476c-9466-c348716eed83] Running
	I1205 21:29:28.404146  342771 system_pods.go:61] "kube-apiserver-pause-068873" [955b5726-2210-4c67-ada5-795e2a94e3f9] Running
	I1205 21:29:28.404150  342771 system_pods.go:61] "kube-controller-manager-pause-068873" [a702f959-f593-4357-906e-430904da248d] Running
	I1205 21:29:28.404155  342771 system_pods.go:61] "kube-proxy-h8984" [49532404-faec-41e0-8b53-c750a91316a2] Running
	I1205 21:29:28.404158  342771 system_pods.go:61] "kube-scheduler-pause-068873" [dca4fc15-543a-4050-8084-0a9aa96aa4ea] Running
	I1205 21:29:28.404165  342771 system_pods.go:74] duration metric: took 180.613954ms to wait for pod list to return data ...
	I1205 21:29:28.404174  342771 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:29:28.602033  342771 default_sa.go:45] found service account: "default"
	I1205 21:29:28.602063  342771 default_sa.go:55] duration metric: took 197.882866ms for default service account to be created ...
	I1205 21:29:28.602074  342771 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:29:28.805250  342771 system_pods.go:86] 6 kube-system pods found
	I1205 21:29:28.805286  342771 system_pods.go:89] "coredns-7c65d6cfc9-m89x5" [6ab80f4e-1848-432c-894c-213567ce8fe3] Running
	I1205 21:29:28.805292  342771 system_pods.go:89] "etcd-pause-068873" [ef399741-5a7b-476c-9466-c348716eed83] Running
	I1205 21:29:28.805296  342771 system_pods.go:89] "kube-apiserver-pause-068873" [955b5726-2210-4c67-ada5-795e2a94e3f9] Running
	I1205 21:29:28.805300  342771 system_pods.go:89] "kube-controller-manager-pause-068873" [a702f959-f593-4357-906e-430904da248d] Running
	I1205 21:29:28.805304  342771 system_pods.go:89] "kube-proxy-h8984" [49532404-faec-41e0-8b53-c750a91316a2] Running
	I1205 21:29:28.805308  342771 system_pods.go:89] "kube-scheduler-pause-068873" [dca4fc15-543a-4050-8084-0a9aa96aa4ea] Running
	I1205 21:29:28.805316  342771 system_pods.go:126] duration metric: took 203.235929ms to wait for k8s-apps to be running ...
	I1205 21:29:28.805323  342771 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:29:28.805378  342771 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:29:28.820125  342771 system_svc.go:56] duration metric: took 14.78763ms WaitForService to wait for kubelet
	I1205 21:29:28.820161  342771 kubeadm.go:582] duration metric: took 3.17504431s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:29:28.820181  342771 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:29:29.002446  342771 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:29:29.002474  342771 node_conditions.go:123] node cpu capacity is 2
	I1205 21:29:29.002488  342771 node_conditions.go:105] duration metric: took 182.302209ms to run NodePressure ...
	I1205 21:29:29.002502  342771 start.go:241] waiting for startup goroutines ...
	I1205 21:29:29.002509  342771 start.go:246] waiting for cluster config update ...
	I1205 21:29:29.002515  342771 start.go:255] writing updated cluster config ...
	I1205 21:29:29.002814  342771 ssh_runner.go:195] Run: rm -f paused
	I1205 21:29:29.056772  342771 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:29:29.058746  342771 out.go:177] * Done! kubectl is now configured to use "pause-068873" cluster and "default" namespace by default
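
Aside: both clusters above spend most of their startup wait in pod_ready.go, polling each system-critical pod until its Ready condition is True. A condensed client-go sketch of that check (clientset construction is omitted; the pod name in the example call is taken from the log):

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls one pod until its Ready condition is True or the
    // timeout expires; client is an already-built *kubernetes.Clientset.
    func waitPodReady(ctx context.Context, client *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := client.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err == nil {
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
    }

    // e.g. waitPodReady(ctx, client, "kube-system", "etcd-pause-068873", 6*time.Minute)
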
	I1205 21:29:30.739205  344606 start.go:364] duration metric: took 17.299111275s to acquireMachinesLock for "calico-279893"
	I1205 21:29:30.739299  344606 start.go:93] Provisioning new machine with config: &{Name:calico-279893 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:calico-279893 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:29:30.739448  344606 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 21:29:26.253728  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:26.254144  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | unable to find current IP address of domain kubernetes-upgrade-055769 in network mk-kubernetes-upgrade-055769
	I1205 21:29:26.254185  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | I1205 21:29:26.254058  344370 retry.go:31] will retry after 2.839745266s: waiting for machine to come up
	I1205 21:29:29.095324  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:29.096062  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Found IP for machine: 192.168.50.100
	I1205 21:29:29.096092  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has current primary IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:29.096100  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Reserving static IP address...
	I1205 21:29:29.096518  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "kubernetes-upgrade-055769", mac: "52:54:00:b3:72:db", ip: "192.168.50.100"} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:29:29.096542  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | skip adding static IP to network mk-kubernetes-upgrade-055769 - found existing host DHCP lease matching {name: "kubernetes-upgrade-055769", mac: "52:54:00:b3:72:db", ip: "192.168.50.100"}
	I1205 21:29:29.096559  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Reserved static IP address: 192.168.50.100
	I1205 21:29:29.096572  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Waiting for SSH to be available...
	I1205 21:29:29.096580  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | Getting to WaitForSSH function...
	I1205 21:29:29.098724  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:29.099239  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:29:29.099275  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:29.099474  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | Using SSH client type: external
	I1205 21:29:29.099495  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769/id_rsa (-rw-------)
	I1205 21:29:29.099529  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.100 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:29:29.099555  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | About to run SSH command:
	I1205 21:29:29.099590  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | exit 0
	I1205 21:29:29.227237  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | SSH cmd err, output: <nil>: 
	I1205 21:29:29.227829  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetConfigRaw
	I1205 21:29:29.228552  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetIP
	I1205 21:29:29.231649  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:29.231995  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:29:29.232046  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:29.232269  344334 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kubernetes-upgrade-055769/config.json ...
	I1205 21:29:29.232487  344334 machine.go:93] provisionDockerMachine start ...
	I1205 21:29:29.232507  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .DriverName
	I1205 21:29:29.232734  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:29:29.235377  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:29.235736  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:29:29.235768  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:29.235874  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:29:29.236062  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:29:29.236291  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:29:29.236472  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:29:29.236643  344334 main.go:141] libmachine: Using SSH client type: native
	I1205 21:29:29.236824  344334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1205 21:29:29.236835  344334 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:29:29.342446  344334 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:29:29.342490  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetMachineName
	I1205 21:29:29.342829  344334 buildroot.go:166] provisioning hostname "kubernetes-upgrade-055769"
	I1205 21:29:29.342869  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetMachineName
	I1205 21:29:29.343069  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:29:29.345995  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:29.346516  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:29:29.346558  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:29.346777  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:29:29.347003  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:29:29.347231  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:29:29.347427  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:29:29.347643  344334 main.go:141] libmachine: Using SSH client type: native
	I1205 21:29:29.347831  344334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1205 21:29:29.347846  344334 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-055769 && echo "kubernetes-upgrade-055769" | sudo tee /etc/hostname
	I1205 21:29:29.475623  344334 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-055769
	
	I1205 21:29:29.475658  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:29:29.478849  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:29.479253  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:29:29.479287  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:29.479554  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:29:29.479800  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:29:29.479977  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:29:29.480138  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:29:29.480355  344334 main.go:141] libmachine: Using SSH client type: native
	I1205 21:29:29.480580  344334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1205 21:29:29.480607  344334 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-055769' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-055769/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-055769' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:29:29.602416  344334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:29:29.602455  344334 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:29:29.602495  344334 buildroot.go:174] setting up certificates
	I1205 21:29:29.602511  344334 provision.go:84] configureAuth start
	I1205 21:29:29.602526  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetMachineName
	I1205 21:29:29.602848  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetIP
	I1205 21:29:29.605630  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:29.606091  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:29:29.606120  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:29.606328  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:29:29.608792  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:29.609196  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:29:29.609228  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:29.609353  344334 provision.go:143] copyHostCerts
	I1205 21:29:29.609416  344334 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:29:29.609437  344334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:29:29.609506  344334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:29:29.609624  344334 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:29:29.609633  344334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:29:29.609658  344334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:29:29.609728  344334 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:29:29.609735  344334 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:29:29.609756  344334 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:29:29.609817  344334 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-055769 san=[127.0.0.1 192.168.50.100 kubernetes-upgrade-055769 localhost minikube]
	I1205 21:29:30.067415  344334 provision.go:177] copyRemoteCerts
	I1205 21:29:30.067497  344334 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:29:30.067537  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:29:30.070434  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:30.070794  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:29:30.070823  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:30.071043  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:29:30.071382  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:29:30.071609  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:29:30.071802  344334 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769/id_rsa Username:docker}
	I1205 21:29:30.159229  344334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:29:30.191545  344334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1205 21:29:30.216813  344334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 21:29:30.243315  344334 provision.go:87] duration metric: took 640.79048ms to configureAuth
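
Aside: configureAuth above regenerates the machine's server certificate, signed by the local minikube CA, with SANs covering 127.0.0.1, the VM IP, the hostname, localhost and minikube (provision.go:117). A stripped-down standard-library sketch of that step; the key size, validity period and PEM handling here are arbitrary choices for illustration, and loading of the CA pair is assumed to happen elsewhere:

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "time"
    )

    // newServerCert issues a server certificate signed by caCert/caKey with the
    // SANs listed in the log line above. Key size and validity are arbitrary
    // choices for this sketch.
    func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) (certPEM, keyPEM []byte, err error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-055769"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"kubernetes-upgrade-055769", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.100")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            return nil, nil, err
        }
        certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        return certPEM, keyPEM, nil
    }
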
	I1205 21:29:30.243349  344334 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:29:30.243562  344334 config.go:182] Loaded profile config "kubernetes-upgrade-055769": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:29:30.243663  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:29:30.246813  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:30.247173  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:29:30.247201  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:30.247403  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:29:30.247608  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:29:30.247762  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:29:30.247886  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:29:30.248025  344334 main.go:141] libmachine: Using SSH client type: native
	I1205 21:29:30.248236  344334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1205 21:29:30.248259  344334 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:29:30.487919  344334 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:29:30.487960  344334 machine.go:96] duration metric: took 1.255458178s to provisionDockerMachine
	I1205 21:29:30.487977  344334 start.go:293] postStartSetup for "kubernetes-upgrade-055769" (driver="kvm2")
	I1205 21:29:30.487992  344334 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:29:30.488051  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .DriverName
	I1205 21:29:30.488448  344334 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:29:30.488493  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:29:30.491753  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:30.492170  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:29:30.492203  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:30.492429  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:29:30.492662  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:29:30.492859  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:29:30.493031  344334 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769/id_rsa Username:docker}
	I1205 21:29:30.577475  344334 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:29:30.582557  344334 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:29:30.582587  344334 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:29:30.582659  344334 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:29:30.582729  344334 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:29:30.582816  344334 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:29:30.592935  344334 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:29:30.621470  344334 start.go:296] duration metric: took 133.472263ms for postStartSetup
	I1205 21:29:30.621528  344334 fix.go:56] duration metric: took 19.513242627s for fixHost
	I1205 21:29:30.621579  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:29:30.624814  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:30.625210  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:29:30.625245  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:30.625397  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:29:30.625637  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:29:30.625833  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:29:30.626005  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:29:30.626160  344334 main.go:141] libmachine: Using SSH client type: native
	I1205 21:29:30.626353  344334 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.100 22 <nil> <nil>}
	I1205 21:29:30.626369  344334 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:29:30.739024  344334 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434170.708611355
	
	I1205 21:29:30.739055  344334 fix.go:216] guest clock: 1733434170.708611355
	I1205 21:29:30.739068  344334 fix.go:229] Guest: 2024-12-05 21:29:30.708611355 +0000 UTC Remote: 2024-12-05 21:29:30.621545494 +0000 UTC m=+19.697881363 (delta=87.065861ms)
	I1205 21:29:30.739101  344334 fix.go:200] guest clock delta is within tolerance: 87.065861ms
	I1205 21:29:30.739110  344334 start.go:83] releasing machines lock for "kubernetes-upgrade-055769", held for 19.630842588s
	I1205 21:29:30.739143  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .DriverName
	I1205 21:29:30.739492  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetIP
	I1205 21:29:30.743020  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:30.743486  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:29:30.743522  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:30.743778  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .DriverName
	I1205 21:29:30.744560  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .DriverName
	I1205 21:29:30.744722  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .DriverName
	I1205 21:29:30.744810  344334 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:29:30.744868  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:29:30.745137  344334 ssh_runner.go:195] Run: cat /version.json
	I1205 21:29:30.745160  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHHostname
	I1205 21:29:30.748223  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:30.748671  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:30.748699  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:29:30.748724  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:30.748932  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:29:30.749032  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b3:72:db", ip: ""} in network mk-kubernetes-upgrade-055769: {Iface:virbr2 ExpiryTime:2024-12-05 22:29:23 +0000 UTC Type:0 Mac:52:54:00:b3:72:db Iaid: IPaddr:192.168.50.100 Prefix:24 Hostname:kubernetes-upgrade-055769 Clientid:01:52:54:00:b3:72:db}
	I1205 21:29:30.749064  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) DBG | domain kubernetes-upgrade-055769 has defined IP address 192.168.50.100 and MAC address 52:54:00:b3:72:db in network mk-kubernetes-upgrade-055769
	I1205 21:29:30.749130  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:29:30.749239  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHPort
	I1205 21:29:30.749411  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:29:30.749413  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHKeyPath
	I1205 21:29:30.749625  344334 main.go:141] libmachine: (kubernetes-upgrade-055769) Calling .GetSSHUsername
	I1205 21:29:30.749621  344334 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769/id_rsa Username:docker}
	I1205 21:29:30.749762  344334 sshutil.go:53] new ssh client: &{IP:192.168.50.100 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/kubernetes-upgrade-055769/id_rsa Username:docker}
	I1205 21:29:30.864657  344334 ssh_runner.go:195] Run: systemctl --version
	I1205 21:29:30.887123  344334 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	
	
	==> CRI-O <==
	Dec 05 21:29:31 pause-068873 crio[2298]: time="2024-12-05 21:29:31.960248083Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6db20f67-c2ba-451b-b3ec-cdf5d066847d name=/runtime.v1.RuntimeService/Version
	Dec 05 21:29:31 pause-068873 crio[2298]: time="2024-12-05 21:29:31.961561453Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=527d0fad-8e70-4705-b713-43807eb7ceac name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:29:31 pause-068873 crio[2298]: time="2024-12-05 21:29:31.962197507Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733434171962157027,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=527d0fad-8e70-4705-b713-43807eb7ceac name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:29:31 pause-068873 crio[2298]: time="2024-12-05 21:29:31.962935770Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=428044b2-bb88-41c8-8c4b-93aa5ef809ec name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:29:31 pause-068873 crio[2298]: time="2024-12-05 21:29:31.963121216Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=428044b2-bb88-41c8-8c4b-93aa5ef809ec name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:29:31 pause-068873 crio[2298]: time="2024-12-05 21:29:31.963538514Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:900d051beb4b3a11d2c7411b4584d9835ed1d70e9dad87c0fc4c63772fd7c952,PodSandboxId:8f6a566611e8e4e29d688480f14d61140e5f2c61b5763fd3081fcd42d68ae43e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733434146547623998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d370492f1798b8ef95dcf4d25f3b7822,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3185656b82ff3ab93b94ab9f18a3e35d93c8126b9947c8545d8597d3e195287c,PodSandboxId:a9a7b1e38000ab90e23050564aa27bc937cf5433f17ed36390409d059df9e876,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733434146530948019,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a91f119bc5b1d26a4eebc093c893c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6529c21096477198df6ea8c87227fc1058f6dbd903460837c80b85096c8de375,PodSandboxId:b3e1dc6d7a1bd19e1f2bec85519d33b147313a40eae884c48c03b31252700e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733434146506426131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50860d75af521a7befd511acd7d7a982,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ff7b87acb7e28b3b08f6a068e5b2ad715e84b8e5299cf646493a6679127570,PodSandboxId:742c682aba1f3afa0eb6c9bd517699292e943661b79811caf15685dd300293a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733434146490207927,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf514bd8f9b21a7f8098a2aa18e8cb14,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a264dbde443d01f8aac6dbf1b5a5bc8be55f24bc7b620d83df568162568d87e3,PodSandboxId:235cd1baa68378fdc3b42b8732accd478cf9be77908f67bea7fc2f7f7b797864,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733434133541697951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m89x5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab80f4e-1848-432c-894c-213567ce8fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0c79898bdc6f355ad527160dd5503e0036465e0d840b040c0b4bb8d0700811d,PodSandboxId:78eda860655e08e7aec83c8fb60fe9ac39346afc651d9d2c6423eb94bbfdc0f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733434132839377072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49532404-faec-41e0-8b53-c750a91316a2,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e5d1d60f086d32798fa3df1e5c9927d206565ed86871e1751c8692cd4952dc8,PodSandboxId:742c682aba1f3afa0eb6c9bd517699292e943661b79811caf15685dd300293a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733434132763087760,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf514bd8f9b21a7f8098a2aa18e8cb14,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dea54f3f077c4172cb2b439c4b5ad262b0b9c13ef0822feb2a281f680a745a27,PodSandboxId:8f6a566611e8e4e29d688480f14d61140e5f2c61b5763fd3081fcd42d68ae43e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733434132760325518,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d370492f1798b8ef95dcf4d25f3b7822,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85815ac26a7aed256a15fd57f62ca745ed7ef46f0f47afcde087af82f2abad8d,PodSandboxId:a9a7b1e38000ab90e23050564aa27bc937cf5433f17ed36390409d059df9e876,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733434132653591617,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a91f119bc5b1d26a4eebc093c893c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dc087a55ad29c8cd45df03a1adf6319c36abaada5002a4c7d26ab651bb65860,PodSandboxId:b3e1dc6d7a1bd19e1f2bec85519d33b147313a40eae884c48c03b31252700e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733434132614498298,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50860d75af521a7befd511acd7d7a982,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b85bb23ec9b1c69ca73efd49c36774e0b016015008fba6e884c4bc0eea3ebc,PodSandboxId:7ffff40b1fb85807a5f596156ff2ec6ea1ad4a8dfd704fd7a3949d0ea30e9084,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733434087232651883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m89x5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab80f4e-1848-432c-894c-213567ce8fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3892d8954e0a332b3308223476fb5fd6d532234078eaeaf648642e6f90186146,PodSandboxId:de24ba68f22e4f767cc25b3429d039ebefceadaf41f56b01c4e4e79f30721f80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733434087033610961,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8984,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 49532404-faec-41e0-8b53-c750a91316a2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=428044b2-bb88-41c8-8c4b-93aa5ef809ec name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:29:32 pause-068873 crio[2298]: time="2024-12-05 21:29:32.015310396Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ffa40b6f-8ba2-43e0-ba47-d517e4887af1 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:29:32 pause-068873 crio[2298]: time="2024-12-05 21:29:32.015417176Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ffa40b6f-8ba2-43e0-ba47-d517e4887af1 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:29:32 pause-068873 crio[2298]: time="2024-12-05 21:29:32.018287039Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=886c457f-3103-49cc-84ae-ba7c780d22f2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:29:32 pause-068873 crio[2298]: time="2024-12-05 21:29:32.018846412Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733434172018807053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=886c457f-3103-49cc-84ae-ba7c780d22f2 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:29:32 pause-068873 crio[2298]: time="2024-12-05 21:29:32.020307823Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b2149c3b-525c-4177-b94b-15624d48fc93 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:29:32 pause-068873 crio[2298]: time="2024-12-05 21:29:32.020396795Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b2149c3b-525c-4177-b94b-15624d48fc93 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:29:32 pause-068873 crio[2298]: time="2024-12-05 21:29:32.021473912Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:900d051beb4b3a11d2c7411b4584d9835ed1d70e9dad87c0fc4c63772fd7c952,PodSandboxId:8f6a566611e8e4e29d688480f14d61140e5f2c61b5763fd3081fcd42d68ae43e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733434146547623998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d370492f1798b8ef95dcf4d25f3b7822,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3185656b82ff3ab93b94ab9f18a3e35d93c8126b9947c8545d8597d3e195287c,PodSandboxId:a9a7b1e38000ab90e23050564aa27bc937cf5433f17ed36390409d059df9e876,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733434146530948019,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a91f119bc5b1d26a4eebc093c893c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6529c21096477198df6ea8c87227fc1058f6dbd903460837c80b85096c8de375,PodSandboxId:b3e1dc6d7a1bd19e1f2bec85519d33b147313a40eae884c48c03b31252700e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733434146506426131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50860d75af521a7befd511acd7d7a982,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ff7b87acb7e28b3b08f6a068e5b2ad715e84b8e5299cf646493a6679127570,PodSandboxId:742c682aba1f3afa0eb6c9bd517699292e943661b79811caf15685dd300293a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733434146490207927,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf514bd8f9b21a7f8098a2aa18e8cb14,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a264dbde443d01f8aac6dbf1b5a5bc8be55f24bc7b620d83df568162568d87e3,PodSandboxId:235cd1baa68378fdc3b42b8732accd478cf9be77908f67bea7fc2f7f7b797864,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733434133541697951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m89x5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab80f4e-1848-432c-894c-213567ce8fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0c79898bdc6f355ad527160dd5503e0036465e0d840b040c0b4bb8d0700811d,PodSandboxId:78eda860655e08e7aec83c8fb60fe9ac39346afc651d9d2c6423eb94bbfdc0f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733434132839377072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49532404-faec-41e0-8b53-c750a91316a2,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e5d1d60f086d32798fa3df1e5c9927d206565ed86871e1751c8692cd4952dc8,PodSandboxId:742c682aba1f3afa0eb6c9bd517699292e943661b79811caf15685dd300293a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733434132763087760,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf514bd8f9b21a7f8098a2aa18e8cb14,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dea54f3f077c4172cb2b439c4b5ad262b0b9c13ef0822feb2a281f680a745a27,PodSandboxId:8f6a566611e8e4e29d688480f14d61140e5f2c61b5763fd3081fcd42d68ae43e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733434132760325518,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d370492f1798b8ef95dcf4d25f3b7822,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85815ac26a7aed256a15fd57f62ca745ed7ef46f0f47afcde087af82f2abad8d,PodSandboxId:a9a7b1e38000ab90e23050564aa27bc937cf5433f17ed36390409d059df9e876,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733434132653591617,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a91f119bc5b1d26a4eebc093c893c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dc087a55ad29c8cd45df03a1adf6319c36abaada5002a4c7d26ab651bb65860,PodSandboxId:b3e1dc6d7a1bd19e1f2bec85519d33b147313a40eae884c48c03b31252700e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733434132614498298,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50860d75af521a7befd511acd7d7a982,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b85bb23ec9b1c69ca73efd49c36774e0b016015008fba6e884c4bc0eea3ebc,PodSandboxId:7ffff40b1fb85807a5f596156ff2ec6ea1ad4a8dfd704fd7a3949d0ea30e9084,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733434087232651883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m89x5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab80f4e-1848-432c-894c-213567ce8fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3892d8954e0a332b3308223476fb5fd6d532234078eaeaf648642e6f90186146,PodSandboxId:de24ba68f22e4f767cc25b3429d039ebefceadaf41f56b01c4e4e79f30721f80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733434087033610961,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8984,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 49532404-faec-41e0-8b53-c750a91316a2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b2149c3b-525c-4177-b94b-15624d48fc93 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:29:32 pause-068873 crio[2298]: time="2024-12-05 21:29:32.066447950Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d56b4420-ca0f-4ded-a002-4987afb12b30 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 21:29:32 pause-068873 crio[2298]: time="2024-12-05 21:29:32.066713334Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:235cd1baa68378fdc3b42b8732accd478cf9be77908f67bea7fc2f7f7b797864,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-m89x5,Uid:6ab80f4e-1848-432c-894c-213567ce8fe3,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1733434132581736340,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-m89x5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab80f4e-1848-432c-894c-213567ce8fe3,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T21:28:06.448418898Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:8f6a566611e8e4e29d688480f14d61140e5f2c61b5763fd3081fcd42d68ae43e,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-068873,Uid:d370492f1798b8ef95dcf4d25f3b7822,Namespace:kube-system,
Attempt:1,},State:SANDBOX_READY,CreatedAt:1733434132376128222,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d370492f1798b8ef95dcf4d25f3b7822,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.229:8443,kubernetes.io/config.hash: d370492f1798b8ef95dcf4d25f3b7822,kubernetes.io/config.seen: 2024-12-05T21:28:00.988756612Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:78eda860655e08e7aec83c8fb60fe9ac39346afc651d9d2c6423eb94bbfdc0f4,Metadata:&PodSandboxMetadata{Name:kube-proxy-h8984,Uid:49532404-faec-41e0-8b53-c750a91316a2,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1733434132361557248,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-h8984,io.kubernetes.pod.namespac
e: kube-system,io.kubernetes.pod.uid: 49532404-faec-41e0-8b53-c750a91316a2,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T21:28:05.980713195Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b3e1dc6d7a1bd19e1f2bec85519d33b147313a40eae884c48c03b31252700e0d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-068873,Uid:50860d75af521a7befd511acd7d7a982,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1733434132356653427,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50860d75af521a7befd511acd7d7a982,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 50860d75af521a7befd511acd7d7a982,kubernetes.io/config.seen: 2024-12-05T21:28:00.988781710Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox
{Id:742c682aba1f3afa0eb6c9bd517699292e943661b79811caf15685dd300293a4,Metadata:&PodSandboxMetadata{Name:etcd-pause-068873,Uid:cf514bd8f9b21a7f8098a2aa18e8cb14,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1733434132328855055,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf514bd8f9b21a7f8098a2aa18e8cb14,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.229:2379,kubernetes.io/config.hash: cf514bd8f9b21a7f8098a2aa18e8cb14,kubernetes.io/config.seen: 2024-12-05T21:28:00.988751651Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a9a7b1e38000ab90e23050564aa27bc937cf5433f17ed36390409d059df9e876,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-068873,Uid:7a91f119bc5b1d26a4eebc093c893c7c,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1733434132312704211,Lab
els:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a91f119bc5b1d26a4eebc093c893c7c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7a91f119bc5b1d26a4eebc093c893c7c,kubernetes.io/config.seen: 2024-12-05T21:28:00.988782936Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=d56b4420-ca0f-4ded-a002-4987afb12b30 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 21:29:32 pause-068873 crio[2298]: time="2024-12-05 21:29:32.067822290Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81153f4b-9b63-4a70-b657-93ee7ff97a3f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:29:32 pause-068873 crio[2298]: time="2024-12-05 21:29:32.067912676Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81153f4b-9b63-4a70-b657-93ee7ff97a3f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:29:32 pause-068873 crio[2298]: time="2024-12-05 21:29:32.068111764Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:900d051beb4b3a11d2c7411b4584d9835ed1d70e9dad87c0fc4c63772fd7c952,PodSandboxId:8f6a566611e8e4e29d688480f14d61140e5f2c61b5763fd3081fcd42d68ae43e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733434146547623998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d370492f1798b8ef95dcf4d25f3b7822,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3185656b82ff3ab93b94ab9f18a3e35d93c8126b9947c8545d8597d3e195287c,PodSandboxId:a9a7b1e38000ab90e23050564aa27bc937cf5433f17ed36390409d059df9e876,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733434146530948019,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a91f119bc5b1d26a4eebc093c893c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6529c21096477198df6ea8c87227fc1058f6dbd903460837c80b85096c8de375,PodSandboxId:b3e1dc6d7a1bd19e1f2bec85519d33b147313a40eae884c48c03b31252700e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733434146506426131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50860d75af521a7befd511acd7d7a982,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ff7b87acb7e28b3b08f6a068e5b2ad715e84b8e5299cf646493a6679127570,PodSandboxId:742c682aba1f3afa0eb6c9bd517699292e943661b79811caf15685dd300293a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733434146490207927,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf514bd8f9b21a7f8098a2aa18e8cb14,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a264dbde443d01f8aac6dbf1b5a5bc8be55f24bc7b620d83df568162568d87e3,PodSandboxId:235cd1baa68378fdc3b42b8732accd478cf9be77908f67bea7fc2f7f7b797864,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733434133541697951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m89x5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab80f4e-1848-432c-894c-213567ce8fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0c79898bdc6f355ad527160dd5503e0036465e0d840b040c0b4bb8d0700811d,PodSandboxId:78eda860655e08e7aec83c8fb60fe9ac39346afc651d9d2c6423eb94bbfdc0f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733434132839377072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49532404-faec-41e0-8b53-c750a91316a2,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81153f4b-9b63-4a70-b657-93ee7ff97a3f name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:29:32 pause-068873 crio[2298]: time="2024-12-05 21:29:32.082933411Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0d2bcc6f-87c5-4674-ac4b-06062d8bee74 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:29:32 pause-068873 crio[2298]: time="2024-12-05 21:29:32.083090898Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0d2bcc6f-87c5-4674-ac4b-06062d8bee74 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:29:32 pause-068873 crio[2298]: time="2024-12-05 21:29:32.084423557Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f14c6438-502b-47ce-bdfc-e23670a16e9b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:29:32 pause-068873 crio[2298]: time="2024-12-05 21:29:32.085003928Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733434172084966743,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f14c6438-502b-47ce-bdfc-e23670a16e9b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:29:32 pause-068873 crio[2298]: time="2024-12-05 21:29:32.085887212Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a9990784-378e-45a1-8fe7-f026bd91e634 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:29:32 pause-068873 crio[2298]: time="2024-12-05 21:29:32.085946948Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a9990784-378e-45a1-8fe7-f026bd91e634 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:29:32 pause-068873 crio[2298]: time="2024-12-05 21:29:32.086232123Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:900d051beb4b3a11d2c7411b4584d9835ed1d70e9dad87c0fc4c63772fd7c952,PodSandboxId:8f6a566611e8e4e29d688480f14d61140e5f2c61b5763fd3081fcd42d68ae43e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733434146547623998,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d370492f1798b8ef95dcf4d25f3b7822,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.term
inationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3185656b82ff3ab93b94ab9f18a3e35d93c8126b9947c8545d8597d3e195287c,PodSandboxId:a9a7b1e38000ab90e23050564aa27bc937cf5433f17ed36390409d059df9e876,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733434146530948019,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a91f119bc5b1d26a4eebc093c893c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath
: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6529c21096477198df6ea8c87227fc1058f6dbd903460837c80b85096c8de375,PodSandboxId:b3e1dc6d7a1bd19e1f2bec85519d33b147313a40eae884c48c03b31252700e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733434146506426131,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50860d75af521a7befd511acd7d7a982,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMe
ssagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:88ff7b87acb7e28b3b08f6a068e5b2ad715e84b8e5299cf646493a6679127570,PodSandboxId:742c682aba1f3afa0eb6c9bd517699292e943661b79811caf15685dd300293a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733434146490207927,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf514bd8f9b21a7f8098a2aa18e8cb14,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.co
ntainer.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a264dbde443d01f8aac6dbf1b5a5bc8be55f24bc7b620d83df568162568d87e3,PodSandboxId:235cd1baa68378fdc3b42b8732accd478cf9be77908f67bea7fc2f7f7b797864,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733434133541697951,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m89x5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab80f4e-1848-432c-894c-213567ce8fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"
},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b0c79898bdc6f355ad527160dd5503e0036465e0d840b040c0b4bb8d0700811d,PodSandboxId:78eda860655e08e7aec83c8fb60fe9ac39346afc651d9d2c6423eb94bbfdc0f4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733434132839377072,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8984,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 49532404-faec-41e0-8b53-c750a91316a2,},Annotations:map[string]string{io
.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9e5d1d60f086d32798fa3df1e5c9927d206565ed86871e1751c8692cd4952dc8,PodSandboxId:742c682aba1f3afa0eb6c9bd517699292e943661b79811caf15685dd300293a4,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_EXITED,CreatedAt:1733434132763087760,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf514bd8f9b21a7f8098a2aa18e8cb14,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernete
s.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dea54f3f077c4172cb2b439c4b5ad262b0b9c13ef0822feb2a281f680a745a27,PodSandboxId:8f6a566611e8e4e29d688480f14d61140e5f2c61b5763fd3081fcd42d68ae43e,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733434132760325518,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d370492f1798b8ef95dcf4d25f3b7822,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restart
Count: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:85815ac26a7aed256a15fd57f62ca745ed7ef46f0f47afcde087af82f2abad8d,PodSandboxId:a9a7b1e38000ab90e23050564aa27bc937cf5433f17ed36390409d059df9e876,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_EXITED,CreatedAt:1733434132653591617,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7a91f119bc5b1d26a4eebc093c893c7c,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kuberne
tes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5dc087a55ad29c8cd45df03a1adf6319c36abaada5002a4c7d26ab651bb65860,PodSandboxId:b3e1dc6d7a1bd19e1f2bec85519d33b147313a40eae884c48c03b31252700e0d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_EXITED,CreatedAt:1733434132614498298,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-068873,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50860d75af521a7befd511acd7d7a982,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47b85bb23ec9b1c69ca73efd49c36774e0b016015008fba6e884c4bc0eea3ebc,PodSandboxId:7ffff40b1fb85807a5f596156ff2ec6ea1ad4a8dfd704fd7a3949d0ea30e9084,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_EXITED,CreatedAt:1733434087232651883,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-m89x5,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6ab80f4e-1848-432c-894c-213567ce8fe3,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"p
rotocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3892d8954e0a332b3308223476fb5fd6d532234078eaeaf648642e6f90186146,PodSandboxId:de24ba68f22e4f767cc25b3429d039ebefceadaf41f56b01c4e4e79f30721f80,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_EXITED,CreatedAt:1733434087033610961,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h8984,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 49532404-faec-41e0-8b53-c750a91316a2,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a9990784-378e-45a1-8fe7-f026bd91e634 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	900d051beb4b3       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   25 seconds ago       Running             kube-apiserver            2                   8f6a566611e8e       kube-apiserver-pause-068873
	3185656b82ff3       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   25 seconds ago       Running             kube-scheduler            2                   a9a7b1e38000a       kube-scheduler-pause-068873
	6529c21096477       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   25 seconds ago       Running             kube-controller-manager   2                   b3e1dc6d7a1bd       kube-controller-manager-pause-068873
	88ff7b87acb7e       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   25 seconds ago       Running             etcd                      2                   742c682aba1f3       etcd-pause-068873
	a264dbde443d0       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   38 seconds ago       Running             coredns                   1                   235cd1baa6837       coredns-7c65d6cfc9-m89x5
	b0c79898bdc6f       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   39 seconds ago       Running             kube-proxy                1                   78eda860655e0       kube-proxy-h8984
	9e5d1d60f086d       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   39 seconds ago       Exited              etcd                      1                   742c682aba1f3       etcd-pause-068873
	dea54f3f077c4       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   39 seconds ago       Exited              kube-apiserver            1                   8f6a566611e8e       kube-apiserver-pause-068873
	85815ac26a7ae       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   39 seconds ago       Exited              kube-scheduler            1                   a9a7b1e38000a       kube-scheduler-pause-068873
	5dc087a55ad29       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   39 seconds ago       Exited              kube-controller-manager   1                   b3e1dc6d7a1bd       kube-controller-manager-pause-068873
	47b85bb23ec9b       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   About a minute ago   Exited              coredns                   0                   7ffff40b1fb85       coredns-7c65d6cfc9-m89x5
	3892d8954e0a3       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   About a minute ago   Exited              kube-proxy                0                   de24ba68f22e4       kube-proxy-h8984
	
	
	==> coredns [47b85bb23ec9b1c69ca73efd49c36774e0b016015008fba6e884c4bc0eea3ebc] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:51231 - 52195 "HINFO IN 9055947614407638998.2670647647563321197. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024052667s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a264dbde443d01f8aac6dbf1b5a5bc8be55f24bc7b620d83df568162568d87e3] <==
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:52838->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:52850->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[2014967982]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Dec-2024 21:28:53.955) (total time: 10548ms):
	Trace[2014967982]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:52850->10.96.0.1:443: read: connection reset by peer 10548ms (21:29:04.503)
	Trace[2014967982]: [10.548089105s] [10.548089105s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:52850->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:52828->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: Trace[433478169]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (05-Dec-2024 21:28:53.953) (total time: 10550ms):
	Trace[433478169]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:52828->10.96.0.1:443: read: connection reset by peer 10550ms (21:29:04.503)
	Trace[433478169]: [10.550556975s] [10.550556975s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.4:52828->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               pause-068873
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-068873
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=pause-068873
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T21_28_01_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 21:27:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-068873
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 21:29:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 21:29:09 +0000   Thu, 05 Dec 2024 21:27:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 21:29:09 +0000   Thu, 05 Dec 2024 21:27:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 21:29:09 +0000   Thu, 05 Dec 2024 21:27:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 21:29:09 +0000   Thu, 05 Dec 2024 21:28:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.229
	  Hostname:    pause-068873
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015704Ki
	  pods:               110
	System Info:
	  Machine ID:                 7a5c3137534249e6a0c2bf6edb14181c
	  System UUID:                7a5c3137-5342-49e6-a0c2-bf6edb14181c
	  Boot ID:                    ebff1754-a929-4f7a-845b-5c160559166e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-m89x5                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     86s
	  kube-system                 etcd-pause-068873                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         91s
	  kube-system                 kube-apiserver-pause-068873             250m (12%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-controller-manager-pause-068873    200m (10%)    0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-proxy-h8984                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-pause-068873             100m (5%)     0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 84s                kube-proxy       
	  Normal  Starting                 22s                kube-proxy       
	  Normal  Starting                 92s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     91s                kubelet          Node pause-068873 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  91s                kubelet          Node pause-068873 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s                kubelet          Node pause-068873 status is now: NodeHasNoDiskPressure
	  Normal  NodeReady                91s                kubelet          Node pause-068873 status is now: NodeReady
	  Normal  NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           87s                node-controller  Node pause-068873 event: Registered Node pause-068873 in Controller
	  Normal  Starting                 26s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s (x8 over 26s)  kubelet          Node pause-068873 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s (x8 over 26s)  kubelet          Node pause-068873 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s (x7 over 26s)  kubelet          Node pause-068873 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           20s                node-controller  Node pause-068873 event: Registered Node pause-068873 in Controller
	
	
	==> dmesg <==
	[  +0.056850] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.064884] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.192166] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.116968] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.297655] systemd-fstab-generator[652]: Ignoring "noauto" option for root device
	[  +4.206175] systemd-fstab-generator[743]: Ignoring "noauto" option for root device
	[  +4.777100] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.067964] kauditd_printk_skb: 158 callbacks suppressed
	[Dec 5 21:28] systemd-fstab-generator[1214]: Ignoring "noauto" option for root device
	[  +0.074308] kauditd_printk_skb: 69 callbacks suppressed
	[  +5.343707] systemd-fstab-generator[1337]: Ignoring "noauto" option for root device
	[  +0.125897] kauditd_printk_skb: 18 callbacks suppressed
	[  +5.581825] kauditd_printk_skb: 103 callbacks suppressed
	[ +30.771423] systemd-fstab-generator[2222]: Ignoring "noauto" option for root device
	[  +0.131526] systemd-fstab-generator[2234]: Ignoring "noauto" option for root device
	[  +0.161634] systemd-fstab-generator[2248]: Ignoring "noauto" option for root device
	[  +0.137428] systemd-fstab-generator[2260]: Ignoring "noauto" option for root device
	[  +0.276580] systemd-fstab-generator[2288]: Ignoring "noauto" option for root device
	[  +8.309436] systemd-fstab-generator[2410]: Ignoring "noauto" option for root device
	[  +0.074645] kauditd_printk_skb: 100 callbacks suppressed
	[Dec 5 21:29] kauditd_printk_skb: 86 callbacks suppressed
	[  +4.693453] systemd-fstab-generator[3183]: Ignoring "noauto" option for root device
	[  +0.340258] kauditd_printk_skb: 15 callbacks suppressed
	[  +7.043578] kauditd_printk_skb: 16 callbacks suppressed
	[ +12.491115] systemd-fstab-generator[3525]: Ignoring "noauto" option for root device
	
	
	==> etcd [88ff7b87acb7e28b3b08f6a068e5b2ad715e84b8e5299cf646493a6679127570] <==
	{"level":"info","ts":"2024-12-05T21:29:06.884179Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T21:29:06.894933Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-05T21:29:06.892178Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"82811d29f3e953c3","local-member-id":"c8f87299e6c07be2","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T21:29:06.896172Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.229:2380"}
	{"level":"info","ts":"2024-12-05T21:29:06.910290Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.229:2380"}
	{"level":"info","ts":"2024-12-05T21:29:06.910482Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T21:29:06.910805Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"c8f87299e6c07be2","initial-advertise-peer-urls":["https://192.168.72.229:2380"],"listen-peer-urls":["https://192.168.72.229:2380"],"advertise-client-urls":["https://192.168.72.229:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.229:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-05T21:29:06.910979Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-05T21:29:07.919185Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c8f87299e6c07be2 is starting a new election at term 2"}
	{"level":"info","ts":"2024-12-05T21:29:07.919296Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c8f87299e6c07be2 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-05T21:29:07.919343Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c8f87299e6c07be2 received MsgPreVoteResp from c8f87299e6c07be2 at term 2"}
	{"level":"info","ts":"2024-12-05T21:29:07.919385Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c8f87299e6c07be2 became candidate at term 3"}
	{"level":"info","ts":"2024-12-05T21:29:07.919409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c8f87299e6c07be2 received MsgVoteResp from c8f87299e6c07be2 at term 3"}
	{"level":"info","ts":"2024-12-05T21:29:07.919443Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"c8f87299e6c07be2 became leader at term 3"}
	{"level":"info","ts":"2024-12-05T21:29:07.919468Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: c8f87299e6c07be2 elected leader c8f87299e6c07be2 at term 3"}
	{"level":"info","ts":"2024-12-05T21:29:07.924976Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"c8f87299e6c07be2","local-member-attributes":"{Name:pause-068873 ClientURLs:[https://192.168.72.229:2379]}","request-path":"/0/members/c8f87299e6c07be2/attributes","cluster-id":"82811d29f3e953c3","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T21:29:07.925232Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T21:29:07.926185Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T21:29:07.926938Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T21:29:07.927087Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T21:29:07.927293Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T21:29:07.927327Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-05T21:29:07.927801Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T21:29:07.928575Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.229:2379"}
	{"level":"info","ts":"2024-12-05T21:29:10.267220Z","caller":"traceutil/trace.go:171","msg":"trace[305759639] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"108.379247ms","start":"2024-12-05T21:29:10.158811Z","end":"2024-12-05T21:29:10.267190Z","steps":["trace[305759639] 'process raft request'  (duration: 103.740479ms)"],"step_count":1}
	
	
	==> etcd [9e5d1d60f086d32798fa3df1e5c9927d206565ed86871e1751c8692cd4952dc8] <==
	
	
	==> kernel <==
	 21:29:32 up 2 min,  0 users,  load average: 0.50, 0.22, 0.08
	Linux pause-068873 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [900d051beb4b3a11d2c7411b4584d9835ed1d70e9dad87c0fc4c63772fd7c952] <==
	I1205 21:29:09.453383       1 aggregator.go:171] initial CRD sync complete...
	I1205 21:29:09.453523       1 autoregister_controller.go:144] Starting autoregister controller
	I1205 21:29:09.453576       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1205 21:29:09.454204       1 shared_informer.go:320] Caches are synced for configmaps
	I1205 21:29:09.491095       1 shared_informer.go:320] Caches are synced for node_authorizer
	I1205 21:29:09.510629       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1205 21:29:09.511872       1 policy_source.go:224] refreshing policies
	I1205 21:29:09.552651       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1205 21:29:09.552697       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1205 21:29:09.553207       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I1205 21:29:09.553681       1 cache.go:39] Caches are synced for autoregister controller
	I1205 21:29:09.553917       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1205 21:29:09.554007       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1205 21:29:09.555162       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1205 21:29:09.565789       1 handler_discovery.go:450] Starting ResourceDiscoveryManager
	I1205 21:29:09.571515       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	E1205 21:29:09.587332       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1205 21:29:10.348176       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1205 21:29:10.895763       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1205 21:29:10.916842       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1205 21:29:10.964804       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1205 21:29:11.008839       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1205 21:29:11.022309       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1205 21:29:13.116495       1 controller.go:615] quota admission added evaluator for: endpoints
	I1205 21:29:13.164582       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [dea54f3f077c4172cb2b439c4b5ad262b0b9c13ef0822feb2a281f680a745a27] <==
	I1205 21:28:53.306207       1 options.go:228] external host was not specified, using 192.168.72.229
	I1205 21:28:53.312317       1 server.go:142] Version: v1.31.2
	I1205 21:28:53.312377       1 server.go:144] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 21:28:54.143140       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	W1205 21:28:54.143871       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:28:54.143955       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I1205 21:28:54.154095       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I1205 21:28:54.159149       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I1205 21:28:54.159234       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I1205 21:28:54.159497       1 instance.go:232] Using reconciler: lease
	W1205 21:28:54.162592       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:28:55.144486       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:28:55.144554       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:28:55.163970       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:28:56.580452       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:28:56.747206       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:28:56.918496       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:28:58.979782       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:28:59.649343       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:28:59.734875       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:29:03.197852       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [5dc087a55ad29c8cd45df03a1adf6319c36abaada5002a4c7d26ab651bb65860] <==
	I1205 21:28:53.803703       1 serving.go:386] Generated self-signed cert in-memory
	I1205 21:28:54.611369       1 controllermanager.go:197] "Starting" version="v1.31.2"
	I1205 21:28:54.611409       1 controllermanager.go:199] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 21:28:54.612745       1 dynamic_cafile_content.go:160] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I1205 21:28:54.612939       1 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I1205 21:28:54.612947       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I1205 21:28:54.612962       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	
	
	==> kube-controller-manager [6529c21096477198df6ea8c87227fc1058f6dbd903460837c80b85096c8de375] <==
	I1205 21:29:12.813077       1 shared_informer.go:320] Caches are synced for endpoint
	I1205 21:29:12.819506       1 shared_informer.go:320] Caches are synced for PV protection
	I1205 21:29:12.822844       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I1205 21:29:12.825398       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1205 21:29:12.830250       1 shared_informer.go:320] Caches are synced for TTL
	I1205 21:29:12.830323       1 shared_informer.go:320] Caches are synced for ReplicationController
	I1205 21:29:12.830632       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="67.983014ms"
	I1205 21:29:12.831236       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="110.233µs"
	I1205 21:29:12.836003       1 shared_informer.go:320] Caches are synced for taint
	I1205 21:29:12.836159       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1205 21:29:12.836303       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-068873"
	I1205 21:29:12.836374       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1205 21:29:12.839319       1 shared_informer.go:320] Caches are synced for GC
	I1205 21:29:12.842877       1 shared_informer.go:320] Caches are synced for daemon sets
	I1205 21:29:12.890315       1 shared_informer.go:320] Caches are synced for resource quota
	I1205 21:29:12.894494       1 shared_informer.go:320] Caches are synced for deployment
	I1205 21:29:12.918649       1 shared_informer.go:320] Caches are synced for resource quota
	I1205 21:29:12.947632       1 shared_informer.go:320] Caches are synced for disruption
	I1205 21:29:13.000319       1 shared_informer.go:320] Caches are synced for attach detach
	I1205 21:29:13.016158       1 shared_informer.go:320] Caches are synced for persistent volume
	I1205 21:29:13.447210       1 shared_informer.go:320] Caches are synced for garbage collector
	I1205 21:29:13.463196       1 shared_informer.go:320] Caches are synced for garbage collector
	I1205 21:29:13.463225       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1205 21:29:20.725689       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="19.105952ms"
	I1205 21:29:20.725849       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="66.034µs"
	
	
	==> kube-proxy [3892d8954e0a332b3308223476fb5fd6d532234078eaeaf648642e6f90186146] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 21:28:07.474809       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 21:28:07.514363       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.229"]
	E1205 21:28:07.514608       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 21:28:07.546295       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 21:28:07.546343       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 21:28:07.546369       1 server_linux.go:169] "Using iptables Proxier"
	I1205 21:28:07.550378       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 21:28:07.551352       1 server.go:483] "Version info" version="v1.31.2"
	I1205 21:28:07.551381       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 21:28:07.553624       1 config.go:105] "Starting endpoint slice config controller"
	I1205 21:28:07.554123       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 21:28:07.554190       1 config.go:199] "Starting service config controller"
	I1205 21:28:07.554209       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 21:28:07.555412       1 config.go:328] "Starting node config controller"
	I1205 21:28:07.555447       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 21:28:07.654675       1 shared_informer.go:320] Caches are synced for service config
	I1205 21:28:07.654747       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 21:28:07.656093       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [b0c79898bdc6f355ad527160dd5503e0036465e0d840b040c0b4bb8d0700811d] <==
	 >
	E1205 21:28:54.309209       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 21:29:04.502355       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-068873\": dial tcp 192.168.72.229:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.72.229:38798->192.168.72.229:8443: read: connection reset by peer"
	E1205 21:29:05.646377       1 server.go:666] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/pause-068873\": dial tcp 192.168.72.229:8443: connect: connection refused"
	I1205 21:29:09.501314       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.229"]
	E1205 21:29:09.501536       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 21:29:09.583253       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 21:29:09.583373       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 21:29:09.583412       1 server_linux.go:169] "Using iptables Proxier"
	I1205 21:29:09.586444       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 21:29:09.586702       1 server.go:483] "Version info" version="v1.31.2"
	I1205 21:29:09.586726       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 21:29:09.588265       1 config.go:199] "Starting service config controller"
	I1205 21:29:09.588299       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 21:29:09.588330       1 config.go:105] "Starting endpoint slice config controller"
	I1205 21:29:09.588334       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 21:29:09.588770       1 config.go:328] "Starting node config controller"
	I1205 21:29:09.588795       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 21:29:09.688422       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 21:29:09.688488       1 shared_informer.go:320] Caches are synced for service config
	I1205 21:29:09.689139       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [3185656b82ff3ab93b94ab9f18a3e35d93c8126b9947c8545d8597d3e195287c] <==
	I1205 21:29:07.540492       1 serving.go:386] Generated self-signed cert in-memory
	W1205 21:29:09.480530       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 21:29:09.480716       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 21:29:09.480825       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 21:29:09.480866       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 21:29:09.525003       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1205 21:29:09.528104       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 21:29:09.530803       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 21:29:09.530888       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 21:29:09.531597       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1205 21:29:09.531700       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 21:29:09.631619       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [85815ac26a7aed256a15fd57f62ca745ed7ef46f0f47afcde087af82f2abad8d] <==
	I1205 21:28:53.946204       1 serving.go:386] Generated self-signed cert in-memory
	W1205 21:29:04.501301       1 authentication.go:370] Error looking up in-cluster authentication configuration: Get "https://192.168.72.229:8443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication": dial tcp 192.168.72.229:8443: connect: connection refused - error from a previous attempt: read tcp 192.168.72.229:38782->192.168.72.229:8443: read: connection reset by peer
	W1205 21:29:04.501339       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 21:29:04.501348       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 21:29:04.514848       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1205 21:29:04.514892       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	E1205 21:29:04.514912       1 event.go:401] "Unable start event watcher (will not retry!)" err="broadcaster already stopped"
	I1205 21:29:04.517001       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	E1205 21:29:04.517260       1 server.go:267] "waiting for handlers to sync" err="context canceled"
	E1205 21:29:04.517354       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Dec 05 21:29:06 pause-068873 kubelet[3190]: I1205 21:29:06.238785    3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/50860d75af521a7befd511acd7d7a982-k8s-certs\") pod \"kube-controller-manager-pause-068873\" (UID: \"50860d75af521a7befd511acd7d7a982\") " pod="kube-system/kube-controller-manager-pause-068873"
	Dec 05 21:29:06 pause-068873 kubelet[3190]: I1205 21:29:06.424467    3190 kubelet_node_status.go:72] "Attempting to register node" node="pause-068873"
	Dec 05 21:29:06 pause-068873 kubelet[3190]: E1205 21:29:06.425541    3190 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.229:8443: connect: connection refused" node="pause-068873"
	Dec 05 21:29:06 pause-068873 kubelet[3190]: I1205 21:29:06.469194    3190 scope.go:117] "RemoveContainer" containerID="9e5d1d60f086d32798fa3df1e5c9927d206565ed86871e1751c8692cd4952dc8"
	Dec 05 21:29:06 pause-068873 kubelet[3190]: I1205 21:29:06.471671    3190 scope.go:117] "RemoveContainer" containerID="dea54f3f077c4172cb2b439c4b5ad262b0b9c13ef0822feb2a281f680a745a27"
	Dec 05 21:29:06 pause-068873 kubelet[3190]: I1205 21:29:06.472099    3190 scope.go:117] "RemoveContainer" containerID="5dc087a55ad29c8cd45df03a1adf6319c36abaada5002a4c7d26ab651bb65860"
	Dec 05 21:29:06 pause-068873 kubelet[3190]: I1205 21:29:06.475154    3190 scope.go:117] "RemoveContainer" containerID="85815ac26a7aed256a15fd57f62ca745ed7ef46f0f47afcde087af82f2abad8d"
	Dec 05 21:29:06 pause-068873 kubelet[3190]: E1205 21:29:06.636597    3190 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-068873?timeout=10s\": dial tcp 192.168.72.229:8443: connect: connection refused" interval="800ms"
	Dec 05 21:29:06 pause-068873 kubelet[3190]: I1205 21:29:06.827306    3190 kubelet_node_status.go:72] "Attempting to register node" node="pause-068873"
	Dec 05 21:29:06 pause-068873 kubelet[3190]: E1205 21:29:06.828413    3190 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.229:8443: connect: connection refused" node="pause-068873"
	Dec 05 21:29:06 pause-068873 kubelet[3190]: W1205 21:29:06.890704    3190 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.72.229:8443: connect: connection refused
	Dec 05 21:29:06 pause-068873 kubelet[3190]: E1205 21:29:06.890837    3190 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://control-plane.minikube.internal:8443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.72.229:8443: connect: connection refused" logger="UnhandledError"
	Dec 05 21:29:07 pause-068873 kubelet[3190]: I1205 21:29:07.630080    3190 kubelet_node_status.go:72] "Attempting to register node" node="pause-068873"
	Dec 05 21:29:09 pause-068873 kubelet[3190]: I1205 21:29:09.598948    3190 kubelet_node_status.go:111] "Node was previously registered" node="pause-068873"
	Dec 05 21:29:09 pause-068873 kubelet[3190]: I1205 21:29:09.599111    3190 kubelet_node_status.go:75] "Successfully registered node" node="pause-068873"
	Dec 05 21:29:09 pause-068873 kubelet[3190]: I1205 21:29:09.599142    3190 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Dec 05 21:29:09 pause-068873 kubelet[3190]: I1205 21:29:09.600062    3190 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Dec 05 21:29:10 pause-068873 kubelet[3190]: I1205 21:29:10.010633    3190 apiserver.go:52] "Watching apiserver"
	Dec 05 21:29:10 pause-068873 kubelet[3190]: I1205 21:29:10.033396    3190 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Dec 05 21:29:10 pause-068873 kubelet[3190]: I1205 21:29:10.045290    3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49532404-faec-41e0-8b53-c750a91316a2-lib-modules\") pod \"kube-proxy-h8984\" (UID: \"49532404-faec-41e0-8b53-c750a91316a2\") " pod="kube-system/kube-proxy-h8984"
	Dec 05 21:29:10 pause-068873 kubelet[3190]: I1205 21:29:10.045408    3190 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49532404-faec-41e0-8b53-c750a91316a2-xtables-lock\") pod \"kube-proxy-h8984\" (UID: \"49532404-faec-41e0-8b53-c750a91316a2\") " pod="kube-system/kube-proxy-h8984"
	Dec 05 21:29:16 pause-068873 kubelet[3190]: E1205 21:29:16.127532    3190 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733434156126743726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:29:16 pause-068873 kubelet[3190]: E1205 21:29:16.127961    3190 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733434156126743726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:29:26 pause-068873 kubelet[3190]: E1205 21:29:26.132162    3190 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733434166129527911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:29:26 pause-068873 kubelet[3190]: E1205 21:29:26.132230    3190 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733434166129527911,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125701,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-068873 -n pause-068873
helpers_test.go:261: (dbg) Run:  kubectl --context pause-068873 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (77.23s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (289.55s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-601806 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-601806 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m49.247012155s)

                                                
                                                
-- stdout --
	* [old-k8s-version-601806] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20053
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-601806" primary control-plane node in "old-k8s-version-601806" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 21:31:42.198113  350936 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:31:42.198364  350936 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:31:42.198403  350936 out.go:358] Setting ErrFile to fd 2...
	I1205 21:31:42.198422  350936 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:31:42.198771  350936 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 21:31:42.199667  350936 out.go:352] Setting JSON to false
	I1205 21:31:42.201249  350936 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":15250,"bootTime":1733419052,"procs":284,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 21:31:42.201397  350936 start.go:139] virtualization: kvm guest
	I1205 21:31:42.204030  350936 out.go:177] * [old-k8s-version-601806] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 21:31:42.205561  350936 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 21:31:42.206010  350936 notify.go:220] Checking for updates...
	I1205 21:31:42.208346  350936 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 21:31:42.209809  350936 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:31:42.211291  350936 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:31:42.214073  350936 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 21:31:42.215582  350936 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 21:31:42.217600  350936 config.go:182] Loaded profile config "bridge-279893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:31:42.217758  350936 config.go:182] Loaded profile config "enable-default-cni-279893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:31:42.217888  350936 config.go:182] Loaded profile config "flannel-279893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:31:42.218121  350936 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 21:31:42.286567  350936 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 21:31:42.287965  350936 start.go:297] selected driver: kvm2
	I1205 21:31:42.287990  350936 start.go:901] validating driver "kvm2" against <nil>
	I1205 21:31:42.288010  350936 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 21:31:42.289259  350936 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:31:42.289357  350936 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 21:31:42.315692  350936 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 21:31:42.315783  350936 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 21:31:42.316098  350936 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:31:42.316141  350936 cni.go:84] Creating CNI manager for ""
	I1205 21:31:42.316184  350936 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:31:42.316197  350936 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 21:31:42.316270  350936 start.go:340] cluster config:
	{Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:31:42.316412  350936 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:31:42.318378  350936 out.go:177] * Starting "old-k8s-version-601806" primary control-plane node in "old-k8s-version-601806" cluster
	I1205 21:31:42.319753  350936 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 21:31:42.319815  350936 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1205 21:31:42.319829  350936 cache.go:56] Caching tarball of preloaded images
	I1205 21:31:42.319943  350936 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 21:31:42.319956  350936 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1205 21:31:42.320096  350936 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/config.json ...
	I1205 21:31:42.320121  350936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/config.json: {Name:mk9d3cb1b152da1e9147266bb1e2ab07932e3876 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:31:42.320325  350936 start.go:360] acquireMachinesLock for old-k8s-version-601806: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:31:59.407481  350936 start.go:364] duration metric: took 17.087111818s to acquireMachinesLock for "old-k8s-version-601806"
	I1205 21:31:59.407579  350936 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions
:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:31:59.407661  350936 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 21:31:59.409936  350936 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 21:31:59.410183  350936 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:31:59.410249  350936 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:31:59.428609  350936 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34807
	I1205 21:31:59.429152  350936 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:31:59.429893  350936 main.go:141] libmachine: Using API Version  1
	I1205 21:31:59.429959  350936 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:31:59.430386  350936 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:31:59.430639  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:31:59.430815  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:31:59.430998  350936 start.go:159] libmachine.API.Create for "old-k8s-version-601806" (driver="kvm2")
	I1205 21:31:59.431043  350936 client.go:168] LocalClient.Create starting
	I1205 21:31:59.431085  350936 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem
	I1205 21:31:59.431128  350936 main.go:141] libmachine: Decoding PEM data...
	I1205 21:31:59.431144  350936 main.go:141] libmachine: Parsing certificate...
	I1205 21:31:59.431220  350936 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem
	I1205 21:31:59.431248  350936 main.go:141] libmachine: Decoding PEM data...
	I1205 21:31:59.431263  350936 main.go:141] libmachine: Parsing certificate...
	I1205 21:31:59.431285  350936 main.go:141] libmachine: Running pre-create checks...
	I1205 21:31:59.431299  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .PreCreateCheck
	I1205 21:31:59.431734  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetConfigRaw
	I1205 21:31:59.432252  350936 main.go:141] libmachine: Creating machine...
	I1205 21:31:59.432271  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .Create
	I1205 21:31:59.432431  350936 main.go:141] libmachine: (old-k8s-version-601806) Creating KVM machine...
	I1205 21:31:59.433853  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | found existing default KVM network
	I1205 21:31:59.435437  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:31:59.435246  352357 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c7:1d:41} reservation:<nil>}
	I1205 21:31:59.436369  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:31:59.436268  352357 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:9f:de:0b} reservation:<nil>}
	I1205 21:31:59.437471  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:31:59.437377  352357 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002ce810}
	I1205 21:31:59.437500  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | created network xml: 
	I1205 21:31:59.437509  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | <network>
	I1205 21:31:59.437514  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG |   <name>mk-old-k8s-version-601806</name>
	I1205 21:31:59.437530  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG |   <dns enable='no'/>
	I1205 21:31:59.437536  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG |   
	I1205 21:31:59.437571  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1205 21:31:59.437598  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG |     <dhcp>
	I1205 21:31:59.437611  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1205 21:31:59.437625  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG |     </dhcp>
	I1205 21:31:59.437639  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG |   </ip>
	I1205 21:31:59.437653  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG |   
	I1205 21:31:59.437719  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | </network>
	I1205 21:31:59.437747  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | 
	I1205 21:31:59.443829  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | trying to create private KVM network mk-old-k8s-version-601806 192.168.61.0/24...
	I1205 21:31:59.529257  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | private KVM network mk-old-k8s-version-601806 192.168.61.0/24 created
	I1205 21:31:59.529296  350936 main.go:141] libmachine: (old-k8s-version-601806) Setting up store path in /home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806 ...
	I1205 21:31:59.529310  350936 main.go:141] libmachine: (old-k8s-version-601806) Building disk image from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 21:31:59.529328  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:31:59.529206  352357 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:31:59.529344  350936 main.go:141] libmachine: (old-k8s-version-601806) Downloading /home/jenkins/minikube-integration/20053-293485/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 21:31:59.847028  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:31:59.846871  352357 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa...
	I1205 21:32:00.006219  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:32:00.006059  352357 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/old-k8s-version-601806.rawdisk...
	I1205 21:32:00.006263  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | Writing magic tar header
	I1205 21:32:00.006286  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | Writing SSH key tar header
	I1205 21:32:00.006305  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:32:00.006220  352357 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806 ...
	I1205 21:32:00.006394  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806
	I1205 21:32:00.006429  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines
	I1205 21:32:00.006448  350936 main.go:141] libmachine: (old-k8s-version-601806) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806 (perms=drwx------)
	I1205 21:32:00.006458  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:32:00.006483  350936 main.go:141] libmachine: (old-k8s-version-601806) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines (perms=drwxr-xr-x)
	I1205 21:32:00.006502  350936 main.go:141] libmachine: (old-k8s-version-601806) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube (perms=drwxr-xr-x)
	I1205 21:32:00.006516  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485
	I1205 21:32:00.006529  350936 main.go:141] libmachine: (old-k8s-version-601806) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485 (perms=drwxrwxr-x)
	I1205 21:32:00.006542  350936 main.go:141] libmachine: (old-k8s-version-601806) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 21:32:00.006554  350936 main.go:141] libmachine: (old-k8s-version-601806) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 21:32:00.006589  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 21:32:00.006630  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | Checking permissions on dir: /home/jenkins
	I1205 21:32:00.006639  350936 main.go:141] libmachine: (old-k8s-version-601806) Creating domain...
	I1205 21:32:00.006672  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | Checking permissions on dir: /home
	I1205 21:32:00.006701  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | Skipping /home - not owner
	I1205 21:32:00.007671  350936 main.go:141] libmachine: (old-k8s-version-601806) define libvirt domain using xml: 
	I1205 21:32:00.007693  350936 main.go:141] libmachine: (old-k8s-version-601806) <domain type='kvm'>
	I1205 21:32:00.007700  350936 main.go:141] libmachine: (old-k8s-version-601806)   <name>old-k8s-version-601806</name>
	I1205 21:32:00.007705  350936 main.go:141] libmachine: (old-k8s-version-601806)   <memory unit='MiB'>2200</memory>
	I1205 21:32:00.007710  350936 main.go:141] libmachine: (old-k8s-version-601806)   <vcpu>2</vcpu>
	I1205 21:32:00.007715  350936 main.go:141] libmachine: (old-k8s-version-601806)   <features>
	I1205 21:32:00.007719  350936 main.go:141] libmachine: (old-k8s-version-601806)     <acpi/>
	I1205 21:32:00.007725  350936 main.go:141] libmachine: (old-k8s-version-601806)     <apic/>
	I1205 21:32:00.007733  350936 main.go:141] libmachine: (old-k8s-version-601806)     <pae/>
	I1205 21:32:00.007739  350936 main.go:141] libmachine: (old-k8s-version-601806)     
	I1205 21:32:00.007747  350936 main.go:141] libmachine: (old-k8s-version-601806)   </features>
	I1205 21:32:00.007755  350936 main.go:141] libmachine: (old-k8s-version-601806)   <cpu mode='host-passthrough'>
	I1205 21:32:00.007774  350936 main.go:141] libmachine: (old-k8s-version-601806)   
	I1205 21:32:00.007785  350936 main.go:141] libmachine: (old-k8s-version-601806)   </cpu>
	I1205 21:32:00.007813  350936 main.go:141] libmachine: (old-k8s-version-601806)   <os>
	I1205 21:32:00.007845  350936 main.go:141] libmachine: (old-k8s-version-601806)     <type>hvm</type>
	I1205 21:32:00.007869  350936 main.go:141] libmachine: (old-k8s-version-601806)     <boot dev='cdrom'/>
	I1205 21:32:00.007887  350936 main.go:141] libmachine: (old-k8s-version-601806)     <boot dev='hd'/>
	I1205 21:32:00.007900  350936 main.go:141] libmachine: (old-k8s-version-601806)     <bootmenu enable='no'/>
	I1205 21:32:00.007909  350936 main.go:141] libmachine: (old-k8s-version-601806)   </os>
	I1205 21:32:00.007917  350936 main.go:141] libmachine: (old-k8s-version-601806)   <devices>
	I1205 21:32:00.007925  350936 main.go:141] libmachine: (old-k8s-version-601806)     <disk type='file' device='cdrom'>
	I1205 21:32:00.007934  350936 main.go:141] libmachine: (old-k8s-version-601806)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/boot2docker.iso'/>
	I1205 21:32:00.007950  350936 main.go:141] libmachine: (old-k8s-version-601806)       <target dev='hdc' bus='scsi'/>
	I1205 21:32:00.007958  350936 main.go:141] libmachine: (old-k8s-version-601806)       <readonly/>
	I1205 21:32:00.007967  350936 main.go:141] libmachine: (old-k8s-version-601806)     </disk>
	I1205 21:32:00.007985  350936 main.go:141] libmachine: (old-k8s-version-601806)     <disk type='file' device='disk'>
	I1205 21:32:00.008005  350936 main.go:141] libmachine: (old-k8s-version-601806)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 21:32:00.008021  350936 main.go:141] libmachine: (old-k8s-version-601806)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/old-k8s-version-601806.rawdisk'/>
	I1205 21:32:00.008033  350936 main.go:141] libmachine: (old-k8s-version-601806)       <target dev='hda' bus='virtio'/>
	I1205 21:32:00.008045  350936 main.go:141] libmachine: (old-k8s-version-601806)     </disk>
	I1205 21:32:00.008056  350936 main.go:141] libmachine: (old-k8s-version-601806)     <interface type='network'>
	I1205 21:32:00.008065  350936 main.go:141] libmachine: (old-k8s-version-601806)       <source network='mk-old-k8s-version-601806'/>
	I1205 21:32:00.008080  350936 main.go:141] libmachine: (old-k8s-version-601806)       <model type='virtio'/>
	I1205 21:32:00.008091  350936 main.go:141] libmachine: (old-k8s-version-601806)     </interface>
	I1205 21:32:00.008102  350936 main.go:141] libmachine: (old-k8s-version-601806)     <interface type='network'>
	I1205 21:32:00.008115  350936 main.go:141] libmachine: (old-k8s-version-601806)       <source network='default'/>
	I1205 21:32:00.008125  350936 main.go:141] libmachine: (old-k8s-version-601806)       <model type='virtio'/>
	I1205 21:32:00.008133  350936 main.go:141] libmachine: (old-k8s-version-601806)     </interface>
	I1205 21:32:00.008151  350936 main.go:141] libmachine: (old-k8s-version-601806)     <serial type='pty'>
	I1205 21:32:00.008163  350936 main.go:141] libmachine: (old-k8s-version-601806)       <target port='0'/>
	I1205 21:32:00.008170  350936 main.go:141] libmachine: (old-k8s-version-601806)     </serial>
	I1205 21:32:00.008181  350936 main.go:141] libmachine: (old-k8s-version-601806)     <console type='pty'>
	I1205 21:32:00.008192  350936 main.go:141] libmachine: (old-k8s-version-601806)       <target type='serial' port='0'/>
	I1205 21:32:00.008201  350936 main.go:141] libmachine: (old-k8s-version-601806)     </console>
	I1205 21:32:00.008215  350936 main.go:141] libmachine: (old-k8s-version-601806)     <rng model='virtio'>
	I1205 21:32:00.008229  350936 main.go:141] libmachine: (old-k8s-version-601806)       <backend model='random'>/dev/random</backend>
	I1205 21:32:00.008239  350936 main.go:141] libmachine: (old-k8s-version-601806)     </rng>
	I1205 21:32:00.008250  350936 main.go:141] libmachine: (old-k8s-version-601806)     
	I1205 21:32:00.008259  350936 main.go:141] libmachine: (old-k8s-version-601806)     
	I1205 21:32:00.008267  350936 main.go:141] libmachine: (old-k8s-version-601806)   </devices>
	I1205 21:32:00.008277  350936 main.go:141] libmachine: (old-k8s-version-601806) </domain>
	I1205 21:32:00.008288  350936 main.go:141] libmachine: (old-k8s-version-601806) 
	I1205 21:32:00.012240  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:a6:35:c4 in network default
	I1205 21:32:00.012890  350936 main.go:141] libmachine: (old-k8s-version-601806) Ensuring networks are active...
	I1205 21:32:00.012918  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:00.013671  350936 main.go:141] libmachine: (old-k8s-version-601806) Ensuring network default is active
	I1205 21:32:00.014089  350936 main.go:141] libmachine: (old-k8s-version-601806) Ensuring network mk-old-k8s-version-601806 is active
	I1205 21:32:00.014575  350936 main.go:141] libmachine: (old-k8s-version-601806) Getting domain xml...
	I1205 21:32:00.015369  350936 main.go:141] libmachine: (old-k8s-version-601806) Creating domain...
	I1205 21:32:01.725393  350936 main.go:141] libmachine: (old-k8s-version-601806) Waiting to get IP...
	I1205 21:32:01.726645  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:01.727174  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:32:01.727203  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:32:01.727060  352357 retry.go:31] will retry after 264.306995ms: waiting for machine to come up
	I1205 21:32:01.993305  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:01.994126  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:32:01.994181  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:32:01.994035  352357 retry.go:31] will retry after 362.834659ms: waiting for machine to come up
	I1205 21:32:02.359120  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:02.359717  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:32:02.359743  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:32:02.359679  352357 retry.go:31] will retry after 413.0481ms: waiting for machine to come up
	I1205 21:32:02.774187  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:02.774948  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:32:02.774977  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:32:02.774896  352357 retry.go:31] will retry after 506.45385ms: waiting for machine to come up
	I1205 21:32:03.282771  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:03.283442  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:32:03.283487  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:32:03.283377  352357 retry.go:31] will retry after 749.433104ms: waiting for machine to come up
	I1205 21:32:04.034347  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:04.035032  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:32:04.035064  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:32:04.034968  352357 retry.go:31] will retry after 749.510247ms: waiting for machine to come up
	I1205 21:32:04.786813  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:04.787469  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:32:04.787500  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:32:04.787404  352357 retry.go:31] will retry after 794.090475ms: waiting for machine to come up
	I1205 21:32:05.583278  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:05.583762  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:32:05.583804  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:32:05.583715  352357 retry.go:31] will retry after 1.252772899s: waiting for machine to come up
	I1205 21:32:06.837986  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:06.838546  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:32:06.838603  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:32:06.838521  352357 retry.go:31] will retry after 1.600079536s: waiting for machine to come up
	I1205 21:32:08.439997  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:08.440540  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:32:08.440577  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:32:08.440475  352357 retry.go:31] will retry after 1.805347518s: waiting for machine to come up
	I1205 21:32:10.247371  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:10.248084  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:32:10.248115  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:32:10.248005  352357 retry.go:31] will retry after 2.447108415s: waiting for machine to come up
	I1205 21:32:12.698877  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:12.699360  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:32:12.699391  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:32:12.699351  352357 retry.go:31] will retry after 2.718721786s: waiting for machine to come up
	I1205 21:32:15.419304  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:15.419844  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:32:15.419878  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:32:15.419767  352357 retry.go:31] will retry after 3.089042225s: waiting for machine to come up
	I1205 21:32:18.513153  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:18.513766  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:32:18.513799  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:32:18.513699  352357 retry.go:31] will retry after 4.946828315s: waiting for machine to come up
	I1205 21:32:23.463497  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:23.463919  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has current primary IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:23.463945  350936 main.go:141] libmachine: (old-k8s-version-601806) Found IP for machine: 192.168.61.123
	I1205 21:32:23.463955  350936 main.go:141] libmachine: (old-k8s-version-601806) Reserving static IP address...
	I1205 21:32:23.464314  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-601806", mac: "52:54:00:11:1e:c8", ip: "192.168.61.123"} in network mk-old-k8s-version-601806
	I1205 21:32:23.554219  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | Getting to WaitForSSH function...
	I1205 21:32:23.554253  350936 main.go:141] libmachine: (old-k8s-version-601806) Reserved static IP address: 192.168.61.123
	I1205 21:32:23.554266  350936 main.go:141] libmachine: (old-k8s-version-601806) Waiting for SSH to be available...
	I1205 21:32:23.557294  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:23.557831  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:32:15 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:minikube Clientid:01:52:54:00:11:1e:c8}
	I1205 21:32:23.557871  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:23.558058  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | Using SSH client type: external
	I1205 21:32:23.558086  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa (-rw-------)
	I1205 21:32:23.558115  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:32:23.558133  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | About to run SSH command:
	I1205 21:32:23.558150  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | exit 0
	I1205 21:32:23.686000  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | SSH cmd err, output: <nil>: 
	I1205 21:32:23.686316  350936 main.go:141] libmachine: (old-k8s-version-601806) KVM machine creation complete!
	I1205 21:32:23.686687  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetConfigRaw
	I1205 21:32:23.687273  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:32:23.687559  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:32:23.687714  350936 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I1205 21:32:23.687731  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetState
	I1205 21:32:23.689154  350936 main.go:141] libmachine: Detecting operating system of created instance...
	I1205 21:32:23.689172  350936 main.go:141] libmachine: Waiting for SSH to be available...
	I1205 21:32:23.689178  350936 main.go:141] libmachine: Getting to WaitForSSH function...
	I1205 21:32:23.689184  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:32:23.691806  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:23.692321  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:32:15 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:32:23.692352  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:23.692530  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:32:23.692756  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:32:23.692990  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:32:23.693196  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:32:23.693416  350936 main.go:141] libmachine: Using SSH client type: native
	I1205 21:32:23.693685  350936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:32:23.693698  350936 main.go:141] libmachine: About to run SSH command:
	exit 0
	I1205 21:32:23.805547  350936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:32:23.805577  350936 main.go:141] libmachine: Detecting the provisioner...
	I1205 21:32:23.805588  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:32:23.808621  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:23.808981  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:32:15 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:32:23.809026  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:23.809266  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:32:23.809540  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:32:23.809743  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:32:23.809975  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:32:23.810198  350936 main.go:141] libmachine: Using SSH client type: native
	I1205 21:32:23.810456  350936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:32:23.810473  350936 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I1205 21:32:23.926702  350936 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I1205 21:32:23.926799  350936 main.go:141] libmachine: found compatible host: buildroot
	I1205 21:32:23.926807  350936 main.go:141] libmachine: Provisioning with buildroot...
	I1205 21:32:23.926816  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:32:23.927107  350936 buildroot.go:166] provisioning hostname "old-k8s-version-601806"
	I1205 21:32:23.927134  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:32:23.927356  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:32:23.929991  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:23.930414  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:32:15 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:32:23.930444  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:23.930639  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:32:23.930850  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:32:23.930980  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:32:23.931096  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:32:23.931234  350936 main.go:141] libmachine: Using SSH client type: native
	I1205 21:32:23.931460  350936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:32:23.931481  350936 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-601806 && echo "old-k8s-version-601806" | sudo tee /etc/hostname
	I1205 21:32:24.056406  350936 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-601806
	
	I1205 21:32:24.056436  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:32:24.059615  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.060013  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:32:15 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:32:24.060042  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.060286  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:32:24.060522  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:32:24.060760  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:32:24.060901  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:32:24.061097  350936 main.go:141] libmachine: Using SSH client type: native
	I1205 21:32:24.061344  350936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:32:24.061367  350936 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-601806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-601806/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-601806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:32:24.183260  350936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:32:24.183320  350936 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:32:24.183358  350936 buildroot.go:174] setting up certificates
	I1205 21:32:24.183374  350936 provision.go:84] configureAuth start
	I1205 21:32:24.183390  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:32:24.183748  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:32:24.186610  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.187058  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:32:15 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:32:24.187093  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.187303  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:32:24.189716  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.190142  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:32:15 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:32:24.190182  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.190353  350936 provision.go:143] copyHostCerts
	I1205 21:32:24.190415  350936 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:32:24.190437  350936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:32:24.190496  350936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:32:24.190605  350936 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:32:24.190615  350936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:32:24.190636  350936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:32:24.190734  350936 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:32:24.190745  350936 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:32:24.190779  350936 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:32:24.190853  350936 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-601806 san=[127.0.0.1 192.168.61.123 localhost minikube old-k8s-version-601806]
	I1205 21:32:24.297818  350936 provision.go:177] copyRemoteCerts
	I1205 21:32:24.297888  350936 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:32:24.297953  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:32:24.301306  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.301866  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:32:15 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:32:24.301919  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.302134  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:32:24.302372  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:32:24.302605  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:32:24.302816  350936 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:32:24.389231  350936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:32:24.415127  350936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 21:32:24.439079  350936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:32:24.463310  350936 provision.go:87] duration metric: took 279.91725ms to configureAuth
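Although this profile runs CRI-O, the provisioner still copies the TLS material to the docker-machine-era paths under /etc/docker (and later probes the machine with "Checking connection to Docker..."); the certificate files are what matters. A minimal check on the guest would be (a sketch, not part of the logged run):

    ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem   # files scp'd in the step above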
	I1205 21:32:24.463353  350936 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:32:24.463525  350936 config.go:182] Loaded profile config "old-k8s-version-601806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 21:32:24.463606  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:32:24.466567  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.467101  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:32:15 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:32:24.467138  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.467346  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:32:24.467609  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:32:24.467831  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:32:24.468008  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:32:24.468197  350936 main.go:141] libmachine: Using SSH client type: native
	I1205 21:32:24.468385  350936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:32:24.468402  350936 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:32:24.699211  350936 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:32:24.699245  350936 main.go:141] libmachine: Checking connection to Docker...
	I1205 21:32:24.699255  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetURL
	I1205 21:32:24.700761  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | Using libvirt version 6000000
	I1205 21:32:24.703231  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.703574  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:32:15 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:32:24.703610  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.703811  350936 main.go:141] libmachine: Docker is up and running!
	I1205 21:32:24.703834  350936 main.go:141] libmachine: Reticulating splines...
	I1205 21:32:24.703843  350936 client.go:171] duration metric: took 25.272787335s to LocalClient.Create
	I1205 21:32:24.703868  350936 start.go:167] duration metric: took 25.272871937s to libmachine.API.Create "old-k8s-version-601806"
	I1205 21:32:24.703883  350936 start.go:293] postStartSetup for "old-k8s-version-601806" (driver="kvm2")
	I1205 21:32:24.703899  350936 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:32:24.703925  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:32:24.704225  350936 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:32:24.704259  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:32:24.706690  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.707005  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:32:15 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:32:24.707030  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.707141  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:32:24.707367  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:32:24.707537  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:32:24.707740  350936 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:32:24.797651  350936 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:32:24.802465  350936 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:32:24.802501  350936 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:32:24.802613  350936 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:32:24.802713  350936 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:32:24.802805  350936 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:32:24.813940  350936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:32:24.839785  350936 start.go:296] duration metric: took 135.880837ms for postStartSetup
	I1205 21:32:24.839878  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetConfigRaw
	I1205 21:32:24.840622  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:32:24.843477  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.843839  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:32:15 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:32:24.843872  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.844184  350936 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/config.json ...
	I1205 21:32:24.844449  350936 start.go:128] duration metric: took 25.436772791s to createHost
	I1205 21:32:24.844488  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:32:24.846914  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.847187  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:32:15 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:32:24.847214  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.847314  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:32:24.847508  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:32:24.847662  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:32:24.847758  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:32:24.847948  350936 main.go:141] libmachine: Using SSH client type: native
	I1205 21:32:24.848126  350936 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:32:24.848143  350936 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:32:24.962737  350936 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434344.935633094
	
	I1205 21:32:24.962780  350936 fix.go:216] guest clock: 1733434344.935633094
	I1205 21:32:24.962792  350936 fix.go:229] Guest: 2024-12-05 21:32:24.935633094 +0000 UTC Remote: 2024-12-05 21:32:24.844466308 +0000 UTC m=+42.710827719 (delta=91.166786ms)
	I1205 21:32:24.962825  350936 fix.go:200] guest clock delta is within tolerance: 91.166786ms
	I1205 21:32:24.962834  350936 start.go:83] releasing machines lock for "old-k8s-version-601806", held for 25.555304823s
	I1205 21:32:24.962872  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:32:24.963227  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:32:24.966772  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.967199  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:32:15 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:32:24.967235  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.967366  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:32:24.967927  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:32:24.968088  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:32:24.968180  350936 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:32:24.968253  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:32:24.968333  350936 ssh_runner.go:195] Run: cat /version.json
	I1205 21:32:24.968363  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:32:24.971192  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.971474  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.971689  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:32:15 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:32:24.971715  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.971889  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:32:24.971980  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:32:15 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:32:24.972019  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:24.972083  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:32:24.972178  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:32:24.972276  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:32:24.972360  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:32:24.972427  350936 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:32:24.972528  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:32:24.972713  350936 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:32:25.055905  350936 ssh_runner.go:195] Run: systemctl --version
	I1205 21:32:25.079320  350936 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:32:25.240747  350936 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:32:25.246717  350936 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:32:25.246826  350936 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:32:25.263339  350936 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
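The find/mv step above sidelines any pre-existing bridge/podman CNI configs (renaming them with a .mk_disabled suffix) so they cannot conflict with the CNI that minikube configures later; on this guest only the podman bridge config matched. A quick way to see the effect (sketch):

    ls /etc/cni/net.d/   # 87-podman-bridge.conflist now appears as 87-podman-bridge.conflist.mk_disabled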
	I1205 21:32:25.263376  350936 start.go:495] detecting cgroup driver to use...
	I1205 21:32:25.263460  350936 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:32:25.280604  350936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:32:25.294591  350936 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:32:25.294661  350936 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:32:25.309781  350936 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:32:25.324879  350936 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:32:25.440302  350936 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:32:25.588199  350936 docker.go:233] disabling docker service ...
	I1205 21:32:25.588277  350936 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:32:25.613412  350936 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:32:25.630494  350936 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:32:25.789656  350936 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:32:25.943608  350936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:32:25.960843  350936 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:32:25.983259  350936 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 21:32:25.983314  350936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:32:25.996838  350936 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:32:25.996931  350936 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:32:26.010695  350936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:32:26.025219  350936 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:32:26.039129  350936 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:32:26.053960  350936 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:32:26.066237  350936 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:32:26.066312  350936 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:32:26.081871  350936 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:32:26.092588  350936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:32:26.228579  350936 ssh_runner.go:195] Run: sudo systemctl restart crio
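The steps above configure CRI-O for this profile: the earlier SSH command wrote CRIO_MINIKUBE_OPTIONS (the --insecure-registry flag) to /etc/sysconfig/crio.minikube, /etc/crictl.yaml points crictl at the CRI-O socket, and the three sed edits leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following key/value lines before the restart (a sketch reconstructed from the sed expressions; the TOML section headers and any other keys in that drop-in are assumptions, as the log only shows the edited lines):

    # /etc/crio/crio.conf.d/02-crio.conf (sketch)
    [crio.image]
    pause_image = "registry.k8s.io/pause:3.2"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"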
	I1205 21:32:26.370120  350936 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:32:26.370192  350936 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:32:26.375671  350936 start.go:563] Will wait 60s for crictl version
	I1205 21:32:26.375760  350936 ssh_runner.go:195] Run: which crictl
	I1205 21:32:26.380054  350936 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:32:26.423746  350936 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:32:26.423844  350936 ssh_runner.go:195] Run: crio --version
	I1205 21:32:26.461285  350936 ssh_runner.go:195] Run: crio --version
	I1205 21:32:26.493768  350936 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1205 21:32:26.494985  350936 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:32:26.501202  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:26.501884  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:32:15 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:32:26.501952  350936 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:32:26.502250  350936 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1205 21:32:26.508693  350936 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:32:26.526377  350936 kubeadm.go:883] updating cluster {Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:32:26.526512  350936 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 21:32:26.526578  350936 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:32:26.567226  350936 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 21:32:26.567316  350936 ssh_runner.go:195] Run: which lz4
	I1205 21:32:26.572414  350936 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:32:26.577704  350936 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:32:26.577743  350936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1205 21:32:28.225246  350936 crio.go:462] duration metric: took 1.652863575s to copy over tarball
	I1205 21:32:28.225403  350936 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 21:32:31.252538  350936 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.027072689s)
	I1205 21:32:31.252584  350936 crio.go:469] duration metric: took 3.027236672s to extract the tarball
	I1205 21:32:31.252593  350936 ssh_runner.go:146] rm: /preloaded.tar.lz4
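The ~473 MB tarball copied and extracted above is minikube's preload for Kubernetes v1.20.0 on CRI-O; it is unpacked under /var (presumably populating the container image store) and then removed, after which crictl images is run again to see what is now present. The extraction command is the one logged:

    # -I lz4 decompresses the archive; --xattrs preserves file capabilities
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo crictl images --output json   # re-check the image store afterwards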
	I1205 21:32:31.299554  350936 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:32:31.353312  350936 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 21:32:31.353338  350936 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 21:32:31.353385  350936 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:32:31.353685  350936 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:32:31.353876  350936 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:32:31.354055  350936 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:32:31.354186  350936 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:32:31.354308  350936 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1205 21:32:31.354438  350936 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:32:31.354582  350936 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1205 21:32:31.356463  350936 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:32:31.356497  350936 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:32:31.356471  350936 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:32:31.356540  350936 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:32:31.356566  350936 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:32:31.356480  350936 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 21:32:31.356478  350936 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1205 21:32:31.356477  350936 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:32:31.517958  350936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:32:31.524671  350936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:32:31.526840  350936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1205 21:32:31.527670  350936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:32:31.530410  350936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1205 21:32:31.531654  350936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:32:31.573198  350936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1205 21:32:31.755967  350936 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1205 21:32:31.756006  350936 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1205 21:32:31.756024  350936 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:32:31.756040  350936 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1205 21:32:31.756074  350936 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:32:31.756079  350936 ssh_runner.go:195] Run: which crictl
	I1205 21:32:31.756081  350936 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1205 21:32:31.756103  350936 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1205 21:32:31.756118  350936 ssh_runner.go:195] Run: which crictl
	I1205 21:32:31.756136  350936 ssh_runner.go:195] Run: which crictl
	I1205 21:32:31.756144  350936 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1205 21:32:31.756160  350936 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:32:31.756184  350936 ssh_runner.go:195] Run: which crictl
	I1205 21:32:31.756045  350936 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:32:31.756180  350936 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1205 21:32:31.756218  350936 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:32:31.756219  350936 ssh_runner.go:195] Run: which crictl
	I1205 21:32:31.756238  350936 ssh_runner.go:195] Run: which crictl
	I1205 21:32:31.799518  350936 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1205 21:32:31.799571  350936 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 21:32:31.799605  350936 ssh_runner.go:195] Run: which crictl
	I1205 21:32:31.799605  350936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:32:31.799607  350936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:32:31.799662  350936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:32:31.799688  350936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:32:31.799695  350936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:32:31.799716  350936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:32:31.940834  350936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:32:31.940931  350936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:32:31.940977  350936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:32:31.941023  350936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:32:31.941126  350936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:32:31.941237  350936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:32:31.941293  350936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:32:32.122194  350936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:32:32.122254  350936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:32:32.122271  350936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:32:32.122296  350936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:32:32.122316  350936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:32:32.122353  350936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:32:32.122389  350936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:32:32.249688  350936 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:32:32.282672  350936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1205 21:32:32.282694  350936 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:32:32.282730  350936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1205 21:32:32.282802  350936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1205 21:32:32.282899  350936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1205 21:32:32.282969  350936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1205 21:32:32.283052  350936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1205 21:32:32.445422  350936 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 21:32:32.445492  350936 cache_images.go:92] duration metric: took 1.092139764s to LoadCachedImages
	W1205 21:32:32.445577  350936 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I1205 21:32:32.445597  350936 kubeadm.go:934] updating node { 192.168.61.123 8443 v1.20.0 crio true true} ...
	I1205 21:32:32.445724  350936 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-601806 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
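The unit override above is what is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines later (the 430-byte scp); the empty ExecStart= clears the stock command, and the second ExecStart relaunches the pinned v1.20.0 kubelet against the CRI-O socket with this node's IP and hostname override. A quick way to confirm what is in effect on the guest (a sketch, not part of the logged run):

    systemctl cat kubelet                  # shows the drop-in and the ExecStart above
    systemctl show kubelet -p ExecStart    # the flags actually in use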
	I1205 21:32:32.445805  350936 ssh_runner.go:195] Run: crio config
	I1205 21:32:32.495057  350936 cni.go:84] Creating CNI manager for ""
	I1205 21:32:32.495086  350936 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:32:32.495101  350936 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:32:32.495129  350936 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-601806 NodeName:old-k8s-version-601806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 21:32:32.495260  350936 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-601806"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
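	This generated kubeadm config is written to /var/tmp/minikube/kubeadm.yaml.new below and promoted to kubeadm.yaml before cluster bring-up; on a first start it is ultimately handed to the pinned kubeadm binary, roughly as follows (a sketch; the binary and config paths are taken from the log, and any additional flags minikube passes are not shown in this excerpt):

    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml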
	
	I1205 21:32:32.495326  350936 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1205 21:32:32.505721  350936 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:32:32.505811  350936 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:32:32.516712  350936 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 21:32:32.536150  350936 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:32:32.556444  350936 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1205 21:32:32.577187  350936 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I1205 21:32:32.581376  350936 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
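Together with the earlier host.minikube.internal rewrite, the guest's /etc/hosts now carries two minikube-managed entries (reconstructed from the two logged commands):

    192.168.61.1	host.minikube.internal
    192.168.61.123	control-plane.minikube.internal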
	I1205 21:32:32.595590  350936 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:32:32.729096  350936 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:32:32.747814  350936 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806 for IP: 192.168.61.123
	I1205 21:32:32.747838  350936 certs.go:194] generating shared ca certs ...
	I1205 21:32:32.747862  350936 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:32:32.748035  350936 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:32:32.748092  350936 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:32:32.748107  350936 certs.go:256] generating profile certs ...
	I1205 21:32:32.748181  350936 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/client.key
	I1205 21:32:32.748204  350936 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/client.crt with IP's: []
	I1205 21:32:32.898176  350936 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/client.crt ...
	I1205 21:32:32.898229  350936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/client.crt: {Name:mk5d4b178ebe30dfc75b4b4d85e2385dc08934de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:32:32.898470  350936 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/client.key ...
	I1205 21:32:32.898501  350936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/client.key: {Name:mkda7ea710db834efad8057f836b9a5f2fe983af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:32:32.898646  350936 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.key.a6e43dea
	I1205 21:32:32.898670  350936 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.crt.a6e43dea with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.123]
	I1205 21:32:33.048305  350936 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.crt.a6e43dea ...
	I1205 21:32:33.048346  350936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.crt.a6e43dea: {Name:mk65b86e0069ac36e3f04ad4b405303b98cb639f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:32:33.094953  350936 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.key.a6e43dea ...
	I1205 21:32:33.095009  350936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.key.a6e43dea: {Name:mk7c566bf554f3bc9da38a2beb3b35f8eb3d8c84 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:32:33.095207  350936 certs.go:381] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.crt.a6e43dea -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.crt
	I1205 21:32:33.095319  350936 certs.go:385] copying /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.key.a6e43dea -> /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.key
	I1205 21:32:33.095409  350936 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.key
	I1205 21:32:33.095458  350936 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.crt with IP's: []
	I1205 21:32:33.423514  350936 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.crt ...
	I1205 21:32:33.423550  350936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.crt: {Name:mk2aafca6e2b6950442f4330ad808cf7806ff9c1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:32:33.423729  350936 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.key ...
	I1205 21:32:33.423743  350936 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.key: {Name:mk01306fa4ff8e3ca8abae7b07c2fed8f90c47e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:32:33.423914  350936 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:32:33.423952  350936 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:32:33.423963  350936 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:32:33.423986  350936 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:32:33.424009  350936 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:32:33.424032  350936 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:32:33.424072  350936 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:32:33.424949  350936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:32:33.454252  350936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:32:33.484612  350936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:32:33.512506  350936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:32:33.541175  350936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 21:32:33.570042  350936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 21:32:33.598321  350936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:32:33.623353  350936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 21:32:33.653872  350936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:32:33.683337  350936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:32:33.711797  350936 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:32:33.738009  350936 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:32:33.756196  350936 ssh_runner.go:195] Run: openssl version
	I1205 21:32:33.762433  350936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:32:33.774449  350936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:32:33.779229  350936 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:32:33.779307  350936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:32:33.785320  350936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:32:33.797013  350936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:32:33.809197  350936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:32:33.814145  350936 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:32:33.814246  350936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:32:33.820373  350936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:32:33.832553  350936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:32:33.844411  350936 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:32:33.849017  350936 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:32:33.849100  350936 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:32:33.855101  350936 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
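The ln -fs steps above create the hash-named symlinks (51391683.0, 3ec20f2e.0, b5213941.0) that OpenSSL uses to look up trusted certificates by subject hash, which is why each certificate is first passed through openssl x509 -hash. For the minikube CA, for example:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941 (the link name used above)
    ls -l /etc/ssl/certs/b5213941.0                                           # symlink to /etc/ssl/certs/minikubeCA.pem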
	I1205 21:32:33.869642  350936 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:32:33.874789  350936 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1205 21:32:33.874864  350936 kubeadm.go:392] StartCluster: {Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:32:33.874988  350936 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:32:33.875064  350936 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:32:33.920793  350936 cri.go:89] found id: ""
	I1205 21:32:33.920878  350936 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:32:33.933342  350936 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:32:33.944992  350936 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:32:33.955406  350936 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:32:33.955434  350936 kubeadm.go:157] found existing configuration files:
	
	I1205 21:32:33.955498  350936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:32:33.965275  350936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:32:33.965421  350936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:32:33.975701  350936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:32:33.985066  350936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:32:33.985135  350936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:32:33.995244  350936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:32:34.008280  350936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:32:34.008367  350936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:32:34.019811  350936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:32:34.032242  350936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:32:34.032321  350936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
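The config check above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not contain it (here the files simply do not exist yet), so kubeadm can regenerate them on init. A compact sketch of that cleanup, assuming the standard four files:

    # Remove stale kubeconfigs that do not point at the expected control-plane endpoint
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done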
	I1205 21:32:34.042814  350936 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:32:34.325881  350936 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:34:32.663025  350936 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 21:34:32.663132  350936 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1205 21:34:32.664423  350936 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 21:34:32.664493  350936 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:34:32.664589  350936 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:34:32.664725  350936 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:34:32.664860  350936 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 21:34:32.664950  350936 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:34:32.666411  350936 out.go:235]   - Generating certificates and keys ...
	I1205 21:34:32.666511  350936 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:34:32.666587  350936 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:34:32.666696  350936 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1205 21:34:32.666782  350936 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1205 21:34:32.666876  350936 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1205 21:34:32.666948  350936 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1205 21:34:32.667024  350936 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1205 21:34:32.667218  350936 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-601806] and IPs [192.168.61.123 127.0.0.1 ::1]
	I1205 21:34:32.667289  350936 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1205 21:34:32.667483  350936 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-601806] and IPs [192.168.61.123 127.0.0.1 ::1]
	I1205 21:34:32.667596  350936 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1205 21:34:32.667666  350936 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1205 21:34:32.667736  350936 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1205 21:34:32.667823  350936 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:34:32.667898  350936 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:34:32.667978  350936 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:34:32.668069  350936 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:34:32.668160  350936 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:34:32.668294  350936 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:34:32.668426  350936 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:34:32.668475  350936 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:34:32.668535  350936 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:34:32.670247  350936 out.go:235]   - Booting up control plane ...
	I1205 21:34:32.670356  350936 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:34:32.670422  350936 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:34:32.670498  350936 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:34:32.670574  350936 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:34:32.670714  350936 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 21:34:32.670756  350936 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 21:34:32.670818  350936 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:34:32.670993  350936 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:34:32.671059  350936 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:34:32.671216  350936 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:34:32.671299  350936 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:34:32.671481  350936 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:34:32.671545  350936 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:34:32.671702  350936 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:34:32.671763  350936 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:34:32.671926  350936 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:34:32.671939  350936 kubeadm.go:310] 
	I1205 21:34:32.671976  350936 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 21:34:32.672017  350936 kubeadm.go:310] 		timed out waiting for the condition
	I1205 21:34:32.672024  350936 kubeadm.go:310] 
	I1205 21:34:32.672059  350936 kubeadm.go:310] 	This error is likely caused by:
	I1205 21:34:32.672089  350936 kubeadm.go:310] 		- The kubelet is not running
	I1205 21:34:32.672183  350936 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 21:34:32.672193  350936 kubeadm.go:310] 
	I1205 21:34:32.672278  350936 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 21:34:32.672317  350936 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 21:34:32.672348  350936 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 21:34:32.672355  350936 kubeadm.go:310] 
	I1205 21:34:32.672475  350936 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 21:34:32.672597  350936 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 21:34:32.672608  350936 kubeadm.go:310] 
	I1205 21:34:32.672708  350936 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 21:34:32.672784  350936 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 21:34:32.672859  350936 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 21:34:32.672923  350936 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 21:34:32.672940  350936 kubeadm.go:310] 
	W1205 21:34:32.673080  350936 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-601806] and IPs [192.168.61.123 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-601806] and IPs [192.168.61.123 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
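The repeated [kubelet-check] failures show kubeadm polling the kubelet's local healthz endpoint on port 10248 and never getting an answer, and the preflight warning notes the kubelet unit is not enabled. Assuming SSH access to the node, the probe and the first-pass checks can be reproduced by hand:

    # Reproduce kubeadm's kubelet health probe (the kubelet serves healthz on 127.0.0.1:10248)
    curl -sSL http://localhost:10248/healthz
    # Check and, per the preflight warning, enable the kubelet unit, then look at why it is failing
    sudo systemctl status kubelet
    sudo systemctl enable kubelet.service
    sudo journalctl -xeu kubelet | tail -n 100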
	
	I1205 21:34:32.673135  350936 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:34:34.512628  350936 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.839457051s)
	I1205 21:34:34.512733  350936 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:34:34.527093  350936 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:34:34.537133  350936 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:34:34.537156  350936 kubeadm.go:157] found existing configuration files:
	
	I1205 21:34:34.537207  350936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:34:34.546640  350936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:34:34.546720  350936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:34:34.557081  350936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:34:34.566748  350936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:34:34.566819  350936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:34:34.577266  350936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:34:34.587472  350936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:34:34.587530  350936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:34:34.597264  350936 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:34:34.606695  350936 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:34:34.606757  350936 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:34:34.616411  350936 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:34:34.688392  350936 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 21:34:34.688463  350936 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:34:34.827256  350936 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:34:34.827430  350936 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:34:34.827590  350936 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 21:34:34.996263  350936 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:34:34.998089  350936 out.go:235]   - Generating certificates and keys ...
	I1205 21:34:34.998205  350936 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:34:34.998270  350936 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:34:34.998381  350936 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:34:34.998503  350936 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:34:34.998610  350936 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:34:34.998695  350936 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:34:34.998776  350936 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:34:34.998856  350936 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:34:34.998952  350936 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:34:34.999028  350936 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:34:34.999071  350936 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:34:34.999138  350936 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:34:35.212586  350936 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:34:35.311871  350936 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:34:35.435745  350936 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:34:35.568324  350936 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:34:35.582743  350936 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:34:35.583823  350936 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:34:35.583909  350936 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:34:35.723260  350936 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:34:35.726004  350936 out.go:235]   - Booting up control plane ...
	I1205 21:34:35.726114  350936 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:34:35.733306  350936 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:34:35.733837  350936 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:34:35.734727  350936 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:34:35.737024  350936 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 21:35:15.739491  350936 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 21:35:15.739927  350936 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:35:15.740086  350936 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:35:20.740768  350936 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:35:20.740952  350936 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:35:30.741351  350936 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:35:30.741615  350936 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:35:50.740934  350936 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:35:50.741200  350936 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:36:30.741125  350936 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:36:30.741344  350936 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:36:30.741397  350936 kubeadm.go:310] 
	I1205 21:36:30.741479  350936 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 21:36:30.741522  350936 kubeadm.go:310] 		timed out waiting for the condition
	I1205 21:36:30.741526  350936 kubeadm.go:310] 
	I1205 21:36:30.741555  350936 kubeadm.go:310] 	This error is likely caused by:
	I1205 21:36:30.741600  350936 kubeadm.go:310] 		- The kubelet is not running
	I1205 21:36:30.741748  350936 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 21:36:30.741762  350936 kubeadm.go:310] 
	I1205 21:36:30.741895  350936 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 21:36:30.741976  350936 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 21:36:30.742024  350936 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 21:36:30.742034  350936 kubeadm.go:310] 
	I1205 21:36:30.742183  350936 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 21:36:30.742317  350936 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 21:36:30.742329  350936 kubeadm.go:310] 
	I1205 21:36:30.742477  350936 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 21:36:30.742558  350936 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 21:36:30.742623  350936 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 21:36:30.742717  350936 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 21:36:30.742762  350936 kubeadm.go:310] 
	I1205 21:36:30.742931  350936 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:36:30.743096  350936 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 21:36:30.743253  350936 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1205 21:36:30.743274  350936 kubeadm.go:394] duration metric: took 3m56.868415556s to StartCluster
	I1205 21:36:30.743340  350936 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:36:30.743398  350936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:36:30.781130  350936 cri.go:89] found id: ""
	I1205 21:36:30.781163  350936 logs.go:282] 0 containers: []
	W1205 21:36:30.781173  350936 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:36:30.781179  350936 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:36:30.781239  350936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:36:30.815577  350936 cri.go:89] found id: ""
	I1205 21:36:30.815609  350936 logs.go:282] 0 containers: []
	W1205 21:36:30.815618  350936 logs.go:284] No container was found matching "etcd"
	I1205 21:36:30.815641  350936 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:36:30.815712  350936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:36:30.850723  350936 cri.go:89] found id: ""
	I1205 21:36:30.850754  350936 logs.go:282] 0 containers: []
	W1205 21:36:30.850763  350936 logs.go:284] No container was found matching "coredns"
	I1205 21:36:30.850770  350936 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:36:30.850836  350936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:36:30.883699  350936 cri.go:89] found id: ""
	I1205 21:36:30.883733  350936 logs.go:282] 0 containers: []
	W1205 21:36:30.883742  350936 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:36:30.883750  350936 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:36:30.883806  350936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:36:30.915121  350936 cri.go:89] found id: ""
	I1205 21:36:30.915150  350936 logs.go:282] 0 containers: []
	W1205 21:36:30.915159  350936 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:36:30.915166  350936 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:36:30.915227  350936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:36:30.953026  350936 cri.go:89] found id: ""
	I1205 21:36:30.953058  350936 logs.go:282] 0 containers: []
	W1205 21:36:30.953067  350936 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:36:30.953073  350936 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:36:30.953130  350936 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:36:30.986448  350936 cri.go:89] found id: ""
	I1205 21:36:30.986477  350936 logs.go:282] 0 containers: []
	W1205 21:36:30.986486  350936 logs.go:284] No container was found matching "kindnet"
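After the failed init, minikube sweeps the CRI for every control-plane component by name and finds nothing, confirming that no static pod container was ever created. The same sweep can be reproduced in one loop, assuming crictl is configured for the CRI-O socket as above:

    # List containers (running or exited) for each expected component; empty output means none exist
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      echo "== $c =="
      sudo crictl ps -a --quiet --name="$c"
    done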
	I1205 21:36:30.986498  350936 logs.go:123] Gathering logs for kubelet ...
	I1205 21:36:30.986520  350936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:36:31.039478  350936 logs.go:123] Gathering logs for dmesg ...
	I1205 21:36:31.039532  350936 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:36:31.053559  350936 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:36:31.053592  350936 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:36:31.199861  350936 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:36:31.199897  350936 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:36:31.199914  350936 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:36:31.315704  350936 logs.go:123] Gathering logs for container status ...
	I1205 21:36:31.315757  350936 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
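The describe-nodes step above fails with a connection refused on localhost:8443 because the kube-apiserver static pod never started; with the kubelet down there is nothing to launch it. Two quick checks that make this visible, assuming standard tooling (ss and crictl) on the node:

    # Nothing should be listening on the apiserver port, and no apiserver container should exist
    sudo ss -tlnp | grep 8443 || echo "no listener on :8443"
    sudo crictl ps -a | grep kube-apiserver || echo "no kube-apiserver container"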
	W1205 21:36:31.358378  350936 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
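The kubeadm error text singles out two likely causes: the kubelet not running, or a node misconfiguration such as required cgroups being disabled. On a CRI-O node, a cgroup-driver mismatch between the kubelet and CRI-O is a common variant of the latter; the paths below are the usual defaults and may differ on this image:

    # Compare the kubelet's cgroup driver with CRI-O's cgroup manager (they should agree, e.g. systemd)
    grep -i cgroupDriver /var/lib/kubelet/config.yaml
    sudo grep -ri cgroup_manager /etc/crio/ 2>/dev/null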
	W1205 21:36:31.358457  350936 out.go:270] * 
	W1205 21:36:31.358525  350936 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 21:36:31.358545  350936 out.go:270] * 
	* 
	W1205 21:36:31.359473  350936 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 21:36:31.363296  350936 out.go:201] 
	W1205 21:36:31.365142  350936 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 21:36:31.365185  350936 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 21:36:31.365209  350936 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 21:36:31.367824  350936 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-601806 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-601806 -n old-k8s-version-601806
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-601806 -n old-k8s-version-601806: exit status 6 (241.159176ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 21:36:31.664464  357490 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-601806" does not appear in /home/jenkins/minikube-integration/20053-293485/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-601806" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (289.55s)
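Note on this failure: the kubelet on old-k8s-version-601806 never answered its healthz probe, so kubeadm timed out in the wait-control-plane phase and minikube exited with K8S_KUBELET_NOT_RUNNING. The lines below are a minimal sketch that only collects the follow-ups kubeadm and minikube themselves suggest in the log above; the profile name and flags are taken from this run, and the cgroup-driver retry is an assumption based on the suggestion and issue #4172, not a confirmed fix.

	# Retry with the cgroup driver override minikube suggests above (assumption: the kubelet
	# failure is the cgroup-driver mismatch referenced in minikube issue #4172).
	minikube start -p old-k8s-version-601806 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

	# Inside the VM (minikube ssh -p old-k8s-version-601806): inspect the kubelet, as kubeadm advises.
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet

	# List control-plane containers under cri-o, as kubeadm advises, then read the failing one's logs.
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

	# The post-mortem warns that kubectl points at a stale minikube-vm; refresh the context it names.
	minikube update-context -p old-k8s-version-601806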

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-425614 --alsologtostderr -v=3
E1205 21:33:46.804953  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:33:46.811482  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:33:46.823049  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:33:46.844623  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:33:46.886121  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:33:46.967995  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:33:47.129832  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:33:47.451434  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:33:48.093350  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:33:49.374915  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:33:51.937267  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:33:57.059041  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:34:07.300403  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-425614 --alsologtostderr -v=3: exit status 82 (2m0.536695124s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-425614"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 21:33:35.921030  356272 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:33:35.921211  356272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:33:35.921224  356272 out.go:358] Setting ErrFile to fd 2...
	I1205 21:33:35.921232  356272 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:33:35.921551  356272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 21:33:35.921867  356272 out.go:352] Setting JSON to false
	I1205 21:33:35.922004  356272 mustload.go:65] Loading cluster: embed-certs-425614
	I1205 21:33:35.922590  356272 config.go:182] Loaded profile config "embed-certs-425614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:33:35.922703  356272 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/config.json ...
	I1205 21:33:35.922931  356272 mustload.go:65] Loading cluster: embed-certs-425614
	I1205 21:33:35.923088  356272 config.go:182] Loaded profile config "embed-certs-425614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:33:35.923122  356272 stop.go:39] StopHost: embed-certs-425614
	I1205 21:33:35.923736  356272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:33:35.923807  356272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:33:35.939210  356272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35871
	I1205 21:33:35.939764  356272 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:33:35.940373  356272 main.go:141] libmachine: Using API Version  1
	I1205 21:33:35.940402  356272 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:33:35.940848  356272 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:33:35.943337  356272 out.go:177] * Stopping node "embed-certs-425614"  ...
	I1205 21:33:35.944695  356272 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1205 21:33:35.944725  356272 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:33:35.944968  356272 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1205 21:33:35.944999  356272 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:33:35.948073  356272 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:33:35.948520  356272 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:33:35.948546  356272 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:33:35.948749  356272 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:33:35.948958  356272 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:33:35.949118  356272 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:33:35.949260  356272 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:33:36.059581  356272 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1205 21:33:36.120116  356272 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1205 21:33:36.180929  356272 main.go:141] libmachine: Stopping "embed-certs-425614"...
	I1205 21:33:36.180964  356272 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:33:36.182796  356272 main.go:141] libmachine: (embed-certs-425614) Calling .Stop
	I1205 21:33:36.186619  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 0/120
	I1205 21:33:37.188672  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 1/120
	I1205 21:33:38.190237  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 2/120
	I1205 21:33:39.191835  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 3/120
	I1205 21:33:40.193459  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 4/120
	I1205 21:33:41.195477  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 5/120
	I1205 21:33:42.196764  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 6/120
	I1205 21:33:43.198674  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 7/120
	I1205 21:33:44.200011  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 8/120
	I1205 21:33:45.201475  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 9/120
	I1205 21:33:46.202818  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 10/120
	I1205 21:33:47.204387  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 11/120
	I1205 21:33:48.206271  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 12/120
	I1205 21:33:49.208401  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 13/120
	I1205 21:33:50.210234  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 14/120
	I1205 21:33:51.212451  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 15/120
	I1205 21:33:52.214151  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 16/120
	I1205 21:33:53.216890  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 17/120
	I1205 21:33:54.218433  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 18/120
	I1205 21:33:55.220711  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 19/120
	I1205 21:33:56.222677  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 20/120
	I1205 21:33:57.224797  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 21/120
	I1205 21:33:58.226589  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 22/120
	I1205 21:33:59.228604  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 23/120
	I1205 21:34:00.230318  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 24/120
	I1205 21:34:01.232435  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 25/120
	I1205 21:34:02.234050  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 26/120
	I1205 21:34:03.235445  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 27/120
	I1205 21:34:04.236937  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 28/120
	I1205 21:34:05.238706  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 29/120
	I1205 21:34:06.241151  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 30/120
	I1205 21:34:07.242964  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 31/120
	I1205 21:34:08.244858  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 32/120
	I1205 21:34:09.246671  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 33/120
	I1205 21:34:10.248482  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 34/120
	I1205 21:34:11.250596  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 35/120
	I1205 21:34:12.252585  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 36/120
	I1205 21:34:13.254121  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 37/120
	I1205 21:34:14.256494  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 38/120
	I1205 21:34:15.258385  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 39/120
	I1205 21:34:16.260404  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 40/120
	I1205 21:34:17.262449  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 41/120
	I1205 21:34:18.263929  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 42/120
	I1205 21:34:19.265390  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 43/120
	I1205 21:34:20.267185  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 44/120
	I1205 21:34:21.269098  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 45/120
	I1205 21:34:22.270639  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 46/120
	I1205 21:34:23.272403  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 47/120
	I1205 21:34:24.273854  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 48/120
	I1205 21:34:25.275519  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 49/120
	I1205 21:34:26.277895  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 50/120
	I1205 21:34:27.279421  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 51/120
	I1205 21:34:28.280981  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 52/120
	I1205 21:34:29.282465  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 53/120
	I1205 21:34:30.283936  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 54/120
	I1205 21:34:31.286420  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 55/120
	I1205 21:34:32.287891  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 56/120
	I1205 21:34:33.289480  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 57/120
	I1205 21:34:34.290952  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 58/120
	I1205 21:34:35.292706  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 59/120
	I1205 21:34:36.294095  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 60/120
	I1205 21:34:37.296645  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 61/120
	I1205 21:34:38.297974  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 62/120
	I1205 21:34:39.299460  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 63/120
	I1205 21:34:40.300907  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 64/120
	I1205 21:34:41.302874  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 65/120
	I1205 21:34:42.304148  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 66/120
	I1205 21:34:43.305561  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 67/120
	I1205 21:34:44.307029  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 68/120
	I1205 21:34:45.308793  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 69/120
	I1205 21:34:46.310378  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 70/120
	I1205 21:34:47.311863  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 71/120
	I1205 21:34:48.313355  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 72/120
	I1205 21:34:49.314904  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 73/120
	I1205 21:34:50.316282  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 74/120
	I1205 21:34:51.318456  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 75/120
	I1205 21:34:52.319734  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 76/120
	I1205 21:34:53.321251  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 77/120
	I1205 21:34:54.322600  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 78/120
	I1205 21:34:55.324330  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 79/120
	I1205 21:34:56.326515  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 80/120
	I1205 21:34:57.328115  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 81/120
	I1205 21:34:58.329701  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 82/120
	I1205 21:34:59.331134  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 83/120
	I1205 21:35:00.332544  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 84/120
	I1205 21:35:01.334705  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 85/120
	I1205 21:35:02.336202  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 86/120
	I1205 21:35:03.337997  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 87/120
	I1205 21:35:04.339402  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 88/120
	I1205 21:35:05.340879  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 89/120
	I1205 21:35:06.342567  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 90/120
	I1205 21:35:07.344073  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 91/120
	I1205 21:35:08.345543  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 92/120
	I1205 21:35:09.347009  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 93/120
	I1205 21:35:10.348525  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 94/120
	I1205 21:35:11.350965  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 95/120
	I1205 21:35:12.352472  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 96/120
	I1205 21:35:13.354164  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 97/120
	I1205 21:35:14.355601  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 98/120
	I1205 21:35:15.357100  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 99/120
	I1205 21:35:16.359420  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 100/120
	I1205 21:35:17.360881  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 101/120
	I1205 21:35:18.362329  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 102/120
	I1205 21:35:19.363902  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 103/120
	I1205 21:35:20.365378  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 104/120
	I1205 21:35:21.367871  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 105/120
	I1205 21:35:22.369323  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 106/120
	I1205 21:35:23.371004  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 107/120
	I1205 21:35:24.372427  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 108/120
	I1205 21:35:25.374085  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 109/120
	I1205 21:35:26.375957  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 110/120
	I1205 21:35:27.377568  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 111/120
	I1205 21:35:28.379006  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 112/120
	I1205 21:35:29.380704  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 113/120
	I1205 21:35:30.382244  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 114/120
	I1205 21:35:31.384678  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 115/120
	I1205 21:35:32.386196  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 116/120
	I1205 21:35:33.387634  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 117/120
	I1205 21:35:34.389121  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 118/120
	I1205 21:35:35.390934  356272 main.go:141] libmachine: (embed-certs-425614) Waiting for machine to stop 119/120
	I1205 21:35:36.392268  356272 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1205 21:35:36.392366  356272 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 21:35:36.394133  356272 out.go:201] 
	W1205 21:35:36.395495  356272 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1205 21:35:36.395526  356272 out.go:270] * 
	* 
	W1205 21:35:36.398960  356272 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 21:35:36.400385  356272 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-425614 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-425614 -n embed-certs-425614
E1205 21:35:37.107167  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:35:47.608404  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:35:47.615658  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:35:47.627097  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:35:47.648597  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:35:47.690161  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:35:47.771978  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:35:47.933859  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:35:48.255751  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:35:48.897957  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:35:50.180245  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:35:52.742101  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-425614 -n embed-certs-425614: exit status 3 (18.45719803s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 21:35:54.858377  357086 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.8:22: connect: no route to host
	E1205 21:35:54.858401  357086 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.8:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-425614" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.00s)
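Note on this failure: the stop command waited the full 120 polls while the embed-certs-425614 VM stayed "Running", so minikube exited with GUEST_STOP_TIMEOUT (exit status 82), and the follow-up status check could not reach 192.168.72.8 over SSH. A minimal sketch of gathering the diagnostics minikube asks for in the log above (profile name and temp log path are copied from this run):

	# Capture the minikube logs the failure box asks to attach to a GitHub issue.
	minikube logs --file=logs.txt -p embed-certs-425614

	# The stop command also writes its own trace; this exact path is printed in the log above.
	cat /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log

	# Re-check the host state the test saw afterwards (it reported "Error" / no route to host).
	minikube status --format={{.Host}} -p embed-certs-425614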

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.15s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-500648 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-500648 --alsologtostderr -v=3: exit status 82 (2m0.504913477s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-500648"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 21:34:21.330202  356693 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:34:21.330345  356693 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:34:21.330357  356693 out.go:358] Setting ErrFile to fd 2...
	I1205 21:34:21.330364  356693 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:34:21.330590  356693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 21:34:21.330874  356693 out.go:352] Setting JSON to false
	I1205 21:34:21.330978  356693 mustload.go:65] Loading cluster: no-preload-500648
	I1205 21:34:21.331389  356693 config.go:182] Loaded profile config "no-preload-500648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:34:21.331473  356693 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/config.json ...
	I1205 21:34:21.331668  356693 mustload.go:65] Loading cluster: no-preload-500648
	I1205 21:34:21.331795  356693 config.go:182] Loaded profile config "no-preload-500648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:34:21.331831  356693 stop.go:39] StopHost: no-preload-500648
	I1205 21:34:21.332230  356693 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:34:21.332307  356693 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:34:21.349000  356693 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36051
	I1205 21:34:21.349549  356693 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:34:21.350211  356693 main.go:141] libmachine: Using API Version  1
	I1205 21:34:21.350240  356693 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:34:21.350648  356693 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:34:21.353080  356693 out.go:177] * Stopping node "no-preload-500648"  ...
	I1205 21:34:21.354633  356693 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1205 21:34:21.354664  356693 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:34:21.354911  356693 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1205 21:34:21.354959  356693 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:34:21.358113  356693 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:34:21.358589  356693 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:33:12 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:34:21.358622  356693 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:34:21.358757  356693 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:34:21.358972  356693 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:34:21.359158  356693 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:34:21.359351  356693 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:34:21.449162  356693 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1205 21:34:21.502326  356693 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1205 21:34:21.556213  356693 main.go:141] libmachine: Stopping "no-preload-500648"...
	I1205 21:34:21.556250  356693 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:34:21.558005  356693 main.go:141] libmachine: (no-preload-500648) Calling .Stop
	I1205 21:34:21.562376  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 0/120
	I1205 21:34:22.564238  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 1/120
	I1205 21:34:23.565961  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 2/120
	I1205 21:34:24.567507  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 3/120
	I1205 21:34:25.569043  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 4/120
	I1205 21:34:26.571441  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 5/120
	I1205 21:34:27.573116  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 6/120
	I1205 21:34:28.574839  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 7/120
	I1205 21:34:29.576192  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 8/120
	I1205 21:34:30.577721  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 9/120
	I1205 21:34:31.580372  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 10/120
	I1205 21:34:32.581824  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 11/120
	I1205 21:34:33.583565  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 12/120
	I1205 21:34:34.585110  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 13/120
	I1205 21:34:35.586371  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 14/120
	I1205 21:34:36.588586  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 15/120
	I1205 21:34:37.590176  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 16/120
	I1205 21:34:38.592600  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 17/120
	I1205 21:34:39.594130  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 18/120
	I1205 21:34:40.595612  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 19/120
	I1205 21:34:41.597775  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 20/120
	I1205 21:34:42.599400  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 21/120
	I1205 21:34:43.601529  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 22/120
	I1205 21:34:44.603114  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 23/120
	I1205 21:34:45.604755  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 24/120
	I1205 21:34:46.606978  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 25/120
	I1205 21:34:47.608780  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 26/120
	I1205 21:34:48.610291  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 27/120
	I1205 21:34:49.611679  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 28/120
	I1205 21:34:50.613212  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 29/120
	I1205 21:34:51.615771  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 30/120
	I1205 21:34:52.617242  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 31/120
	I1205 21:34:53.618853  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 32/120
	I1205 21:34:54.620419  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 33/120
	I1205 21:34:55.622424  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 34/120
	I1205 21:34:56.624574  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 35/120
	I1205 21:34:57.626014  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 36/120
	I1205 21:34:58.627773  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 37/120
	I1205 21:34:59.629271  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 38/120
	I1205 21:35:00.630903  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 39/120
	I1205 21:35:01.633222  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 40/120
	I1205 21:35:02.634581  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 41/120
	I1205 21:35:03.636068  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 42/120
	I1205 21:35:04.637766  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 43/120
	I1205 21:35:05.639452  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 44/120
	I1205 21:35:06.641854  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 45/120
	I1205 21:35:07.643438  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 46/120
	I1205 21:35:08.645154  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 47/120
	I1205 21:35:09.646578  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 48/120
	I1205 21:35:10.648177  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 49/120
	I1205 21:35:11.650463  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 50/120
	I1205 21:35:12.652015  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 51/120
	I1205 21:35:13.653429  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 52/120
	I1205 21:35:14.655006  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 53/120
	I1205 21:35:15.656630  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 54/120
	I1205 21:35:16.658705  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 55/120
	I1205 21:35:17.660155  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 56/120
	I1205 21:35:18.662028  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 57/120
	I1205 21:35:19.663483  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 58/120
	I1205 21:35:20.664997  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 59/120
	I1205 21:35:21.667466  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 60/120
	I1205 21:35:22.669010  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 61/120
	I1205 21:35:23.670580  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 62/120
	I1205 21:35:24.672132  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 63/120
	I1205 21:35:25.673823  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 64/120
	I1205 21:35:26.676115  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 65/120
	I1205 21:35:27.677764  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 66/120
	I1205 21:35:28.679391  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 67/120
	I1205 21:35:29.681127  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 68/120
	I1205 21:35:30.682637  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 69/120
	I1205 21:35:31.685101  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 70/120
	I1205 21:35:32.686674  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 71/120
	I1205 21:35:33.688372  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 72/120
	I1205 21:35:34.690291  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 73/120
	I1205 21:35:35.691910  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 74/120
	I1205 21:35:36.694420  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 75/120
	I1205 21:35:37.696024  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 76/120
	I1205 21:35:38.697837  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 77/120
	I1205 21:35:39.699371  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 78/120
	I1205 21:35:40.700996  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 79/120
	I1205 21:35:41.702568  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 80/120
	I1205 21:35:42.704075  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 81/120
	I1205 21:35:43.705674  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 82/120
	I1205 21:35:44.707270  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 83/120
	I1205 21:35:45.708992  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 84/120
	I1205 21:35:46.711140  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 85/120
	I1205 21:35:47.712583  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 86/120
	I1205 21:35:48.714325  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 87/120
	I1205 21:35:49.715846  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 88/120
	I1205 21:35:50.717743  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 89/120
	I1205 21:35:51.720021  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 90/120
	I1205 21:35:52.721668  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 91/120
	I1205 21:35:53.723323  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 92/120
	I1205 21:35:54.725062  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 93/120
	I1205 21:35:55.726891  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 94/120
	I1205 21:35:56.728432  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 95/120
	I1205 21:35:57.730048  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 96/120
	I1205 21:35:58.731979  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 97/120
	I1205 21:35:59.733675  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 98/120
	I1205 21:36:00.735315  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 99/120
	I1205 21:36:01.737231  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 100/120
	I1205 21:36:02.738861  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 101/120
	I1205 21:36:03.740498  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 102/120
	I1205 21:36:04.742360  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 103/120
	I1205 21:36:05.744687  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 104/120
	I1205 21:36:06.746710  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 105/120
	I1205 21:36:07.748659  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 106/120
	I1205 21:36:08.750221  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 107/120
	I1205 21:36:09.752828  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 108/120
	I1205 21:36:10.754746  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 109/120
	I1205 21:36:11.756406  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 110/120
	I1205 21:36:12.758185  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 111/120
	I1205 21:36:13.760007  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 112/120
	I1205 21:36:14.761627  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 113/120
	I1205 21:36:15.763462  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 114/120
	I1205 21:36:16.765941  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 115/120
	I1205 21:36:17.767633  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 116/120
	I1205 21:36:18.769305  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 117/120
	I1205 21:36:19.770985  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 118/120
	I1205 21:36:20.772966  356693 main.go:141] libmachine: (no-preload-500648) Waiting for machine to stop 119/120
	I1205 21:36:21.774614  356693 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1205 21:36:21.774687  356693 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 21:36:21.776779  356693 out.go:201] 
	W1205 21:36:21.778342  356693 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1205 21:36:21.778367  356693 out.go:270] * 
	* 
	W1205 21:36:21.781730  356693 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 21:36:21.783548  356693 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-500648 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-500648 -n no-preload-500648
E1205 21:36:28.588090  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-500648 -n no-preload-500648: exit status 3 (18.641219841s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 21:36:40.426349  357396 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.141:22: connect: no route to host
	E1205 21:36:40.426385  357396 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.141:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-500648" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (138.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-751353 --alsologtostderr -v=3
E1205 21:34:35.663811  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:34:56.145219  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:35:08.743630  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-751353 --alsologtostderr -v=3: exit status 82 (2m0.513677338s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-751353"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 21:34:28.912525  356779 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:34:28.912802  356779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:34:28.912812  356779 out.go:358] Setting ErrFile to fd 2...
	I1205 21:34:28.912816  356779 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:34:28.913046  356779 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 21:34:28.913324  356779 out.go:352] Setting JSON to false
	I1205 21:34:28.913423  356779 mustload.go:65] Loading cluster: default-k8s-diff-port-751353
	I1205 21:34:28.913848  356779 config.go:182] Loaded profile config "default-k8s-diff-port-751353": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:34:28.913958  356779 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/config.json ...
	I1205 21:34:28.914150  356779 mustload.go:65] Loading cluster: default-k8s-diff-port-751353
	I1205 21:34:28.914281  356779 config.go:182] Loaded profile config "default-k8s-diff-port-751353": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:34:28.914320  356779 stop.go:39] StopHost: default-k8s-diff-port-751353
	I1205 21:34:28.914813  356779 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:34:28.914865  356779 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:34:28.930067  356779 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41387
	I1205 21:34:28.930652  356779 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:34:28.931318  356779 main.go:141] libmachine: Using API Version  1
	I1205 21:34:28.931342  356779 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:34:28.931672  356779 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:34:28.934032  356779 out.go:177] * Stopping node "default-k8s-diff-port-751353"  ...
	I1205 21:34:28.935151  356779 machine.go:156] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I1205 21:34:28.935189  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:34:28.935388  356779 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I1205 21:34:28.935412  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:34:28.938522  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:34:28.938908  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:33:37 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:34:28.938932  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:34:28.939083  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:34:28.939252  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:34:28.939434  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:34:28.939569  356779 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:34:29.037828  356779 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I1205 21:34:29.096933  356779 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I1205 21:34:29.157845  356779 main.go:141] libmachine: Stopping "default-k8s-diff-port-751353"...
	I1205 21:34:29.157879  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:34:29.159665  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Stop
	I1205 21:34:29.163916  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 0/120
	I1205 21:34:30.166364  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 1/120
	I1205 21:34:31.167837  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 2/120
	I1205 21:34:32.169416  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 3/120
	I1205 21:34:33.171054  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 4/120
	I1205 21:34:34.173584  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 5/120
	I1205 21:34:35.175001  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 6/120
	I1205 21:34:36.176528  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 7/120
	I1205 21:34:37.177947  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 8/120
	I1205 21:34:38.179306  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 9/120
	I1205 21:34:39.180804  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 10/120
	I1205 21:34:40.182573  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 11/120
	I1205 21:34:41.184604  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 12/120
	I1205 21:34:42.186125  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 13/120
	I1205 21:34:43.187552  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 14/120
	I1205 21:34:44.189882  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 15/120
	I1205 21:34:45.191320  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 16/120
	I1205 21:34:46.192891  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 17/120
	I1205 21:34:47.194295  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 18/120
	I1205 21:34:48.195930  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 19/120
	I1205 21:34:49.197337  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 20/120
	I1205 21:34:50.198925  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 21/120
	I1205 21:34:51.200320  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 22/120
	I1205 21:34:52.201838  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 23/120
	I1205 21:34:53.203250  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 24/120
	I1205 21:34:54.205533  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 25/120
	I1205 21:34:55.207101  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 26/120
	I1205 21:34:56.208514  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 27/120
	I1205 21:34:57.209930  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 28/120
	I1205 21:34:58.211434  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 29/120
	I1205 21:34:59.212939  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 30/120
	I1205 21:35:00.214418  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 31/120
	I1205 21:35:01.215883  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 32/120
	I1205 21:35:02.217523  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 33/120
	I1205 21:35:03.218987  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 34/120
	I1205 21:35:04.221603  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 35/120
	I1205 21:35:05.223279  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 36/120
	I1205 21:35:06.225060  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 37/120
	I1205 21:35:07.226683  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 38/120
	I1205 21:35:08.228072  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 39/120
	I1205 21:35:09.229652  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 40/120
	I1205 21:35:10.231320  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 41/120
	I1205 21:35:11.232799  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 42/120
	I1205 21:35:12.234266  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 43/120
	I1205 21:35:13.235822  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 44/120
	I1205 21:35:14.238078  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 45/120
	I1205 21:35:15.239641  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 46/120
	I1205 21:35:16.241247  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 47/120
	I1205 21:35:17.242973  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 48/120
	I1205 21:35:18.244517  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 49/120
	I1205 21:35:19.246315  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 50/120
	I1205 21:35:20.247912  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 51/120
	I1205 21:35:21.249529  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 52/120
	I1205 21:35:22.251109  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 53/120
	I1205 21:35:23.252854  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 54/120
	I1205 21:35:24.255319  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 55/120
	I1205 21:35:25.256773  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 56/120
	I1205 21:35:26.258396  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 57/120
	I1205 21:35:27.259729  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 58/120
	I1205 21:35:28.261309  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 59/120
	I1205 21:35:29.262955  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 60/120
	I1205 21:35:30.264507  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 61/120
	I1205 21:35:31.266068  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 62/120
	I1205 21:35:32.267703  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 63/120
	I1205 21:35:33.269189  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 64/120
	I1205 21:35:34.271652  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 65/120
	I1205 21:35:35.273284  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 66/120
	I1205 21:35:36.274989  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 67/120
	I1205 21:35:37.276462  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 68/120
	I1205 21:35:38.278315  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 69/120
	I1205 21:35:39.279981  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 70/120
	I1205 21:35:40.281420  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 71/120
	I1205 21:35:41.283047  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 72/120
	I1205 21:35:42.284535  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 73/120
	I1205 21:35:43.286100  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 74/120
	I1205 21:35:44.288227  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 75/120
	I1205 21:35:45.289758  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 76/120
	I1205 21:35:46.291132  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 77/120
	I1205 21:35:47.292649  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 78/120
	I1205 21:35:48.294200  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 79/120
	I1205 21:35:49.296589  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 80/120
	I1205 21:35:50.298122  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 81/120
	I1205 21:35:51.299665  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 82/120
	I1205 21:35:52.301173  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 83/120
	I1205 21:35:53.302688  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 84/120
	I1205 21:35:54.304938  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 85/120
	I1205 21:35:55.306503  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 86/120
	I1205 21:35:56.308190  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 87/120
	I1205 21:35:57.309742  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 88/120
	I1205 21:35:58.311207  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 89/120
	I1205 21:35:59.312800  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 90/120
	I1205 21:36:00.314691  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 91/120
	I1205 21:36:01.316348  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 92/120
	I1205 21:36:02.317846  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 93/120
	I1205 21:36:03.319399  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 94/120
	I1205 21:36:04.321742  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 95/120
	I1205 21:36:05.323589  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 96/120
	I1205 21:36:06.325273  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 97/120
	I1205 21:36:07.327013  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 98/120
	I1205 21:36:08.328543  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 99/120
	I1205 21:36:09.330436  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 100/120
	I1205 21:36:10.332131  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 101/120
	I1205 21:36:11.333880  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 102/120
	I1205 21:36:12.335368  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 103/120
	I1205 21:36:13.337234  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 104/120
	I1205 21:36:14.339464  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 105/120
	I1205 21:36:15.341568  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 106/120
	I1205 21:36:16.343200  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 107/120
	I1205 21:36:17.344837  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 108/120
	I1205 21:36:18.346406  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 109/120
	I1205 21:36:19.348031  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 110/120
	I1205 21:36:20.349786  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 111/120
	I1205 21:36:21.351301  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 112/120
	I1205 21:36:22.353117  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 113/120
	I1205 21:36:23.355002  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 114/120
	I1205 21:36:24.357514  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 115/120
	I1205 21:36:25.359333  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 116/120
	I1205 21:36:26.361009  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 117/120
	I1205 21:36:27.362779  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 118/120
	I1205 21:36:28.364457  356779 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for machine to stop 119/120
	I1205 21:36:29.365978  356779 stop.go:66] stop err: unable to stop vm, current state "Running"
	W1205 21:36:29.366061  356779 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I1205 21:36:29.368090  356779 out.go:201] 
	W1205 21:36:29.369703  356779 out.go:270] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W1205 21:36:29.369721  356779 out.go:270] * 
	* 
	W1205 21:36:29.372978  356779 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 21:36:29.374347  356779 out.go:201] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-751353 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-751353 -n default-k8s-diff-port-751353
E1205 21:36:29.761274  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:29.767762  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:29.779311  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:29.800832  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:29.842319  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:29.924084  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:30.085701  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:30.407571  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:30.665625  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:31.049974  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:31.069970  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-751353 -n default-k8s-diff-port-751353: exit status 3 (18.474012566s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 21:36:47.850298  357442 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	E1205 21:36:47.850324  357442 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-751353" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (138.99s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-425614 -n embed-certs-425614
E1205 21:35:57.863886  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-425614 -n embed-certs-425614: exit status 3 (3.199780275s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 21:35:58.058318  357168 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.8:22: connect: no route to host
	E1205 21:35:58.058340  357168 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.8:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-425614 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-425614 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.155409555s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.8:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-425614 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-425614 -n embed-certs-425614
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-425614 -n embed-certs-425614: exit status 3 (3.060079827s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 21:36:07.274424  357250 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.8:22: connect: no route to host
	E1205 21:36:07.274448  357250 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.72.8:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-425614" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-601806 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-601806 create -f testdata/busybox.yaml: exit status 1 (47.211943ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-601806" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-601806 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-601806 -n old-k8s-version-601806
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-601806 -n old-k8s-version-601806: exit status 6 (243.140973ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 21:36:31.954717  357530 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-601806" does not appear in /home/jenkins/minikube-integration/20053-293485/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-601806" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-601806 -n old-k8s-version-601806
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-601806 -n old-k8s-version-601806: exit status 6 (237.65155ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 21:36:32.193170  357560 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-601806" does not appear in /home/jenkins/minikube-integration/20053-293485/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-601806" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.53s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (97.78s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-601806 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1205 21:36:32.331363  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:34.894133  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:40.016347  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-601806 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m37.486292625s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-601806 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-601806 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-601806 describe deploy/metrics-server -n kube-system: exit status 1 (48.585727ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-601806" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-601806 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-601806 -n old-k8s-version-601806
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-601806 -n old-k8s-version-601806: exit status 6 (247.902545ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 21:38:09.975185  358221 status.go:458] kubeconfig endpoint: get endpoint: "old-k8s-version-601806" does not appear in /home/jenkins/minikube-integration/20053-293485/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-601806" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (97.78s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-500648 -n no-preload-500648
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-500648 -n no-preload-500648: exit status 3 (3.199664149s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 21:36:43.626383  357654 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.141:22: connect: no route to host
	E1205 21:36:43.626412  357654 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.141:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-500648 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-500648 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.152217473s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.141:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-500648 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-500648 -n no-preload-500648
E1205 21:36:50.257819  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-500648 -n no-preload-500648: exit status 3 (3.063118963s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 21:36:52.842365  357766 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.141:22: connect: no route to host
	E1205 21:36:52.842390  357766 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.50.141:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-500648" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.42s)
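The failure mode here is that every SSH-based check fails with "dial tcp 192.168.50.141:22: connect: no route to host" after the stop, so minikube reports the host state as "Error" rather than the "Stopped" the test expects. A minimal diagnostic sketch for this situation, assuming the kvm2 driver and the profile name taken from this log (illustrative only; not part of the recorded run):
	virsh list --all                                        # check the libvirt domain state backing no-preload-500648
	out/minikube-linux-amd64 status -p no-preload-500648    # re-query minikube's view of the host
	out/minikube-linux-amd64 logs --file=logs.txt -p no-preload-500648   # collect logs, as the error box above suggests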

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-751353 -n default-k8s-diff-port-751353
E1205 21:36:49.076416  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-751353 -n default-k8s-diff-port-751353: exit status 3 (3.199943531s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 21:36:51.050273  357734 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	E1205 21:36:51.050293  357734 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-751353 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E1205 21:36:51.551934  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-751353 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.153786289s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-751353 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-751353 -n default-k8s-diff-port-751353
E1205 21:36:59.028541  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-751353 -n default-k8s-diff-port-751353: exit status 3 (3.062078677s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E1205 21:37:00.266455  357882 status.go:417] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host
	E1205 21:37:00.266482  357882 status.go:119] status error: NewSession: new client: new client: dial tcp 192.168.39.106:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-751353" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (704.37s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-601806 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
E1205 21:38:16.320096  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:38:31.471995  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:38:41.836500  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:38:46.805343  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:38:54.437255  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:38:54.884263  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:39:13.623687  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:39:14.507007  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:39:15.167278  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:39:42.870988  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:40:03.758348  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:40:16.806606  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:40:47.609124  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:41:10.573530  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:41:15.314222  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:41:29.760983  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:41:38.279192  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:41:49.076835  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:41:57.466026  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:42:19.896620  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:42:32.945101  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:42:47.600161  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:43:00.648934  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:43:16.319247  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:43:46.805370  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:44:15.167891  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:44:39.396593  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:45:47.609242  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-601806 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (11m40.470372996s)

                                                
                                                
-- stdout --
	* [old-k8s-version-601806] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20053
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-601806" primary control-plane node in "old-k8s-version-601806" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-601806" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 21:38:15.563725  358357 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:38:15.563882  358357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:38:15.563898  358357 out.go:358] Setting ErrFile to fd 2...
	I1205 21:38:15.563903  358357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:38:15.564128  358357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 21:38:15.564728  358357 out.go:352] Setting JSON to false
	I1205 21:38:15.565806  358357 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":15644,"bootTime":1733419052,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 21:38:15.565873  358357 start.go:139] virtualization: kvm guest
	I1205 21:38:15.568026  358357 out.go:177] * [old-k8s-version-601806] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 21:38:15.569552  358357 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 21:38:15.569581  358357 notify.go:220] Checking for updates...
	I1205 21:38:15.572033  358357 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 21:38:15.573317  358357 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:38:15.574664  358357 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:38:15.576173  358357 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 21:38:15.577543  358357 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 21:38:15.579554  358357 config.go:182] Loaded profile config "old-k8s-version-601806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 21:38:15.580169  358357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:38:15.580230  358357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:38:15.596741  358357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I1205 21:38:15.597295  358357 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:38:15.598015  358357 main.go:141] libmachine: Using API Version  1
	I1205 21:38:15.598046  358357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:38:15.598475  358357 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:38:15.598711  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:38:15.600576  358357 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1205 21:38:15.602043  358357 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 21:38:15.602381  358357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:38:15.602484  358357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:38:15.618162  358357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36049
	I1205 21:38:15.618929  358357 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:38:15.620894  358357 main.go:141] libmachine: Using API Version  1
	I1205 21:38:15.620922  358357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:38:15.621462  358357 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:38:15.621705  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:38:15.660038  358357 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 21:38:15.661273  358357 start.go:297] selected driver: kvm2
	I1205 21:38:15.661287  358357 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:38:15.661413  358357 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 21:38:15.662304  358357 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:38:15.662396  358357 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 21:38:15.678948  358357 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 21:38:15.679372  358357 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:38:15.679406  358357 cni.go:84] Creating CNI manager for ""
	I1205 21:38:15.679443  358357 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:38:15.679479  358357 start.go:340] cluster config:
	{Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:38:15.679592  358357 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:38:15.681409  358357 out.go:177] * Starting "old-k8s-version-601806" primary control-plane node in "old-k8s-version-601806" cluster
	I1205 21:38:15.682585  358357 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 21:38:15.682646  358357 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1205 21:38:15.682657  358357 cache.go:56] Caching tarball of preloaded images
	I1205 21:38:15.682742  358357 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 21:38:15.682752  358357 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1205 21:38:15.682873  358357 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/config.json ...
	I1205 21:38:15.683066  358357 start.go:360] acquireMachinesLock for old-k8s-version-601806: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:41:25.151918  358357 start.go:364] duration metric: took 3m9.46879842s to acquireMachinesLock for "old-k8s-version-601806"
	I1205 21:41:25.151996  358357 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:41:25.152009  358357 fix.go:54] fixHost starting: 
	I1205 21:41:25.152489  358357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:25.152557  358357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:25.172080  358357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36071
	I1205 21:41:25.172722  358357 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:25.173396  358357 main.go:141] libmachine: Using API Version  1
	I1205 21:41:25.173426  358357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:25.173791  358357 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:25.174049  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:25.174226  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetState
	I1205 21:41:25.176109  358357 fix.go:112] recreateIfNeeded on old-k8s-version-601806: state=Stopped err=<nil>
	I1205 21:41:25.176156  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	W1205 21:41:25.176374  358357 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:41:25.178317  358357 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-601806" ...
	I1205 21:41:25.179884  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .Start
	I1205 21:41:25.180144  358357 main.go:141] libmachine: (old-k8s-version-601806) Ensuring networks are active...
	I1205 21:41:25.181095  358357 main.go:141] libmachine: (old-k8s-version-601806) Ensuring network default is active
	I1205 21:41:25.181522  358357 main.go:141] libmachine: (old-k8s-version-601806) Ensuring network mk-old-k8s-version-601806 is active
	I1205 21:41:25.181972  358357 main.go:141] libmachine: (old-k8s-version-601806) Getting domain xml...
	I1205 21:41:25.182848  358357 main.go:141] libmachine: (old-k8s-version-601806) Creating domain...
	I1205 21:41:26.542343  358357 main.go:141] libmachine: (old-k8s-version-601806) Waiting to get IP...
	I1205 21:41:26.543246  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:26.543692  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:26.543765  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:26.543663  359172 retry.go:31] will retry after 193.087452ms: waiting for machine to come up
	I1205 21:41:26.738243  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:26.738682  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:26.738713  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:26.738634  359172 retry.go:31] will retry after 347.304831ms: waiting for machine to come up
	I1205 21:41:27.088372  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:27.088982  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:27.089018  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:27.088880  359172 retry.go:31] will retry after 416.785806ms: waiting for machine to come up
	I1205 21:41:27.507765  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:27.508291  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:27.508320  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:27.508250  359172 retry.go:31] will retry after 407.585006ms: waiting for machine to come up
	I1205 21:41:27.918225  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:27.918900  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:27.918930  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:27.918844  359172 retry.go:31] will retry after 612.014901ms: waiting for machine to come up
	I1205 21:41:28.532179  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:28.532625  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:28.532658  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:28.532561  359172 retry.go:31] will retry after 784.813224ms: waiting for machine to come up
	I1205 21:41:29.318697  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:29.319199  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:29.319234  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:29.319136  359172 retry.go:31] will retry after 827.384433ms: waiting for machine to come up
	I1205 21:41:30.148284  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:30.148684  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:30.148711  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:30.148642  359172 retry.go:31] will retry after 1.314535235s: waiting for machine to come up
	I1205 21:41:31.465575  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:31.466129  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:31.466149  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:31.466051  359172 retry.go:31] will retry after 1.375463745s: waiting for machine to come up
	I1205 21:41:32.843149  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:32.843640  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:32.843672  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:32.843577  359172 retry.go:31] will retry after 1.414652744s: waiting for machine to come up
	I1205 21:41:34.259549  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:34.260076  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:34.260106  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:34.260026  359172 retry.go:31] will retry after 2.845213342s: waiting for machine to come up
	I1205 21:41:37.107579  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:37.108121  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:37.108153  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:37.108064  359172 retry.go:31] will retry after 2.969209087s: waiting for machine to come up
	I1205 21:41:40.079008  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:40.079546  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:40.079631  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:40.079495  359172 retry.go:31] will retry after 4.062877726s: waiting for machine to come up
	I1205 21:41:44.147162  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.147843  358357 main.go:141] libmachine: (old-k8s-version-601806) Found IP for machine: 192.168.61.123
	I1205 21:41:44.147874  358357 main.go:141] libmachine: (old-k8s-version-601806) Reserving static IP address...
	I1205 21:41:44.147892  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has current primary IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.148399  358357 main.go:141] libmachine: (old-k8s-version-601806) Reserved static IP address: 192.168.61.123
	I1205 21:41:44.148443  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "old-k8s-version-601806", mac: "52:54:00:11:1e:c8", ip: "192.168.61.123"} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.148458  358357 main.go:141] libmachine: (old-k8s-version-601806) Waiting for SSH to be available...
	I1205 21:41:44.148487  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | skip adding static IP to network mk-old-k8s-version-601806 - found existing host DHCP lease matching {name: "old-k8s-version-601806", mac: "52:54:00:11:1e:c8", ip: "192.168.61.123"}
	I1205 21:41:44.148519  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | Getting to WaitForSSH function...
	I1205 21:41:44.151017  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.151372  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.151406  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.151544  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | Using SSH client type: external
	I1205 21:41:44.151575  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa (-rw-------)
	I1205 21:41:44.151611  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:41:44.151629  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | About to run SSH command:
	I1205 21:41:44.151656  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | exit 0
	I1205 21:41:44.282019  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | SSH cmd err, output: <nil>: 
	I1205 21:41:44.282419  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetConfigRaw
	I1205 21:41:44.283146  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:44.285924  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.286335  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.286365  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.286633  358357 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/config.json ...
	I1205 21:41:44.286844  358357 machine.go:93] provisionDockerMachine start ...
	I1205 21:41:44.286865  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:44.287119  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.289692  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.290060  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.290090  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.290192  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.290392  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.290567  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.290726  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.290904  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:44.291168  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:44.291183  358357 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:41:44.410444  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:41:44.410483  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:41:44.410769  358357 buildroot.go:166] provisioning hostname "old-k8s-version-601806"
	I1205 21:41:44.410800  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:41:44.410975  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.414019  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.414402  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.414437  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.414618  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.414822  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.415001  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.415139  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.415384  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:44.415620  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:44.415639  358357 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-601806 && echo "old-k8s-version-601806" | sudo tee /etc/hostname
	I1205 21:41:44.544783  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-601806
	
	I1205 21:41:44.544820  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.547980  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.548253  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.548284  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.548548  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.548806  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.549015  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.549199  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.549363  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:44.549596  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:44.549625  358357 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-601806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-601806/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-601806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:41:44.675051  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:41:44.675089  358357 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:41:44.675133  358357 buildroot.go:174] setting up certificates
	I1205 21:41:44.675147  358357 provision.go:84] configureAuth start
	I1205 21:41:44.675161  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:41:44.675484  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:44.678325  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.678651  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.678670  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.678845  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.681024  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.681380  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.681419  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.681555  358357 provision.go:143] copyHostCerts
	I1205 21:41:44.681614  358357 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:41:44.681635  358357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:41:44.681692  358357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:41:44.681807  358357 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:41:44.681818  358357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:41:44.681840  358357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:41:44.681895  358357 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:41:44.681923  358357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:41:44.681950  358357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:41:44.682008  358357 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-601806 san=[127.0.0.1 192.168.61.123 localhost minikube old-k8s-version-601806]
	I1205 21:41:44.920345  358357 provision.go:177] copyRemoteCerts
	I1205 21:41:44.920412  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:41:44.920445  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.923237  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.923573  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.923617  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.923858  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.924082  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.924266  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.924408  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.013123  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:41:45.037220  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 21:41:45.061460  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:41:45.086412  358357 provision.go:87] duration metric: took 411.247612ms to configureAuth
	I1205 21:41:45.086449  358357 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:41:45.086670  358357 config.go:182] Loaded profile config "old-k8s-version-601806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 21:41:45.086772  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.089593  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.090011  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.090044  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.090279  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.090515  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.090695  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.090845  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.091119  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:45.091338  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:45.091355  358357 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:41:45.320779  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:41:45.320809  358357 machine.go:96] duration metric: took 1.033951427s to provisionDockerMachine
	I1205 21:41:45.320822  358357 start.go:293] postStartSetup for "old-k8s-version-601806" (driver="kvm2")
	I1205 21:41:45.320833  358357 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:41:45.320864  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.321259  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:41:45.321295  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.324521  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.324898  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.324926  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.325061  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.325278  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.325449  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.325608  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.413576  358357 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:41:45.418099  358357 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:41:45.418129  358357 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:41:45.418192  358357 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:41:45.418313  358357 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:41:45.418436  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:41:45.428537  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:45.453505  358357 start.go:296] duration metric: took 132.665138ms for postStartSetup
	I1205 21:41:45.453578  358357 fix.go:56] duration metric: took 20.301569608s for fixHost
	I1205 21:41:45.453610  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.456671  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.457095  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.457119  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.457317  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.457534  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.457723  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.457851  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.458100  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:45.458291  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:45.458303  358357 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:41:45.574677  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434905.556875765
	
	I1205 21:41:45.574707  358357 fix.go:216] guest clock: 1733434905.556875765
	I1205 21:41:45.574720  358357 fix.go:229] Guest: 2024-12-05 21:41:45.556875765 +0000 UTC Remote: 2024-12-05 21:41:45.453584649 +0000 UTC m=+209.931227837 (delta=103.291116ms)
	I1205 21:41:45.574744  358357 fix.go:200] guest clock delta is within tolerance: 103.291116ms
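The clock check above runs `date +%s.%N` on the guest, captures the host time around the SSH call, and only resyncs when the delta falls outside tolerance. Below is a minimal Go sketch of that comparison; the one-second tolerance and the helper name are illustrative assumptions, not minikube's actual values or code.

    // clockdelta.go - illustrative: compare a guest "date +%s.%N" reading against the
    // local clock and decide whether the skew is acceptable.
    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock turns output like "1733434905.556875765" into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            // date +%N prints nanoseconds as nine digits
            if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1733434905.556875765") // sample reading from the log
        if err != nil {
            fmt.Println(err)
            return
        }
        host := time.Now() // in the real flow this is captured right after the SSH call returns
        delta := host.Sub(guest)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = 1 * time.Second // assumed threshold, for illustration only
        fmt.Printf("guest clock delta: %v, within tolerance: %v\n", delta, delta <= tolerance)
    }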
	I1205 21:41:45.574749  358357 start.go:83] releasing machines lock for "old-k8s-version-601806", held for 20.422787607s
	I1205 21:41:45.574777  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.575102  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:45.578097  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.578534  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.578565  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.578786  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.579457  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.579662  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.579786  358357 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:41:45.579845  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.579919  358357 ssh_runner.go:195] Run: cat /version.json
	I1205 21:41:45.579944  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.582811  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.582951  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.583117  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.583153  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.583388  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.583409  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.583436  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.583601  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.583609  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.583801  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.583868  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.583990  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.584026  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.584185  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.667101  358357 ssh_runner.go:195] Run: systemctl --version
	I1205 21:41:45.694059  358357 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:41:45.843409  358357 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:41:45.849628  358357 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:41:45.849714  358357 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:41:45.867490  358357 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:41:45.867526  358357 start.go:495] detecting cgroup driver to use...
	I1205 21:41:45.867613  358357 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:41:45.887817  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:41:45.902760  358357 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:41:45.902837  358357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:41:45.921492  358357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:41:45.938236  358357 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:41:46.094034  358357 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:41:46.313078  358357 docker.go:233] disabling docker service ...
	I1205 21:41:46.313159  358357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:41:46.330094  358357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:41:46.348887  358357 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:41:46.539033  358357 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:41:46.664752  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:41:46.681892  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:41:46.703802  358357 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 21:41:46.703907  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.716808  358357 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:41:46.716869  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.728088  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.739606  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.750998  358357 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:41:46.763097  358357 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:41:46.773657  358357 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:41:46.773720  358357 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:41:46.787789  358357 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:41:46.799018  358357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:46.920247  358357 ssh_runner.go:195] Run: sudo systemctl restart crio
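The block above configures CRI-O over SSH: it writes /etc/crictl.yaml, points pause_image at registry.k8s.io/pause:3.2, switches cgroup_manager to cgroupfs in /etc/crio/crio.conf.d/02-crio.conf, loads br_netfilter, enables IP forwarding, and restarts the service. A rough Go sketch of the same edits, run as local commands rather than through minikube's ssh_runner; paths and values are taken from the log, everything else is illustrative.

    // crio_config.go - illustrative: apply the CRI-O edits seen in the log as local shell commands.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) error {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            return fmt.Errorf("%s %v: %v\n%s", name, args, err, out)
        }
        return nil
    }

    func main() {
        steps := [][]string{
            // point CRI-O at the pause image used by this Kubernetes version
            {"sudo", "sed", "-i", `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|`, "/etc/crio/crio.conf.d/02-crio.conf"},
            // use cgroupfs as the cgroup manager, matching the kubelet configuration
            {"sudo", "sed", "-i", `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, "/etc/crio/crio.conf.d/02-crio.conf"},
            // make sure bridged traffic hits iptables and forwarding is enabled
            {"sudo", "modprobe", "br_netfilter"},
            {"sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"},
            // pick up the new configuration
            {"sudo", "systemctl", "daemon-reload"},
            {"sudo", "systemctl", "restart", "crio"},
        }
        for _, s := range steps {
            if err := run(s[0], s[1:]...); err != nil {
                fmt.Println(err)
                return
            }
        }
    }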
	I1205 21:41:47.024151  358357 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:41:47.024236  358357 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:41:47.029240  358357 start.go:563] Will wait 60s for crictl version
	I1205 21:41:47.029326  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:47.033665  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:41:47.072480  358357 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:41:47.072588  358357 ssh_runner.go:195] Run: crio --version
	I1205 21:41:47.110829  358357 ssh_runner.go:195] Run: crio --version
	I1205 21:41:47.141698  358357 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1205 21:41:47.143015  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:47.146059  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:47.146503  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:47.146536  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:47.146811  358357 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1205 21:41:47.151654  358357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:47.164839  358357 kubeadm.go:883] updating cluster {Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:41:47.165019  358357 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 21:41:47.165090  358357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:47.213546  358357 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 21:41:47.213640  358357 ssh_runner.go:195] Run: which lz4
	I1205 21:41:47.219695  358357 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:41:47.224752  358357 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:41:47.224801  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1205 21:41:48.787144  358357 crio.go:462] duration metric: took 1.567500675s to copy over tarball
	I1205 21:41:48.787253  358357 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 21:41:51.832182  358357 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.044870872s)
	I1205 21:41:51.832229  358357 crio.go:469] duration metric: took 3.045045829s to extract the tarball
	I1205 21:41:51.832241  358357 ssh_runner.go:146] rm: /preloaded.tar.lz4
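Because the preloaded images were not found in the runtime, the cached tarball is copied to the guest, unpacked into /var with tar's lz4 filter, and then removed. A minimal sketch of that extract-and-clean-up step, assuming lz4 is installed and using the tarball path from the log:

    // preload_extract.go - illustrative: unpack a preloaded image tarball the way the log shows,
    // then delete it to free space on the guest.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        const tarball = "/preloaded.tar.lz4" // path used in the log

        // extract with xattrs preserved so image layers keep their file capabilities
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", tarball)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Println("extract failed:", err)
            return
        }

        // the tarball is only needed once; remove it afterwards
        if err := exec.Command("sudo", "rm", "-f", tarball).Run(); err != nil {
            fmt.Println("cleanup failed:", err)
        }
    }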
	I1205 21:41:51.876863  358357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:51.916280  358357 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 21:41:51.916312  358357 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 21:41:51.916448  358357 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:51.916490  358357 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1205 21:41:51.916520  358357 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:51.916416  358357 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:51.916539  358357 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1205 21:41:51.916422  358357 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:51.916534  358357 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:51.916415  358357 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:51.918641  358357 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:51.918657  358357 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:51.918673  358357 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:51.918675  358357 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:51.918648  358357 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:51.918699  358357 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 21:41:51.918648  358357 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1205 21:41:51.918649  358357 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.084598  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.085487  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.085575  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.089387  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.097316  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.097466  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.143119  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1205 21:41:52.188847  358357 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1205 21:41:52.188903  358357 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.188964  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.249950  358357 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1205 21:41:52.249988  358357 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1205 21:41:52.250006  358357 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.250026  358357 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.250065  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.250070  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.250110  358357 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1205 21:41:52.250142  358357 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.250181  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.264329  358357 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1205 21:41:52.264458  358357 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.264384  358357 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1205 21:41:52.264539  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.264575  358357 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.264634  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.276286  358357 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1205 21:41:52.276339  358357 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 21:41:52.276369  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.276378  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.276383  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.276499  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.276544  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.277043  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.277127  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.383827  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.385512  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.385513  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:41:52.404747  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.413164  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.413203  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.413257  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.502227  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.551456  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:41:52.551634  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.551659  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.596670  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.596746  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.596677  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.649281  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1205 21:41:52.726027  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:41:52.726093  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1205 21:41:52.726149  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1205 21:41:52.726173  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1205 21:41:52.726266  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1205 21:41:52.726300  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1205 21:41:52.759125  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 21:41:52.856925  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:53.004246  358357 cache_images.go:92] duration metric: took 1.087915709s to LoadCachedImages
	W1205 21:41:53.004349  358357 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
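The cache fallback fails here because the per-image files under .minikube/cache/images were never created on this host: each expected path is stat'ed before any transfer is attempted. A small sketch of that existence check; the cache root under $HOME and the name mapping are assumptions made for illustration.

    // cache_check.go - illustrative: report which cached image files exist before trying to load them.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        cacheRoot := os.ExpandEnv("$HOME/.minikube/cache/images/amd64")
        images := []string{
            "registry.k8s.io/coredns:1.7.0",
            "registry.k8s.io/etcd:3.4.13-0",
            "registry.k8s.io/kube-apiserver:v1.20.0",
            "registry.k8s.io/pause:3.2",
        }
        for _, img := range images {
            // cached files replace the tag colon with "_", e.g. coredns_1.7.0
            name := strings.ReplaceAll(img, ":", "_")
            path := filepath.Join(cacheRoot, name)
            if _, err := os.Stat(path); err != nil {
                fmt.Printf("missing: %s (%v)\n", path, err)
                continue
            }
            fmt.Printf("found:   %s\n", path)
        }
    }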
	I1205 21:41:53.004364  358357 kubeadm.go:934] updating node { 192.168.61.123 8443 v1.20.0 crio true true} ...
	I1205 21:41:53.004516  358357 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-601806 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:41:53.004596  358357 ssh_runner.go:195] Run: crio config
	I1205 21:41:53.053135  358357 cni.go:84] Creating CNI manager for ""
	I1205 21:41:53.053159  358357 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:53.053174  358357 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:41:53.053208  358357 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-601806 NodeName:old-k8s-version-601806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 21:41:53.053385  358357 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-601806"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:41:53.053465  358357 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1205 21:41:53.064225  358357 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:41:53.064320  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:41:53.074565  358357 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 21:41:53.091812  358357 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:41:53.111455  358357 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1205 21:41:53.131057  358357 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I1205 21:41:53.135026  358357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
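The one-liner above pins control-plane.minikube.internal in /etc/hosts by dropping any stale entry, appending the current mapping, and copying the temp file back with sudo. A compact Go sketch of the same idea; the temp path and 0644 mode are illustrative choices, and the IP comes from the log.

    // hosts_pin.go - illustrative: rewrite the control-plane.minikube.internal entry in /etc/hosts
    // the way the logged shell one-liner does.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        const entry = "192.168.61.123\tcontrol-plane.minikube.internal" // IP from the log
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            fmt.Println(err)
            return
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                continue // drop the stale mapping
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        tmp := "/tmp/hosts.minikube"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            fmt.Println(err)
            return
        }
        // copying with sudo mirrors the logged command; a plain rename could not write into /etc
        if err := exec.Command("sudo", "cp", tmp, "/etc/hosts").Run(); err != nil {
            fmt.Println(err)
        }
    }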
	I1205 21:41:53.148476  358357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:53.289114  358357 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:53.309855  358357 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806 for IP: 192.168.61.123
	I1205 21:41:53.309886  358357 certs.go:194] generating shared ca certs ...
	I1205 21:41:53.309923  358357 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:53.310122  358357 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:41:53.310176  358357 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:41:53.310202  358357 certs.go:256] generating profile certs ...
	I1205 21:41:53.310390  358357 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/client.key
	I1205 21:41:53.310485  358357 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.key.a6e43dea
	I1205 21:41:53.310568  358357 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.key
	I1205 21:41:53.310814  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:41:53.310866  358357 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:41:53.310880  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:41:53.310912  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:41:53.310960  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:41:53.311000  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:41:53.311072  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:53.312161  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:41:53.353059  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:41:53.386512  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:41:53.423583  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:41:53.463250  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 21:41:53.494884  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 21:41:53.529876  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:41:53.579695  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 21:41:53.606144  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:41:53.631256  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:41:53.656184  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:41:53.680842  358357 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:41:53.700705  358357 ssh_runner.go:195] Run: openssl version
	I1205 21:41:53.707800  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:41:53.719776  358357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:41:53.724558  358357 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:41:53.724630  358357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:41:53.731088  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:41:53.742620  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:41:53.754961  358357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:53.759594  358357 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:53.759669  358357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:53.765536  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:41:53.776756  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:41:53.789117  358357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:41:53.793629  358357 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:41:53.793707  358357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:41:53.799394  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:41:53.810660  358357 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:41:53.815344  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:41:53.821418  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:41:53.827800  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:41:53.834376  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:41:53.840645  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:41:53.847470  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
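Each `openssl x509 -noout -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now. The same check expressed in Go with crypto/x509; the file path in main is just one of the certs from the log.

    // cert_checkend.go - illustrative: report whether a PEM certificate expires within the next 24h,
    // mirroring `openssl x509 -checkend 86400`.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // true means the cert's NotAfter falls inside the next d
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            fmt.Println("check failed:", err)
            return
        }
        fmt.Println("expires within 24h:", soon)
    }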
	I1205 21:41:53.854401  358357 kubeadm.go:392] StartCluster: {Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:41:53.854504  358357 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:41:53.854569  358357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:53.893993  358357 cri.go:89] found id: ""
	I1205 21:41:53.894081  358357 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:41:53.904808  358357 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:41:53.904829  358357 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:41:53.904876  358357 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:41:53.915573  358357 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:41:53.916624  358357 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-601806" does not appear in /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:41:53.917310  358357 kubeconfig.go:62] /home/jenkins/minikube-integration/20053-293485/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-601806" cluster setting kubeconfig missing "old-k8s-version-601806" context setting]
	I1205 21:41:53.918211  358357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:53.978448  358357 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:41:53.989629  358357 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.123
	I1205 21:41:53.989674  358357 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:41:53.989707  358357 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:41:53.989791  358357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:54.027722  358357 cri.go:89] found id: ""
	I1205 21:41:54.027816  358357 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:41:54.045095  358357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:41:54.058119  358357 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:41:54.058145  358357 kubeadm.go:157] found existing configuration files:
	
	I1205 21:41:54.058211  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:41:54.070466  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:41:54.070563  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:41:54.081555  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:41:54.093332  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:41:54.093415  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:41:54.103877  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:41:54.114047  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:41:54.114117  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:41:54.126566  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:41:54.138673  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:41:54.138767  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:41:54.149449  358357 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:41:54.162818  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:54.294483  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:54.983905  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:55.218496  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:55.340478  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
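For the restart path the control plane is rebuilt phase by phase rather than with a full `kubeadm init`: certs, kubeconfig, kubelet-start, control-plane, and etcd are run in that order against the generated /var/tmp/minikube/kubeadm.yaml. A compact sketch of driving those phases; the binary and config paths are taken from the log, and error handling is deliberately simplified.

    // kubeadm_phases.go - illustrative: run the kubeadm init phases in the order the log shows.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.20.0/kubeadm" // path from the log
        config := "/var/tmp/minikube/kubeadm.yaml"
        phases := [][]string{
            {"certs", "all"},
            {"kubeconfig", "all"},
            {"kubelet-start"},
            {"control-plane", "all"},
            {"etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{"init", "phase"}, p...)
            args = append(args, "--config", config)
            cmd := exec.Command("sudo", append([]string{kubeadm}, args...)...)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Run(); err != nil {
                fmt.Printf("phase %v failed: %v\n", p, err)
                return
            }
        }
    }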
	I1205 21:41:55.440382  358357 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:41:55.440495  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same check ("sudo pgrep -xnf kube-apiserver.*minikube.*") was re-run every ~0.5s from 21:41:55.941513 through 21:42:54.941564 without finding an apiserver process; 119 repeated poll lines condensed ...]
	I1205 21:42:55.441202  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:42:55.441294  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:42:55.475973  358357 cri.go:89] found id: ""
	I1205 21:42:55.476011  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.476023  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:42:55.476032  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:42:55.476106  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:42:55.511119  358357 cri.go:89] found id: ""
	I1205 21:42:55.511149  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.511158  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:42:55.511164  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:42:55.511238  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:42:55.544659  358357 cri.go:89] found id: ""
	I1205 21:42:55.544700  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.544716  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:42:55.544726  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:42:55.544803  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:42:55.579789  358357 cri.go:89] found id: ""
	I1205 21:42:55.579826  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.579836  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:42:55.579843  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:42:55.579912  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:42:55.615309  358357 cri.go:89] found id: ""
	I1205 21:42:55.615348  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.615363  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:42:55.615371  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:42:55.615444  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:42:55.649520  358357 cri.go:89] found id: ""
	I1205 21:42:55.649551  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.649562  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:42:55.649569  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:42:55.649647  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:42:55.688086  358357 cri.go:89] found id: ""
	I1205 21:42:55.688120  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.688132  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:42:55.688139  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:42:55.688207  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:42:55.722901  358357 cri.go:89] found id: ""
	I1205 21:42:55.722932  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.722943  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:42:55.722955  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:42:55.722968  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:42:55.775746  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:42:55.775792  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:42:55.790317  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:42:55.790370  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:42:55.916541  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:42:55.916593  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:42:55.916608  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:42:55.991284  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:42:55.991350  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:42:58.534040  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:58.551747  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:42:58.551856  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:42:58.602423  358357 cri.go:89] found id: ""
	I1205 21:42:58.602465  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.602478  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:42:58.602493  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:42:58.602570  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:42:58.658410  358357 cri.go:89] found id: ""
	I1205 21:42:58.658442  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.658454  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:42:58.658462  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:42:58.658544  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:42:58.696967  358357 cri.go:89] found id: ""
	I1205 21:42:58.697005  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.697024  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:42:58.697032  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:42:58.697092  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:42:58.740924  358357 cri.go:89] found id: ""
	I1205 21:42:58.740958  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.740969  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:42:58.740977  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:42:58.741049  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:42:58.775613  358357 cri.go:89] found id: ""
	I1205 21:42:58.775656  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.775669  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:42:58.775677  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:42:58.775753  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:42:58.810565  358357 cri.go:89] found id: ""
	I1205 21:42:58.810606  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.810621  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:42:58.810630  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:42:58.810704  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:42:58.844616  358357 cri.go:89] found id: ""
	I1205 21:42:58.844649  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.844658  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:42:58.844664  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:42:58.844720  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:42:58.889234  358357 cri.go:89] found id: ""
	I1205 21:42:58.889270  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.889282  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:42:58.889297  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:42:58.889313  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:42:58.964712  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:42:58.964756  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:42:59.005004  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:42:59.005036  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:42:59.057585  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:42:59.057635  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:42:59.072115  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:42:59.072151  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:42:59.145425  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:01.646046  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:01.659425  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:01.659517  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:01.695527  358357 cri.go:89] found id: ""
	I1205 21:43:01.695559  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.695568  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:01.695574  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:01.695636  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:01.731808  358357 cri.go:89] found id: ""
	I1205 21:43:01.731842  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.731854  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:01.731861  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:01.731937  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:01.765738  358357 cri.go:89] found id: ""
	I1205 21:43:01.765771  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.765789  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:01.765796  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:01.765859  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:01.801611  358357 cri.go:89] found id: ""
	I1205 21:43:01.801647  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.801657  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:01.801665  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:01.801732  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:01.839276  358357 cri.go:89] found id: ""
	I1205 21:43:01.839308  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.839317  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:01.839323  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:01.839385  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:01.875227  358357 cri.go:89] found id: ""
	I1205 21:43:01.875266  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.875279  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:01.875288  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:01.875350  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:01.913182  358357 cri.go:89] found id: ""
	I1205 21:43:01.913225  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.913238  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:01.913247  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:01.913312  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:01.952638  358357 cri.go:89] found id: ""
	I1205 21:43:01.952677  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.952701  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:01.952716  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:01.952734  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:01.998360  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:01.998401  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:02.049534  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:02.049588  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:02.064358  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:02.064389  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:02.136029  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:02.136060  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:02.136077  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:04.719271  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:04.735387  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:04.735490  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:04.769540  358357 cri.go:89] found id: ""
	I1205 21:43:04.769578  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.769590  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:04.769598  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:04.769679  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:04.803402  358357 cri.go:89] found id: ""
	I1205 21:43:04.803444  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.803460  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:04.803470  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:04.803538  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:04.839694  358357 cri.go:89] found id: ""
	I1205 21:43:04.839725  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.839739  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:04.839748  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:04.839820  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:04.874952  358357 cri.go:89] found id: ""
	I1205 21:43:04.874982  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.875001  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:04.875022  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:04.875086  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:04.910338  358357 cri.go:89] found id: ""
	I1205 21:43:04.910378  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.910390  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:04.910399  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:04.910464  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:04.946196  358357 cri.go:89] found id: ""
	I1205 21:43:04.946233  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.946245  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:04.946252  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:04.946319  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:04.982119  358357 cri.go:89] found id: ""
	I1205 21:43:04.982150  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.982164  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:04.982173  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:04.982245  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:05.018296  358357 cri.go:89] found id: ""
	I1205 21:43:05.018334  358357 logs.go:282] 0 containers: []
	W1205 21:43:05.018346  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:05.018359  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:05.018376  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:05.070674  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:05.070729  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:05.085822  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:05.085858  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:05.163359  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:05.163385  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:05.163400  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:05.243524  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:05.243581  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:07.785152  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:07.799248  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:07.799327  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:07.836150  358357 cri.go:89] found id: ""
	I1205 21:43:07.836204  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.836215  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:07.836222  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:07.836287  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:07.873025  358357 cri.go:89] found id: ""
	I1205 21:43:07.873059  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.873068  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:07.873074  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:07.873133  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:07.913228  358357 cri.go:89] found id: ""
	I1205 21:43:07.913257  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.913266  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:07.913272  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:07.913332  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:07.953284  358357 cri.go:89] found id: ""
	I1205 21:43:07.953316  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.953327  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:07.953337  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:07.953405  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:07.990261  358357 cri.go:89] found id: ""
	I1205 21:43:07.990295  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.990308  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:07.990317  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:07.990414  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:08.032002  358357 cri.go:89] found id: ""
	I1205 21:43:08.032029  358357 logs.go:282] 0 containers: []
	W1205 21:43:08.032037  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:08.032043  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:08.032095  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:08.066422  358357 cri.go:89] found id: ""
	I1205 21:43:08.066456  358357 logs.go:282] 0 containers: []
	W1205 21:43:08.066464  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:08.066471  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:08.066526  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:08.103696  358357 cri.go:89] found id: ""
	I1205 21:43:08.103732  358357 logs.go:282] 0 containers: []
	W1205 21:43:08.103745  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:08.103757  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:08.103793  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:08.157218  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:08.157264  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:08.172145  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:08.172191  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:08.247452  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:08.247479  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:08.247493  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:08.326928  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:08.326972  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:10.866350  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:10.880013  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:10.880084  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:10.914657  358357 cri.go:89] found id: ""
	I1205 21:43:10.914698  358357 logs.go:282] 0 containers: []
	W1205 21:43:10.914712  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:10.914721  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:10.914780  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:10.950154  358357 cri.go:89] found id: ""
	I1205 21:43:10.950187  358357 logs.go:282] 0 containers: []
	W1205 21:43:10.950196  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:10.950203  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:10.950267  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:10.985474  358357 cri.go:89] found id: ""
	I1205 21:43:10.985508  358357 logs.go:282] 0 containers: []
	W1205 21:43:10.985520  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:10.985528  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:10.985602  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:11.021324  358357 cri.go:89] found id: ""
	I1205 21:43:11.021352  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.021361  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:11.021367  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:11.021429  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:11.056112  358357 cri.go:89] found id: ""
	I1205 21:43:11.056140  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.056149  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:11.056155  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:11.056210  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:11.090696  358357 cri.go:89] found id: ""
	I1205 21:43:11.090729  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.090739  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:11.090746  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:11.090809  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:11.126706  358357 cri.go:89] found id: ""
	I1205 21:43:11.126741  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.126754  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:11.126762  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:11.126832  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:11.162759  358357 cri.go:89] found id: ""
	I1205 21:43:11.162790  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.162800  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:11.162812  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:11.162827  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:11.215941  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:11.215995  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:11.229338  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:11.229378  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:11.300339  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:11.300373  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:11.300389  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:11.378797  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:11.378852  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:13.919092  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:13.935332  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:13.935418  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:13.970759  358357 cri.go:89] found id: ""
	I1205 21:43:13.970790  358357 logs.go:282] 0 containers: []
	W1205 21:43:13.970802  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:13.970810  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:13.970879  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:14.017105  358357 cri.go:89] found id: ""
	I1205 21:43:14.017140  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.017152  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:14.017159  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:14.017228  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:14.056797  358357 cri.go:89] found id: ""
	I1205 21:43:14.056831  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.056843  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:14.056850  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:14.056922  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:14.090687  358357 cri.go:89] found id: ""
	I1205 21:43:14.090727  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.090740  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:14.090747  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:14.090808  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:14.128280  358357 cri.go:89] found id: ""
	I1205 21:43:14.128320  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.128333  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:14.128341  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:14.128410  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:14.167386  358357 cri.go:89] found id: ""
	I1205 21:43:14.167420  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.167428  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:14.167435  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:14.167498  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:14.203376  358357 cri.go:89] found id: ""
	I1205 21:43:14.203408  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.203419  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:14.203427  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:14.203495  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:14.238271  358357 cri.go:89] found id: ""
	I1205 21:43:14.238308  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.238319  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:14.238333  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:14.238353  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:14.290565  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:14.290609  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:14.305062  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:14.305106  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:14.375343  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:14.375375  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:14.375392  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:14.456771  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:14.456826  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:16.997441  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:17.011258  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:17.011344  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:17.045557  358357 cri.go:89] found id: ""
	I1205 21:43:17.045599  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.045613  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:17.045623  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:17.045689  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:17.080094  358357 cri.go:89] found id: ""
	I1205 21:43:17.080131  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.080144  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:17.080152  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:17.080228  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:17.113336  358357 cri.go:89] found id: ""
	I1205 21:43:17.113375  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.113387  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:17.113396  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:17.113461  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:17.147392  358357 cri.go:89] found id: ""
	I1205 21:43:17.147431  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.147443  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:17.147452  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:17.147521  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:17.182308  358357 cri.go:89] found id: ""
	I1205 21:43:17.182359  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.182370  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:17.182376  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:17.182443  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:17.216848  358357 cri.go:89] found id: ""
	I1205 21:43:17.216886  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.216917  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:17.216926  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:17.216999  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:17.251515  358357 cri.go:89] found id: ""
	I1205 21:43:17.251553  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.251565  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:17.251573  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:17.251645  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:17.284664  358357 cri.go:89] found id: ""
	I1205 21:43:17.284691  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.284700  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:17.284711  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:17.284723  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:17.335642  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:17.335685  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:17.349100  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:17.349133  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:17.427338  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:17.427362  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:17.427378  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:17.507314  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:17.507366  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:20.049650  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:20.063058  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:20.063152  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:20.096637  358357 cri.go:89] found id: ""
	I1205 21:43:20.096674  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.096687  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:20.096696  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:20.096761  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:20.134010  358357 cri.go:89] found id: ""
	I1205 21:43:20.134041  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.134054  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:20.134061  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:20.134128  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:20.173232  358357 cri.go:89] found id: ""
	I1205 21:43:20.173272  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.173292  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:20.173301  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:20.173374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:20.208411  358357 cri.go:89] found id: ""
	I1205 21:43:20.208441  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.208451  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:20.208457  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:20.208515  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:20.244682  358357 cri.go:89] found id: ""
	I1205 21:43:20.244715  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.244729  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:20.244737  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:20.244835  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:20.278659  358357 cri.go:89] found id: ""
	I1205 21:43:20.278692  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.278701  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:20.278708  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:20.278773  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:20.313894  358357 cri.go:89] found id: ""
	I1205 21:43:20.313963  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.313978  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:20.313986  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:20.314049  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:20.351924  358357 cri.go:89] found id: ""
	I1205 21:43:20.351957  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.351966  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:20.351976  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:20.351992  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:20.365712  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:20.365752  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:20.448062  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:20.448096  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:20.448115  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:20.530550  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:20.530593  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:20.573612  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:20.573644  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:23.128630  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:23.141915  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:23.141991  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:23.177986  358357 cri.go:89] found id: ""
	I1205 21:43:23.178024  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.178033  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:23.178040  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:23.178104  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:23.211957  358357 cri.go:89] found id: ""
	I1205 21:43:23.211995  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.212005  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:23.212016  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:23.212075  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:23.247747  358357 cri.go:89] found id: ""
	I1205 21:43:23.247775  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.247783  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:23.247789  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:23.247847  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:23.282556  358357 cri.go:89] found id: ""
	I1205 21:43:23.282602  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.282616  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:23.282624  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:23.282689  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:23.317629  358357 cri.go:89] found id: ""
	I1205 21:43:23.317661  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.317670  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:23.317676  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:23.317749  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:23.352085  358357 cri.go:89] found id: ""
	I1205 21:43:23.352114  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.352123  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:23.352130  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:23.352190  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:23.391452  358357 cri.go:89] found id: ""
	I1205 21:43:23.391483  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.391495  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:23.391503  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:23.391587  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:23.427325  358357 cri.go:89] found id: ""
	I1205 21:43:23.427361  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.427370  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:23.427380  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:23.427395  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:23.502923  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:23.502954  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:23.502970  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:23.588869  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:23.588918  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:23.626986  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:23.627029  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:23.677290  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:23.677343  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:26.191893  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:26.206289  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:26.206376  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:26.244696  358357 cri.go:89] found id: ""
	I1205 21:43:26.244726  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.244739  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:26.244748  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:26.244818  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:26.277481  358357 cri.go:89] found id: ""
	I1205 21:43:26.277509  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.277519  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:26.277526  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:26.277602  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:26.312648  358357 cri.go:89] found id: ""
	I1205 21:43:26.312771  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.312807  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:26.312819  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:26.312897  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:26.348986  358357 cri.go:89] found id: ""
	I1205 21:43:26.349017  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.349026  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:26.349034  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:26.349111  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:26.382552  358357 cri.go:89] found id: ""
	I1205 21:43:26.382582  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.382591  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:26.382597  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:26.382667  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:26.419741  358357 cri.go:89] found id: ""
	I1205 21:43:26.419780  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.419791  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:26.419798  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:26.419860  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:26.458604  358357 cri.go:89] found id: ""
	I1205 21:43:26.458639  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.458649  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:26.458656  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:26.458716  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:26.492547  358357 cri.go:89] found id: ""
	I1205 21:43:26.492575  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.492589  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:26.492600  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:26.492614  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:26.543734  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:26.543784  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:26.557495  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:26.557529  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:26.632104  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:26.632135  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:26.632155  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:26.711876  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:26.711929  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:29.251703  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:29.265023  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:29.265108  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:29.301837  358357 cri.go:89] found id: ""
	I1205 21:43:29.301875  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.301910  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:29.301922  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:29.301994  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:29.335968  358357 cri.go:89] found id: ""
	I1205 21:43:29.336001  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.336015  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:29.336024  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:29.336090  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:29.370471  358357 cri.go:89] found id: ""
	I1205 21:43:29.370500  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.370512  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:29.370521  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:29.370585  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:29.406408  358357 cri.go:89] found id: ""
	I1205 21:43:29.406443  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.406456  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:29.406464  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:29.406537  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:29.442657  358357 cri.go:89] found id: ""
	I1205 21:43:29.442689  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.442700  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:29.442708  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:29.442776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:29.485257  358357 cri.go:89] found id: ""
	I1205 21:43:29.485291  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.485302  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:29.485311  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:29.485374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:29.520186  358357 cri.go:89] found id: ""
	I1205 21:43:29.520218  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.520229  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:29.520238  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:29.520312  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:29.555875  358357 cri.go:89] found id: ""
	I1205 21:43:29.555908  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.555920  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:29.555931  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:29.555949  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:29.569277  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:29.569312  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:29.643777  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:29.643810  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:29.643828  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:29.721856  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:29.721932  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:29.763402  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:29.763437  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:32.316122  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:32.329958  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:32.330122  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:32.362518  358357 cri.go:89] found id: ""
	I1205 21:43:32.362562  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.362575  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:32.362585  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:32.362655  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:32.396558  358357 cri.go:89] found id: ""
	I1205 21:43:32.396650  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.396668  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:32.396683  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:32.396759  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:32.430931  358357 cri.go:89] found id: ""
	I1205 21:43:32.430958  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.430966  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:32.430972  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:32.431025  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:32.468557  358357 cri.go:89] found id: ""
	I1205 21:43:32.468597  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.468607  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:32.468613  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:32.468698  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:32.503548  358357 cri.go:89] found id: ""
	I1205 21:43:32.503586  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.503599  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:32.503608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:32.503680  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:32.538516  358357 cri.go:89] found id: ""
	I1205 21:43:32.538559  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.538573  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:32.538582  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:32.538658  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:32.570768  358357 cri.go:89] found id: ""
	I1205 21:43:32.570804  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.570817  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:32.570886  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:32.570963  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:32.604812  358357 cri.go:89] found id: ""
	I1205 21:43:32.604851  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.604864  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:32.604876  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:32.604899  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:32.667787  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:32.667831  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:32.681437  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:32.681472  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:32.761208  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:32.761235  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:32.761249  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:32.844838  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:32.844882  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:35.386488  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:35.401884  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:35.401987  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:35.437976  358357 cri.go:89] found id: ""
	I1205 21:43:35.438007  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.438017  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:35.438023  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:35.438089  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:35.478157  358357 cri.go:89] found id: ""
	I1205 21:43:35.478202  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.478214  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:35.478222  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:35.478292  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:35.516671  358357 cri.go:89] found id: ""
	I1205 21:43:35.516717  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.516731  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:35.516805  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:35.516897  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:35.551255  358357 cri.go:89] found id: ""
	I1205 21:43:35.551284  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.551295  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:35.551302  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:35.551357  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:35.588294  358357 cri.go:89] found id: ""
	I1205 21:43:35.588325  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.588334  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:35.588341  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:35.588405  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:35.622659  358357 cri.go:89] found id: ""
	I1205 21:43:35.622691  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.622700  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:35.622707  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:35.622774  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:35.656864  358357 cri.go:89] found id: ""
	I1205 21:43:35.656893  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.656901  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:35.656908  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:35.656961  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:35.697507  358357 cri.go:89] found id: ""
	I1205 21:43:35.697554  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.697567  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:35.697579  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:35.697599  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:35.745717  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:35.745758  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:35.759004  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:35.759036  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:35.828958  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:35.828992  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:35.829010  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:35.905023  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:35.905063  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:38.445492  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:38.459922  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:38.460006  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:38.495791  358357 cri.go:89] found id: ""
	I1205 21:43:38.495829  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.495840  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:38.495849  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:38.495918  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:38.530056  358357 cri.go:89] found id: ""
	I1205 21:43:38.530088  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.530097  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:38.530104  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:38.530177  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:38.566865  358357 cri.go:89] found id: ""
	I1205 21:43:38.566896  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.566905  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:38.566912  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:38.566983  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:38.600870  358357 cri.go:89] found id: ""
	I1205 21:43:38.600905  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.600918  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:38.600926  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:38.600995  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:38.639270  358357 cri.go:89] found id: ""
	I1205 21:43:38.639308  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.639317  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:38.639324  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:38.639395  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:38.678671  358357 cri.go:89] found id: ""
	I1205 21:43:38.678720  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.678736  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:38.678745  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:38.678812  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:38.715126  358357 cri.go:89] found id: ""
	I1205 21:43:38.715160  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.715169  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:38.715176  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:38.715236  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:38.750621  358357 cri.go:89] found id: ""
	I1205 21:43:38.750660  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.750674  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:38.750688  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:38.750706  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:38.801336  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:38.801386  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:38.817206  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:38.817243  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:38.899496  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:38.899526  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:38.899542  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:38.987043  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:38.987096  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:41.535073  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:41.550469  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:41.550543  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:41.591727  358357 cri.go:89] found id: ""
	I1205 21:43:41.591768  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.591781  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:41.591790  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:41.591861  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:41.628657  358357 cri.go:89] found id: ""
	I1205 21:43:41.628691  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.628703  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:41.628711  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:41.628782  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:41.674165  358357 cri.go:89] found id: ""
	I1205 21:43:41.674210  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.674224  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:41.674238  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:41.674318  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:41.713785  358357 cri.go:89] found id: ""
	I1205 21:43:41.713836  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.713856  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:41.713866  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:41.713959  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:41.752119  358357 cri.go:89] found id: ""
	I1205 21:43:41.752152  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.752162  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:41.752169  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:41.752224  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:41.787379  358357 cri.go:89] found id: ""
	I1205 21:43:41.787414  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.787427  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:41.787439  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:41.787517  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:41.827473  358357 cri.go:89] found id: ""
	I1205 21:43:41.827505  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.827516  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:41.827523  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:41.827580  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:41.864685  358357 cri.go:89] found id: ""
	I1205 21:43:41.864724  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.864737  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:41.864750  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:41.864767  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:41.919751  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:41.919797  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:41.933494  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:41.933527  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:42.007384  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:42.007478  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:42.007516  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:42.085929  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:42.085974  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:44.625416  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:44.640399  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:44.640466  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:44.676232  358357 cri.go:89] found id: ""
	I1205 21:43:44.676279  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.676292  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:44.676302  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:44.676386  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:44.714304  358357 cri.go:89] found id: ""
	I1205 21:43:44.714345  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.714358  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:44.714368  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:44.714438  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:44.748091  358357 cri.go:89] found id: ""
	I1205 21:43:44.748130  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.748141  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:44.748149  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:44.748225  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:44.789620  358357 cri.go:89] found id: ""
	I1205 21:43:44.789712  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.789737  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:44.789746  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:44.789808  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:44.829941  358357 cri.go:89] found id: ""
	I1205 21:43:44.829987  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.829999  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:44.830008  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:44.830080  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:44.876378  358357 cri.go:89] found id: ""
	I1205 21:43:44.876412  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.876424  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:44.876433  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:44.876503  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:44.913556  358357 cri.go:89] found id: ""
	I1205 21:43:44.913590  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.913602  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:44.913610  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:44.913676  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:44.947592  358357 cri.go:89] found id: ""
	I1205 21:43:44.947625  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.947634  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:44.947643  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:44.947658  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:44.960447  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:44.960478  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:45.035679  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:45.035716  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:45.035731  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:45.115015  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:45.115055  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:45.152866  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:45.152901  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:47.703949  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:47.717705  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:47.717775  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:47.753877  358357 cri.go:89] found id: ""
	I1205 21:43:47.753920  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.753933  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:47.753946  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:47.754006  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:47.790673  358357 cri.go:89] found id: ""
	I1205 21:43:47.790707  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.790718  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:47.790725  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:47.790784  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:47.829957  358357 cri.go:89] found id: ""
	I1205 21:43:47.829999  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.830013  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:47.830021  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:47.830094  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:47.869182  358357 cri.go:89] found id: ""
	I1205 21:43:47.869221  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.869235  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:47.869251  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:47.869337  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:47.906549  358357 cri.go:89] found id: ""
	I1205 21:43:47.906582  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.906592  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:47.906598  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:47.906674  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:47.944594  358357 cri.go:89] found id: ""
	I1205 21:43:47.944622  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.944631  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:47.944637  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:47.944699  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:47.981461  358357 cri.go:89] found id: ""
	I1205 21:43:47.981499  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.981512  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:47.981520  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:47.981593  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:48.016561  358357 cri.go:89] found id: ""
	I1205 21:43:48.016597  358357 logs.go:282] 0 containers: []
	W1205 21:43:48.016607  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:48.016617  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:48.016631  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:48.097690  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:48.097740  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:48.140272  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:48.140318  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:48.194365  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:48.194415  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:48.208715  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:48.208750  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:48.283159  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:50.784026  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:50.812440  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:50.812524  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:50.866971  358357 cri.go:89] found id: ""
	I1205 21:43:50.867009  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.867022  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:50.867030  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:50.867100  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:50.910640  358357 cri.go:89] found id: ""
	I1205 21:43:50.910675  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.910686  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:50.910692  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:50.910767  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:50.944766  358357 cri.go:89] found id: ""
	I1205 21:43:50.944795  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.944803  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:50.944811  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:50.944880  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:50.978126  358357 cri.go:89] found id: ""
	I1205 21:43:50.978167  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.978178  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:50.978185  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:50.978250  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:51.015639  358357 cri.go:89] found id: ""
	I1205 21:43:51.015682  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.015693  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:51.015700  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:51.015776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:51.050114  358357 cri.go:89] found id: ""
	I1205 21:43:51.050156  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.050166  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:51.050180  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:51.050244  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:51.088492  358357 cri.go:89] found id: ""
	I1205 21:43:51.088523  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.088533  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:51.088540  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:51.088599  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:51.125732  358357 cri.go:89] found id: ""
	I1205 21:43:51.125768  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.125778  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:51.125789  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:51.125803  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:51.178278  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:51.178325  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:51.192954  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:51.192990  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:51.263378  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:51.263403  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:51.263416  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:51.341416  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:51.341463  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:53.882599  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:53.895846  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:53.895961  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:53.929422  358357 cri.go:89] found id: ""
	I1205 21:43:53.929465  358357 logs.go:282] 0 containers: []
	W1205 21:43:53.929480  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:53.929490  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:53.929568  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:53.965935  358357 cri.go:89] found id: ""
	I1205 21:43:53.965976  358357 logs.go:282] 0 containers: []
	W1205 21:43:53.965990  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:53.966001  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:53.966075  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:54.011360  358357 cri.go:89] found id: ""
	I1205 21:43:54.011394  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.011406  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:54.011412  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:54.011483  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:54.049333  358357 cri.go:89] found id: ""
	I1205 21:43:54.049368  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.049377  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:54.049385  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:54.049445  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:54.087228  358357 cri.go:89] found id: ""
	I1205 21:43:54.087266  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.087279  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:54.087287  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:54.087348  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:54.122795  358357 cri.go:89] found id: ""
	I1205 21:43:54.122832  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.122845  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:54.122853  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:54.122914  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:54.157622  358357 cri.go:89] found id: ""
	I1205 21:43:54.157657  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.157666  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:54.157672  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:54.157734  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:54.195574  358357 cri.go:89] found id: ""
	I1205 21:43:54.195610  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.195624  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:54.195638  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:54.195659  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:54.235353  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:54.235403  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:54.292275  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:54.292338  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:54.306808  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:54.306842  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:54.380414  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:54.380440  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:54.380455  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:56.956848  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:56.969840  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:56.969954  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:57.004299  358357 cri.go:89] found id: ""
	I1205 21:43:57.004405  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.004426  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:57.004434  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:57.004510  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:57.039150  358357 cri.go:89] found id: ""
	I1205 21:43:57.039176  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.039185  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:57.039192  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:57.039245  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:57.075259  358357 cri.go:89] found id: ""
	I1205 21:43:57.075299  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.075313  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:57.075331  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:57.075407  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:57.111445  358357 cri.go:89] found id: ""
	I1205 21:43:57.111474  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.111492  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:57.111500  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:57.111580  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:57.152495  358357 cri.go:89] found id: ""
	I1205 21:43:57.152527  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.152536  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:57.152548  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:57.152606  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:57.188070  358357 cri.go:89] found id: ""
	I1205 21:43:57.188106  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.188119  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:57.188126  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:57.188198  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:57.222213  358357 cri.go:89] found id: ""
	I1205 21:43:57.222245  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.222260  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:57.222268  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:57.222354  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:57.254072  358357 cri.go:89] found id: ""
	I1205 21:43:57.254101  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.254110  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:57.254120  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:57.254136  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:57.307411  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:57.307456  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:57.323095  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:57.323130  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:57.400894  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:57.400928  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:57.400951  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:57.479628  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:57.479670  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:00.018936  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:00.032067  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:00.032149  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:00.065807  358357 cri.go:89] found id: ""
	I1205 21:44:00.065835  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.065844  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:00.065851  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:00.065931  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:00.100810  358357 cri.go:89] found id: ""
	I1205 21:44:00.100839  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.100847  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:00.100854  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:00.100920  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:00.136341  358357 cri.go:89] found id: ""
	I1205 21:44:00.136375  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.136388  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:00.136396  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:00.136454  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:00.173170  358357 cri.go:89] found id: ""
	I1205 21:44:00.173206  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.173227  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:00.173235  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:00.173332  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:00.208319  358357 cri.go:89] found id: ""
	I1205 21:44:00.208351  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.208363  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:00.208371  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:00.208438  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:00.250416  358357 cri.go:89] found id: ""
	I1205 21:44:00.250449  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.250463  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:00.250474  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:00.250546  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:00.285170  358357 cri.go:89] found id: ""
	I1205 21:44:00.285200  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.285212  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:00.285221  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:00.285290  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:00.320837  358357 cri.go:89] found id: ""
	I1205 21:44:00.320870  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.320879  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:00.320889  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:00.320901  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:00.334341  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:00.334375  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:00.400547  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:00.400575  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:00.400592  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:00.476133  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:00.476181  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:00.514760  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:00.514795  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:03.067793  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:03.081940  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:03.082023  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:03.118846  358357 cri.go:89] found id: ""
	I1205 21:44:03.118886  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.118897  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:03.118905  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:03.118962  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:03.156092  358357 cri.go:89] found id: ""
	I1205 21:44:03.156128  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.156140  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:03.156148  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:03.156219  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:03.189783  358357 cri.go:89] found id: ""
	I1205 21:44:03.189824  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.189837  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:03.189845  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:03.189913  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:03.225034  358357 cri.go:89] found id: ""
	I1205 21:44:03.225069  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.225081  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:03.225095  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:03.225177  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:03.258959  358357 cri.go:89] found id: ""
	I1205 21:44:03.258991  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.259003  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:03.259011  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:03.259075  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:03.292871  358357 cri.go:89] found id: ""
	I1205 21:44:03.292907  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.292920  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:03.292927  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:03.292983  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:03.327659  358357 cri.go:89] found id: ""
	I1205 21:44:03.327707  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.327730  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:03.327738  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:03.327810  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:03.369576  358357 cri.go:89] found id: ""
	I1205 21:44:03.369614  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.369627  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:03.369641  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:03.369656  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:03.424527  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:03.424580  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:03.438199  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:03.438231  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:03.509107  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:03.509139  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:03.509158  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:03.595637  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:03.595717  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:06.135947  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:06.149530  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:06.149602  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:06.185659  358357 cri.go:89] found id: ""
	I1205 21:44:06.185692  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.185702  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:06.185709  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:06.185775  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:06.223238  358357 cri.go:89] found id: ""
	I1205 21:44:06.223281  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.223291  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:06.223298  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:06.223357  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:06.261842  358357 cri.go:89] found id: ""
	I1205 21:44:06.261884  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.261911  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:06.261920  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:06.261996  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:06.304416  358357 cri.go:89] found id: ""
	I1205 21:44:06.304455  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.304466  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:06.304475  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:06.304554  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:06.339676  358357 cri.go:89] found id: ""
	I1205 21:44:06.339711  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.339723  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:06.339732  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:06.339785  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:06.375594  358357 cri.go:89] found id: ""
	I1205 21:44:06.375630  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.375640  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:06.375647  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:06.375722  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:06.410953  358357 cri.go:89] found id: ""
	I1205 21:44:06.410986  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.410996  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:06.411002  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:06.411069  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:06.445559  358357 cri.go:89] found id: ""
	I1205 21:44:06.445590  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.445603  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:06.445617  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:06.445634  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:06.497474  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:06.497534  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:06.512032  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:06.512065  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:06.582809  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:06.582845  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:06.582862  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:06.663652  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:06.663696  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:09.204305  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:09.217648  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:09.217738  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:09.255398  358357 cri.go:89] found id: ""
	I1205 21:44:09.255441  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.255454  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:09.255463  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:09.255533  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:09.290268  358357 cri.go:89] found id: ""
	I1205 21:44:09.290296  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.290310  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:09.290316  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:09.290384  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:09.324546  358357 cri.go:89] found id: ""
	I1205 21:44:09.324586  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.324599  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:09.324608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:09.324684  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:09.358619  358357 cri.go:89] found id: ""
	I1205 21:44:09.358665  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.358677  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:09.358686  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:09.358757  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:09.395697  358357 cri.go:89] found id: ""
	I1205 21:44:09.395736  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.395749  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:09.395758  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:09.395838  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:09.437064  358357 cri.go:89] found id: ""
	I1205 21:44:09.437099  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.437108  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:09.437115  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:09.437172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:09.472330  358357 cri.go:89] found id: ""
	I1205 21:44:09.472368  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.472380  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:09.472388  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:09.472460  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:09.507468  358357 cri.go:89] found id: ""
	I1205 21:44:09.507510  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.507524  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:09.507538  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:09.507555  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:09.583640  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:09.583683  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:09.625830  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:09.625876  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:09.681668  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:09.681720  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:09.695305  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:09.695346  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:09.770136  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:12.270576  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:12.287283  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:12.287367  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:12.320855  358357 cri.go:89] found id: ""
	I1205 21:44:12.320890  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.320902  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:12.320911  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:12.320981  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:12.354550  358357 cri.go:89] found id: ""
	I1205 21:44:12.354595  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.354608  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:12.354617  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:12.354685  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:12.388487  358357 cri.go:89] found id: ""
	I1205 21:44:12.388519  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.388532  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:12.388542  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:12.388600  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:12.424338  358357 cri.go:89] found id: ""
	I1205 21:44:12.424366  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.424375  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:12.424382  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:12.424448  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:12.465997  358357 cri.go:89] found id: ""
	I1205 21:44:12.466028  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.466038  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:12.466044  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:12.466111  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:12.503567  358357 cri.go:89] found id: ""
	I1205 21:44:12.503602  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.503616  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:12.503625  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:12.503700  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:12.538669  358357 cri.go:89] found id: ""
	I1205 21:44:12.538696  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.538705  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:12.538711  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:12.538763  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:12.576375  358357 cri.go:89] found id: ""
	I1205 21:44:12.576416  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.576429  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:12.576442  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:12.576458  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:12.625471  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:12.625512  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:12.639689  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:12.639729  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:12.710873  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:12.710896  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:12.710936  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:12.789800  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:12.789841  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:15.331451  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:15.344354  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:15.344441  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:15.378596  358357 cri.go:89] found id: ""
	I1205 21:44:15.378631  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.378640  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:15.378647  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:15.378718  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:15.418342  358357 cri.go:89] found id: ""
	I1205 21:44:15.418373  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.418386  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:15.418394  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:15.418461  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:15.454130  358357 cri.go:89] found id: ""
	I1205 21:44:15.454167  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.454179  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:15.454187  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:15.454269  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:15.490777  358357 cri.go:89] found id: ""
	I1205 21:44:15.490813  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.490824  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:15.490831  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:15.490887  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:15.523706  358357 cri.go:89] found id: ""
	I1205 21:44:15.523747  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.523760  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:15.523768  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:15.523839  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:15.559019  358357 cri.go:89] found id: ""
	I1205 21:44:15.559049  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.559058  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:15.559065  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:15.559121  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:15.592611  358357 cri.go:89] found id: ""
	I1205 21:44:15.592640  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.592649  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:15.592655  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:15.592707  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:15.628295  358357 cri.go:89] found id: ""
	I1205 21:44:15.628333  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.628344  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:15.628354  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:15.628366  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:15.711123  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:15.711174  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:15.757486  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:15.757519  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:15.805750  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:15.805797  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:15.820685  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:15.820722  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:15.887073  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:18.388126  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:18.403082  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:18.403165  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:18.436195  358357 cri.go:89] found id: ""
	I1205 21:44:18.436230  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.436243  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:18.436255  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:18.436346  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:18.471756  358357 cri.go:89] found id: ""
	I1205 21:44:18.471788  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.471797  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:18.471804  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:18.471863  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:18.510693  358357 cri.go:89] found id: ""
	I1205 21:44:18.510741  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.510754  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:18.510763  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:18.510831  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:18.551976  358357 cri.go:89] found id: ""
	I1205 21:44:18.552014  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.552027  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:18.552036  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:18.552105  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:18.587679  358357 cri.go:89] found id: ""
	I1205 21:44:18.587716  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.587729  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:18.587738  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:18.587810  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:18.631487  358357 cri.go:89] found id: ""
	I1205 21:44:18.631519  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.631529  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:18.631547  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:18.631620  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:18.663618  358357 cri.go:89] found id: ""
	I1205 21:44:18.663646  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.663656  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:18.663665  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:18.663725  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:18.697864  358357 cri.go:89] found id: ""
	I1205 21:44:18.697894  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.697929  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:18.697943  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:18.697960  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:18.710777  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:18.710808  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:18.784195  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:18.784222  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:18.784241  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:18.863023  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:18.863071  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:18.903228  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:18.903267  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:21.454547  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:21.468048  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:21.468131  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:21.501472  358357 cri.go:89] found id: ""
	I1205 21:44:21.501503  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.501512  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:21.501518  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:21.501576  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:21.536522  358357 cri.go:89] found id: ""
	I1205 21:44:21.536564  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.536579  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:21.536589  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:21.536653  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:21.570924  358357 cri.go:89] found id: ""
	I1205 21:44:21.570955  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.570965  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:21.570971  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:21.571039  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:21.607649  358357 cri.go:89] found id: ""
	I1205 21:44:21.607678  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.607688  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:21.607697  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:21.607766  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:21.647025  358357 cri.go:89] found id: ""
	I1205 21:44:21.647052  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.647061  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:21.647067  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:21.647118  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:21.684418  358357 cri.go:89] found id: ""
	I1205 21:44:21.684460  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.684472  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:21.684481  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:21.684554  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:21.722093  358357 cri.go:89] found id: ""
	I1205 21:44:21.722129  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.722141  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:21.722149  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:21.722208  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:21.755757  358357 cri.go:89] found id: ""
	I1205 21:44:21.755794  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.755807  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:21.755821  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:21.755839  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:21.809049  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:21.809110  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:21.823336  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:21.823371  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:21.894389  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:21.894412  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:21.894428  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:21.980288  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:21.980336  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:24.522528  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:24.535496  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:24.535587  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:24.570301  358357 cri.go:89] found id: ""
	I1205 21:44:24.570354  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.570369  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:24.570379  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:24.570452  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:24.606310  358357 cri.go:89] found id: ""
	I1205 21:44:24.606340  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.606351  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:24.606358  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:24.606427  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:24.644078  358357 cri.go:89] found id: ""
	I1205 21:44:24.644183  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.644198  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:24.644208  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:24.644293  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:24.679685  358357 cri.go:89] found id: ""
	I1205 21:44:24.679719  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.679729  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:24.679736  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:24.679817  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:24.717070  358357 cri.go:89] found id: ""
	I1205 21:44:24.717180  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.717216  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:24.717236  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:24.717309  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:24.757345  358357 cri.go:89] found id: ""
	I1205 21:44:24.757380  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.757393  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:24.757401  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:24.757480  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:24.790795  358357 cri.go:89] found id: ""
	I1205 21:44:24.790823  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.790835  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:24.790850  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:24.790911  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:24.827238  358357 cri.go:89] found id: ""
	I1205 21:44:24.827276  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.827290  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:24.827302  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:24.827318  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:24.876812  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:24.876861  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:24.916558  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:24.916604  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:24.990733  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:24.990764  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:24.990785  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:25.065792  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:25.065852  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:27.608859  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:27.622449  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:27.622516  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:27.655675  358357 cri.go:89] found id: ""
	I1205 21:44:27.655704  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.655713  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:27.655718  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:27.655785  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:27.689751  358357 cri.go:89] found id: ""
	I1205 21:44:27.689781  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.689789  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:27.689795  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:27.689870  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:27.726811  358357 cri.go:89] found id: ""
	I1205 21:44:27.726842  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.726856  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:27.726865  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:27.726930  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:27.759600  358357 cri.go:89] found id: ""
	I1205 21:44:27.759631  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.759653  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:27.759660  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:27.759716  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:27.791700  358357 cri.go:89] found id: ""
	I1205 21:44:27.791738  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.791751  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:27.791763  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:27.791828  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:27.827998  358357 cri.go:89] found id: ""
	I1205 21:44:27.828031  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.828039  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:27.828045  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:27.828102  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:27.861452  358357 cri.go:89] found id: ""
	I1205 21:44:27.861481  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.861490  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:27.861496  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:27.861560  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:27.896469  358357 cri.go:89] found id: ""
	I1205 21:44:27.896519  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.896532  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:27.896545  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:27.896560  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:27.935274  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:27.935312  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:27.986078  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:27.986116  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:28.000432  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:28.000463  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:28.074500  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:28.074530  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:28.074549  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:30.660117  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:30.672827  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:30.672907  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:30.711952  358357 cri.go:89] found id: ""
	I1205 21:44:30.711983  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.711993  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:30.711999  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:30.712051  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:30.747513  358357 cri.go:89] found id: ""
	I1205 21:44:30.747548  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.747558  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:30.747567  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:30.747627  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:30.782830  358357 cri.go:89] found id: ""
	I1205 21:44:30.782867  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.782878  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:30.782887  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:30.782980  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:30.820054  358357 cri.go:89] found id: ""
	I1205 21:44:30.820098  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.820111  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:30.820123  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:30.820198  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:30.857325  358357 cri.go:89] found id: ""
	I1205 21:44:30.857362  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.857373  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:30.857382  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:30.857453  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:30.893105  358357 cri.go:89] found id: ""
	I1205 21:44:30.893227  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.893267  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:30.893281  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:30.893356  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:30.932764  358357 cri.go:89] found id: ""
	I1205 21:44:30.932802  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.932815  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:30.932823  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:30.932885  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:30.968962  358357 cri.go:89] found id: ""
	I1205 21:44:30.968999  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.969011  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:30.969023  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:30.969037  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:31.022152  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:31.022198  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:31.035418  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:31.035453  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:31.100989  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:31.101017  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:31.101030  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:31.182034  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:31.182079  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:33.725770  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:33.740956  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:33.741040  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:33.779158  358357 cri.go:89] found id: ""
	I1205 21:44:33.779198  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.779210  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:33.779218  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:33.779280  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:33.814600  358357 cri.go:89] found id: ""
	I1205 21:44:33.814628  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.814641  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:33.814649  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:33.814710  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:33.850220  358357 cri.go:89] found id: ""
	I1205 21:44:33.850255  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.850267  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:33.850276  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:33.850334  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:33.883737  358357 cri.go:89] found id: ""
	I1205 21:44:33.883765  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.883774  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:33.883781  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:33.883837  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:33.915007  358357 cri.go:89] found id: ""
	I1205 21:44:33.915046  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.915059  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:33.915068  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:33.915140  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:33.949038  358357 cri.go:89] found id: ""
	I1205 21:44:33.949077  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.949093  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:33.949102  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:33.949172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:33.982396  358357 cri.go:89] found id: ""
	I1205 21:44:33.982425  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.982437  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:33.982444  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:33.982521  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:34.020834  358357 cri.go:89] found id: ""
	I1205 21:44:34.020870  358357 logs.go:282] 0 containers: []
	W1205 21:44:34.020882  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:34.020894  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:34.020911  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:34.103184  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:34.103238  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:34.147047  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:34.147091  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:34.196893  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:34.196942  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:34.211694  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:34.211730  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:34.282543  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:36.783278  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:36.798192  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:36.798266  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:36.832685  358357 cri.go:89] found id: ""
	I1205 21:44:36.832723  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.832736  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:36.832743  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:36.832814  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:36.868040  358357 cri.go:89] found id: ""
	I1205 21:44:36.868074  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.868085  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:36.868092  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:36.868156  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:36.901145  358357 cri.go:89] found id: ""
	I1205 21:44:36.901177  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.901186  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:36.901192  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:36.901248  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:36.935061  358357 cri.go:89] found id: ""
	I1205 21:44:36.935097  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.935107  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:36.935114  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:36.935183  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:36.984729  358357 cri.go:89] found id: ""
	I1205 21:44:36.984761  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.984773  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:36.984782  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:36.984854  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:37.024644  358357 cri.go:89] found id: ""
	I1205 21:44:37.024684  358357 logs.go:282] 0 containers: []
	W1205 21:44:37.024696  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:37.024706  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:37.024781  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:37.074238  358357 cri.go:89] found id: ""
	I1205 21:44:37.074275  358357 logs.go:282] 0 containers: []
	W1205 21:44:37.074287  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:37.074295  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:37.074356  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:37.142410  358357 cri.go:89] found id: ""
	I1205 21:44:37.142444  358357 logs.go:282] 0 containers: []
	W1205 21:44:37.142457  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:37.142469  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:37.142488  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:37.192977  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:37.193018  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:37.206357  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:37.206393  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:37.272336  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:37.272372  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:37.272390  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:37.350655  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:37.350718  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:39.897421  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:39.911734  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:39.911806  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:39.950380  358357 cri.go:89] found id: ""
	I1205 21:44:39.950418  358357 logs.go:282] 0 containers: []
	W1205 21:44:39.950432  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:39.950441  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:39.950511  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:39.987259  358357 cri.go:89] found id: ""
	I1205 21:44:39.987292  358357 logs.go:282] 0 containers: []
	W1205 21:44:39.987302  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:39.987308  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:39.987363  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:40.021052  358357 cri.go:89] found id: ""
	I1205 21:44:40.021081  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.021090  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:40.021096  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:40.021167  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:40.057837  358357 cri.go:89] found id: ""
	I1205 21:44:40.057878  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.057919  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:40.057930  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:40.058004  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:40.094797  358357 cri.go:89] found id: ""
	I1205 21:44:40.094837  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.094853  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:40.094863  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:40.094932  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:40.130356  358357 cri.go:89] found id: ""
	I1205 21:44:40.130389  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.130398  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:40.130412  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:40.130467  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:40.164352  358357 cri.go:89] found id: ""
	I1205 21:44:40.164379  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.164389  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:40.164394  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:40.164452  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:40.197337  358357 cri.go:89] found id: ""
	I1205 21:44:40.197379  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.197397  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:40.197408  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:40.197422  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:40.210014  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:40.210051  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:40.280666  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:40.280691  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:40.280706  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:40.356849  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:40.356896  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:40.395202  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:40.395237  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:42.950686  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:42.964078  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:42.964156  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:42.999252  358357 cri.go:89] found id: ""
	I1205 21:44:42.999286  358357 logs.go:282] 0 containers: []
	W1205 21:44:42.999299  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:42.999307  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:42.999374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:43.035393  358357 cri.go:89] found id: ""
	I1205 21:44:43.035430  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.035444  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:43.035451  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:43.035505  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:43.070649  358357 cri.go:89] found id: ""
	I1205 21:44:43.070681  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.070693  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:43.070703  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:43.070776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:43.103054  358357 cri.go:89] found id: ""
	I1205 21:44:43.103089  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.103101  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:43.103110  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:43.103175  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:43.138607  358357 cri.go:89] found id: ""
	I1205 21:44:43.138640  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.138653  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:43.138661  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:43.138733  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:43.172188  358357 cri.go:89] found id: ""
	I1205 21:44:43.172220  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.172234  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:43.172241  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:43.172313  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:43.204838  358357 cri.go:89] found id: ""
	I1205 21:44:43.204872  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.204882  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:43.204891  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:43.204960  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:43.239985  358357 cri.go:89] found id: ""
	I1205 21:44:43.240011  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.240020  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:43.240031  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:43.240052  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:43.291033  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:43.291088  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:43.305100  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:43.305152  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:43.378988  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:43.379020  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:43.379054  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:43.466548  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:43.466602  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:46.007785  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:46.021496  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:46.021592  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:46.059259  358357 cri.go:89] found id: ""
	I1205 21:44:46.059296  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.059313  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:46.059321  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:46.059378  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:46.095304  358357 cri.go:89] found id: ""
	I1205 21:44:46.095336  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.095345  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:46.095351  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:46.095417  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:46.136792  358357 cri.go:89] found id: ""
	I1205 21:44:46.136822  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.136831  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:46.136837  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:46.136891  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:46.169696  358357 cri.go:89] found id: ""
	I1205 21:44:46.169726  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.169735  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:46.169742  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:46.169810  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:46.205481  358357 cri.go:89] found id: ""
	I1205 21:44:46.205513  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.205524  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:46.205531  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:46.205586  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:46.241112  358357 cri.go:89] found id: ""
	I1205 21:44:46.241157  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.241166  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:46.241173  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:46.241233  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:46.277129  358357 cri.go:89] found id: ""
	I1205 21:44:46.277159  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.277168  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:46.277174  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:46.277236  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:46.311196  358357 cri.go:89] found id: ""
	I1205 21:44:46.311238  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.311250  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:46.311275  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:46.311302  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:46.362581  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:46.362621  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:46.375887  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:46.375924  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:46.444563  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:46.444588  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:46.444605  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:46.525811  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:46.525857  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:49.065883  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:49.079482  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:49.079586  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:49.113676  358357 cri.go:89] found id: ""
	I1205 21:44:49.113706  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.113716  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:49.113722  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:49.113792  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:49.147653  358357 cri.go:89] found id: ""
	I1205 21:44:49.147686  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.147696  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:49.147702  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:49.147766  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:49.180934  358357 cri.go:89] found id: ""
	I1205 21:44:49.180981  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.180996  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:49.181004  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:49.181064  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:49.214837  358357 cri.go:89] found id: ""
	I1205 21:44:49.214874  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.214883  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:49.214891  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:49.214960  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:49.249332  358357 cri.go:89] found id: ""
	I1205 21:44:49.249369  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.249380  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:49.249387  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:49.249451  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:49.284072  358357 cri.go:89] found id: ""
	I1205 21:44:49.284101  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.284109  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:49.284116  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:49.284169  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:49.323559  358357 cri.go:89] found id: ""
	I1205 21:44:49.323597  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.323607  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:49.323614  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:49.323675  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:49.361219  358357 cri.go:89] found id: ""
	I1205 21:44:49.361253  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.361263  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:49.361275  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:49.361291  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:49.413099  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:49.413141  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:49.426610  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:49.426648  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:49.498740  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:49.498765  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:49.498794  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:49.578451  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:49.578495  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:52.117874  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:52.131510  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:52.131601  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:52.169491  358357 cri.go:89] found id: ""
	I1205 21:44:52.169522  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.169535  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:52.169542  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:52.169617  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:52.202511  358357 cri.go:89] found id: ""
	I1205 21:44:52.202540  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.202556  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:52.202562  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:52.202630  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:52.239649  358357 cri.go:89] found id: ""
	I1205 21:44:52.239687  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.239699  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:52.239707  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:52.239771  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:52.274330  358357 cri.go:89] found id: ""
	I1205 21:44:52.274368  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.274380  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:52.274388  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:52.274452  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:52.310165  358357 cri.go:89] found id: ""
	I1205 21:44:52.310195  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.310207  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:52.310214  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:52.310284  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:52.344246  358357 cri.go:89] found id: ""
	I1205 21:44:52.344278  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.344293  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:52.344302  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:52.344375  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:52.379475  358357 cri.go:89] found id: ""
	I1205 21:44:52.379508  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.379521  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:52.379529  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:52.379606  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:52.419952  358357 cri.go:89] found id: ""
	I1205 21:44:52.419981  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.419990  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:52.420002  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:52.420014  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:52.471608  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:52.471659  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:52.486003  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:52.486036  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:52.560751  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:52.560786  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:52.560804  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:52.641284  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:52.641340  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:55.183102  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:55.197406  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:55.197502  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:55.231335  358357 cri.go:89] found id: ""
	I1205 21:44:55.231365  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.231373  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:55.231381  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:55.231440  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:55.267877  358357 cri.go:89] found id: ""
	I1205 21:44:55.267907  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.267916  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:55.267923  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:55.267978  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:55.302400  358357 cri.go:89] found id: ""
	I1205 21:44:55.302428  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.302437  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:55.302443  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:55.302496  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:55.337878  358357 cri.go:89] found id: ""
	I1205 21:44:55.337932  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.337946  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:55.337954  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:55.338008  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:55.371877  358357 cri.go:89] found id: ""
	I1205 21:44:55.371920  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.371931  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:55.371941  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:55.372020  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:55.406914  358357 cri.go:89] found id: ""
	I1205 21:44:55.406947  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.406961  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:55.406970  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:55.407043  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:55.439910  358357 cri.go:89] found id: ""
	I1205 21:44:55.439940  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.439949  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:55.439955  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:55.440011  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:55.476886  358357 cri.go:89] found id: ""
	I1205 21:44:55.476916  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.476925  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:55.476936  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:55.476949  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:55.531376  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:55.531422  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:55.545011  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:55.545050  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:55.620082  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:55.620122  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:55.620139  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:55.708465  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:55.708512  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:58.256289  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:58.269484  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:58.269560  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:58.303846  358357 cri.go:89] found id: ""
	I1205 21:44:58.303884  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.303897  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:58.303906  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:58.303978  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:58.343160  358357 cri.go:89] found id: ""
	I1205 21:44:58.343190  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.343199  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:58.343205  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:58.343269  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:58.379207  358357 cri.go:89] found id: ""
	I1205 21:44:58.379240  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.379252  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:58.379261  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:58.379323  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:58.415939  358357 cri.go:89] found id: ""
	I1205 21:44:58.415971  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.415981  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:58.415988  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:58.416046  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:58.450799  358357 cri.go:89] found id: ""
	I1205 21:44:58.450837  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.450848  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:58.450857  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:58.450927  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:58.487557  358357 cri.go:89] found id: ""
	I1205 21:44:58.487594  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.487602  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:58.487608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:58.487659  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:58.523932  358357 cri.go:89] found id: ""
	I1205 21:44:58.523960  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.523969  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:58.523976  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:58.524041  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:58.559140  358357 cri.go:89] found id: ""
	I1205 21:44:58.559169  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.559179  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:58.559193  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:58.559209  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:58.643471  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:58.643520  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:58.683077  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:58.683118  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:58.736396  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:58.736441  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:58.751080  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:58.751115  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:58.824208  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:01.324977  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:01.338088  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:01.338169  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:01.375859  358357 cri.go:89] found id: ""
	I1205 21:45:01.375913  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.375927  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:01.375936  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:01.376012  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:01.411327  358357 cri.go:89] found id: ""
	I1205 21:45:01.411367  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.411377  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:01.411384  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:01.411441  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:01.446560  358357 cri.go:89] found id: ""
	I1205 21:45:01.446599  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.446612  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:01.446620  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:01.446687  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:01.480650  358357 cri.go:89] found id: ""
	I1205 21:45:01.480688  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.480702  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:01.480711  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:01.480788  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:01.515546  358357 cri.go:89] found id: ""
	I1205 21:45:01.515596  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.515609  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:01.515615  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:01.515680  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:01.550395  358357 cri.go:89] found id: ""
	I1205 21:45:01.550435  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.550449  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:01.550457  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:01.550619  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:01.588327  358357 cri.go:89] found id: ""
	I1205 21:45:01.588362  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.588375  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:01.588385  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:01.588456  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:01.622881  358357 cri.go:89] found id: ""
	I1205 21:45:01.622922  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.622934  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:01.622948  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:01.622965  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:01.673702  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:01.673752  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:01.689462  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:01.689504  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:01.758509  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:01.758536  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:01.758550  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:01.839238  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:01.839294  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:04.380325  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:04.393102  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:04.393192  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:04.428295  358357 cri.go:89] found id: ""
	I1205 21:45:04.428327  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.428339  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:04.428348  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:04.428455  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:04.463190  358357 cri.go:89] found id: ""
	I1205 21:45:04.463226  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.463238  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:04.463246  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:04.463316  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:04.496966  358357 cri.go:89] found id: ""
	I1205 21:45:04.497010  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.497022  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:04.497030  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:04.497097  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:04.531907  358357 cri.go:89] found id: ""
	I1205 21:45:04.531938  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.531950  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:04.531958  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:04.532031  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:04.565760  358357 cri.go:89] found id: ""
	I1205 21:45:04.565793  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.565806  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:04.565815  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:04.565885  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:04.599720  358357 cri.go:89] found id: ""
	I1205 21:45:04.599756  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.599768  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:04.599774  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:04.599829  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:04.635208  358357 cri.go:89] found id: ""
	I1205 21:45:04.635241  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.635250  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:04.635257  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:04.635320  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:04.670121  358357 cri.go:89] found id: ""
	I1205 21:45:04.670153  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.670162  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:04.670171  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:04.670183  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:04.708596  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:04.708641  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:04.765866  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:04.765919  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:04.780740  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:04.780772  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:04.856357  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:04.856386  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:04.856406  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:07.437028  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:07.450097  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:07.450168  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:07.485877  358357 cri.go:89] found id: ""
	I1205 21:45:07.485921  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.485934  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:07.485943  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:07.486007  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:07.520629  358357 cri.go:89] found id: ""
	I1205 21:45:07.520658  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.520666  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:07.520673  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:07.520732  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:07.555445  358357 cri.go:89] found id: ""
	I1205 21:45:07.555476  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.555487  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:07.555493  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:07.555560  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:07.594479  358357 cri.go:89] found id: ""
	I1205 21:45:07.594513  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.594526  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:07.594533  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:07.594594  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:07.629467  358357 cri.go:89] found id: ""
	I1205 21:45:07.629498  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.629509  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:07.629516  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:07.629572  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:07.666166  358357 cri.go:89] found id: ""
	I1205 21:45:07.666204  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.666218  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:07.666227  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:07.666303  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:07.700440  358357 cri.go:89] found id: ""
	I1205 21:45:07.700472  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.700481  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:07.700490  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:07.700557  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:07.735094  358357 cri.go:89] found id: ""
	I1205 21:45:07.735130  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.735152  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:07.735166  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:07.735184  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:07.788339  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:07.788386  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:07.802847  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:07.802879  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:07.873731  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:07.873755  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:07.873771  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:07.953369  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:07.953411  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:10.492613  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:10.506259  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:10.506374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:10.540075  358357 cri.go:89] found id: ""
	I1205 21:45:10.540111  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.540120  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:10.540127  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:10.540216  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:10.577943  358357 cri.go:89] found id: ""
	I1205 21:45:10.577978  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.577991  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:10.577998  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:10.578073  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:10.614217  358357 cri.go:89] found id: ""
	I1205 21:45:10.614255  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.614268  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:10.614276  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:10.614346  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:10.649669  358357 cri.go:89] found id: ""
	I1205 21:45:10.649739  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.649751  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:10.649760  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:10.649830  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:10.687171  358357 cri.go:89] found id: ""
	I1205 21:45:10.687202  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.687211  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:10.687217  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:10.687307  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:10.722815  358357 cri.go:89] found id: ""
	I1205 21:45:10.722848  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.722858  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:10.722865  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:10.722934  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:10.759711  358357 cri.go:89] found id: ""
	I1205 21:45:10.759753  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.759767  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:10.759777  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:10.759849  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:10.797955  358357 cri.go:89] found id: ""
	I1205 21:45:10.797991  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.798004  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:10.798017  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:10.798034  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:10.851920  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:10.851971  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:10.867691  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:10.867728  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:10.953866  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:10.953891  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:10.953928  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:11.033945  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:11.033990  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:13.574051  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:13.587371  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:13.587454  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:13.623492  358357 cri.go:89] found id: ""
	I1205 21:45:13.623524  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.623540  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:13.623546  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:13.623603  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:13.659547  358357 cri.go:89] found id: ""
	I1205 21:45:13.659588  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.659602  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:13.659610  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:13.659671  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:13.694113  358357 cri.go:89] found id: ""
	I1205 21:45:13.694153  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.694166  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:13.694173  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:13.694233  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:13.729551  358357 cri.go:89] found id: ""
	I1205 21:45:13.729591  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.729604  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:13.729613  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:13.729684  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:13.763006  358357 cri.go:89] found id: ""
	I1205 21:45:13.763049  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.763062  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:13.763071  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:13.763134  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:13.802231  358357 cri.go:89] found id: ""
	I1205 21:45:13.802277  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.802292  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:13.802302  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:13.802384  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:13.840193  358357 cri.go:89] found id: ""
	I1205 21:45:13.840225  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.840240  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:13.840249  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:13.840335  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:13.872625  358357 cri.go:89] found id: ""
	I1205 21:45:13.872653  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.872663  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:13.872673  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:13.872687  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:13.922983  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:13.923028  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:13.936484  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:13.936517  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:14.008295  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:14.008319  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:14.008334  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:14.095036  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:14.095091  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:16.637164  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:16.653070  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:16.653153  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:16.687386  358357 cri.go:89] found id: ""
	I1205 21:45:16.687441  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.687456  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:16.687466  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:16.687545  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:16.722204  358357 cri.go:89] found id: ""
	I1205 21:45:16.722235  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.722244  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:16.722250  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:16.722323  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:16.757594  358357 cri.go:89] found id: ""
	I1205 21:45:16.757622  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.757631  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:16.757637  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:16.757691  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:16.790401  358357 cri.go:89] found id: ""
	I1205 21:45:16.790433  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.790442  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:16.790449  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:16.790502  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:16.827569  358357 cri.go:89] found id: ""
	I1205 21:45:16.827602  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.827615  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:16.827624  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:16.827701  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:16.860920  358357 cri.go:89] found id: ""
	I1205 21:45:16.860949  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.860965  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:16.860974  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:16.861038  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:16.895008  358357 cri.go:89] found id: ""
	I1205 21:45:16.895051  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.895063  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:16.895072  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:16.895151  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:16.931916  358357 cri.go:89] found id: ""
	I1205 21:45:16.931951  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.931963  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:16.931975  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:16.931987  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:17.016108  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:17.016156  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:17.055353  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:17.055390  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:17.105859  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:17.105921  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:17.121357  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:17.121394  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:17.192584  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:19.693409  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:19.706431  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:19.706498  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:19.741212  358357 cri.go:89] found id: ""
	I1205 21:45:19.741249  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.741258  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:19.741268  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:19.741335  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:19.775906  358357 cri.go:89] found id: ""
	I1205 21:45:19.775945  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.775954  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:19.775960  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:19.776031  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:19.810789  358357 cri.go:89] found id: ""
	I1205 21:45:19.810822  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.810831  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:19.810839  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:19.810897  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:19.847669  358357 cri.go:89] found id: ""
	I1205 21:45:19.847701  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.847710  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:19.847717  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:19.847776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:19.881700  358357 cri.go:89] found id: ""
	I1205 21:45:19.881739  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.881752  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:19.881761  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:19.881838  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:19.919085  358357 cri.go:89] found id: ""
	I1205 21:45:19.919125  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.919140  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:19.919148  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:19.919226  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:19.955024  358357 cri.go:89] found id: ""
	I1205 21:45:19.955064  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.955078  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:19.955086  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:19.955153  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:19.991482  358357 cri.go:89] found id: ""
	I1205 21:45:19.991511  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.991519  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:19.991530  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:19.991543  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:20.041980  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:20.042030  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:20.055580  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:20.055612  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:20.127194  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:20.127225  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:20.127242  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:20.207750  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:20.207797  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:22.749233  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:22.763720  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:22.763796  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:22.798779  358357 cri.go:89] found id: ""
	I1205 21:45:22.798810  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.798820  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:22.798826  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:22.798906  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:22.837894  358357 cri.go:89] found id: ""
	I1205 21:45:22.837949  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.837964  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:22.837972  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:22.838026  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:22.872671  358357 cri.go:89] found id: ""
	I1205 21:45:22.872701  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.872713  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:22.872720  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:22.872785  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:22.906877  358357 cri.go:89] found id: ""
	I1205 21:45:22.906919  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.906929  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:22.906936  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:22.906988  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:22.941445  358357 cri.go:89] found id: ""
	I1205 21:45:22.941475  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.941486  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:22.941494  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:22.941565  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:22.976633  358357 cri.go:89] found id: ""
	I1205 21:45:22.976671  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.976685  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:22.976694  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:22.976773  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:23.017034  358357 cri.go:89] found id: ""
	I1205 21:45:23.017077  358357 logs.go:282] 0 containers: []
	W1205 21:45:23.017090  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:23.017096  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:23.017153  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:23.065098  358357 cri.go:89] found id: ""
	I1205 21:45:23.065136  358357 logs.go:282] 0 containers: []
	W1205 21:45:23.065149  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:23.065164  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:23.065180  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:23.145053  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:23.145104  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:23.159522  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:23.159557  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:23.228841  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:23.228865  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:23.228885  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:23.313351  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:23.313397  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:25.852034  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:25.865843  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:25.865944  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:25.899186  358357 cri.go:89] found id: ""
	I1205 21:45:25.899212  358357 logs.go:282] 0 containers: []
	W1205 21:45:25.899222  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:25.899231  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:25.899298  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:25.938242  358357 cri.go:89] found id: ""
	I1205 21:45:25.938274  358357 logs.go:282] 0 containers: []
	W1205 21:45:25.938286  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:25.938299  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:25.938371  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:25.972322  358357 cri.go:89] found id: ""
	I1205 21:45:25.972355  358357 logs.go:282] 0 containers: []
	W1205 21:45:25.972368  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:25.972376  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:25.972446  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:26.010638  358357 cri.go:89] found id: ""
	I1205 21:45:26.010667  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.010678  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:26.010686  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:26.010754  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:26.045415  358357 cri.go:89] found id: ""
	I1205 21:45:26.045450  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.045459  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:26.045466  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:26.045548  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:26.084635  358357 cri.go:89] found id: ""
	I1205 21:45:26.084673  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.084687  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:26.084696  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:26.084767  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:26.117417  358357 cri.go:89] found id: ""
	I1205 21:45:26.117455  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.117467  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:26.117475  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:26.117539  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:26.151857  358357 cri.go:89] found id: ""
	I1205 21:45:26.151893  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.151905  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:26.151918  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:26.151936  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:26.238876  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:26.238926  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:26.280970  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:26.281006  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:26.336027  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:26.336083  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:26.350619  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:26.350654  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:26.418836  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:28.919046  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:28.933916  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:28.934002  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:28.971698  358357 cri.go:89] found id: ""
	I1205 21:45:28.971728  358357 logs.go:282] 0 containers: []
	W1205 21:45:28.971737  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:28.971744  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:28.971807  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:29.007385  358357 cri.go:89] found id: ""
	I1205 21:45:29.007423  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.007435  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:29.007443  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:29.007509  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:29.041087  358357 cri.go:89] found id: ""
	I1205 21:45:29.041130  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.041143  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:29.041151  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:29.041222  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:29.076926  358357 cri.go:89] found id: ""
	I1205 21:45:29.076965  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.076977  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:29.076986  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:29.077064  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:29.116376  358357 cri.go:89] found id: ""
	I1205 21:45:29.116419  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.116433  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:29.116443  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:29.116523  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:29.152495  358357 cri.go:89] found id: ""
	I1205 21:45:29.152530  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.152543  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:29.152552  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:29.152639  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:29.187647  358357 cri.go:89] found id: ""
	I1205 21:45:29.187681  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.187695  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:29.187704  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:29.187775  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:29.220410  358357 cri.go:89] found id: ""
	I1205 21:45:29.220452  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.220469  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:29.220484  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:29.220513  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:29.287156  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:29.287184  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:29.287200  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:29.365592  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:29.365644  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:29.407876  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:29.407917  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:29.462241  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:29.462294  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:31.976691  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:31.991087  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:31.991172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:32.025743  358357 cri.go:89] found id: ""
	I1205 21:45:32.025781  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.025793  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:32.025801  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:32.025870  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:32.061790  358357 cri.go:89] found id: ""
	I1205 21:45:32.061828  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.061838  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:32.061844  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:32.061929  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:32.095437  358357 cri.go:89] found id: ""
	I1205 21:45:32.095474  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.095486  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:32.095493  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:32.095553  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:32.132203  358357 cri.go:89] found id: ""
	I1205 21:45:32.132242  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.132255  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:32.132264  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:32.132325  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:32.168529  358357 cri.go:89] found id: ""
	I1205 21:45:32.168566  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.168582  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:32.168590  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:32.168661  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:32.204816  358357 cri.go:89] found id: ""
	I1205 21:45:32.204851  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.204860  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:32.204885  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:32.204949  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:32.241661  358357 cri.go:89] found id: ""
	I1205 21:45:32.241696  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.241706  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:32.241712  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:32.241768  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:32.275458  358357 cri.go:89] found id: ""
	I1205 21:45:32.275491  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.275500  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:32.275511  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:32.275524  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:32.329044  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:32.329098  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:32.343399  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:32.343432  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:32.420102  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:32.420135  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:32.420152  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:32.503061  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:32.503109  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:35.042457  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:35.056486  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:35.056564  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:35.091571  358357 cri.go:89] found id: ""
	I1205 21:45:35.091603  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.091613  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:35.091619  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:35.091686  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:35.130172  358357 cri.go:89] found id: ""
	I1205 21:45:35.130213  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.130225  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:35.130233  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:35.130303  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:35.165723  358357 cri.go:89] found id: ""
	I1205 21:45:35.165754  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.165763  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:35.165770  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:35.165836  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:35.203599  358357 cri.go:89] found id: ""
	I1205 21:45:35.203632  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.203646  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:35.203658  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:35.203721  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:35.237881  358357 cri.go:89] found id: ""
	I1205 21:45:35.237926  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.237938  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:35.237946  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:35.238015  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:35.276506  358357 cri.go:89] found id: ""
	I1205 21:45:35.276543  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.276555  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:35.276563  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:35.276632  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:35.309600  358357 cri.go:89] found id: ""
	I1205 21:45:35.309632  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.309644  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:35.309652  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:35.309723  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:35.343062  358357 cri.go:89] found id: ""
	I1205 21:45:35.343097  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.343110  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:35.343124  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:35.343146  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:35.398686  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:35.398724  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:35.412910  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:35.412945  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:35.479542  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:35.479570  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:35.479587  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:35.556709  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:35.556754  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:38.095347  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:38.110086  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:38.110161  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:38.149114  358357 cri.go:89] found id: ""
	I1205 21:45:38.149149  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.149162  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:38.149172  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:38.149250  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:38.184110  358357 cri.go:89] found id: ""
	I1205 21:45:38.184141  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.184151  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:38.184157  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:38.184213  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:38.219569  358357 cri.go:89] found id: ""
	I1205 21:45:38.219608  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.219620  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:38.219628  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:38.219703  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:38.253096  358357 cri.go:89] found id: ""
	I1205 21:45:38.253133  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.253158  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:38.253167  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:38.253259  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:38.291558  358357 cri.go:89] found id: ""
	I1205 21:45:38.291591  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.291601  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:38.291608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:38.291689  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:38.328236  358357 cri.go:89] found id: ""
	I1205 21:45:38.328269  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.328281  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:38.328288  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:38.328353  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:38.363263  358357 cri.go:89] found id: ""
	I1205 21:45:38.363295  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.363305  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:38.363311  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:38.363371  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:38.396544  358357 cri.go:89] found id: ""
	I1205 21:45:38.396577  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.396587  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:38.396598  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:38.396611  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:38.438187  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:38.438226  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:38.492047  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:38.492086  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:38.505080  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:38.505123  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:38.574293  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:38.574320  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:38.574343  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:41.155780  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:41.170875  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:41.170959  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:41.206755  358357 cri.go:89] found id: ""
	I1205 21:45:41.206793  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.206807  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:41.206824  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:41.206882  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:41.251021  358357 cri.go:89] found id: ""
	I1205 21:45:41.251060  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.251074  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:41.251082  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:41.251144  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:41.286805  358357 cri.go:89] found id: ""
	I1205 21:45:41.286836  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.286845  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:41.286852  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:41.286910  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:41.319489  358357 cri.go:89] found id: ""
	I1205 21:45:41.319526  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.319540  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:41.319549  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:41.319620  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:41.352769  358357 cri.go:89] found id: ""
	I1205 21:45:41.352807  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.352817  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:41.352823  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:41.352883  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:41.386830  358357 cri.go:89] found id: ""
	I1205 21:45:41.386869  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.386881  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:41.386889  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:41.386961  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:41.424824  358357 cri.go:89] found id: ""
	I1205 21:45:41.424866  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.424882  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:41.424892  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:41.424957  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:41.460273  358357 cri.go:89] found id: ""
	I1205 21:45:41.460307  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.460316  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:41.460327  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:41.460341  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:41.539890  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:41.539951  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:41.579521  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:41.579570  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:41.630867  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:41.630917  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:41.644854  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:41.644892  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:41.719202  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:44.219965  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:44.234714  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:44.234824  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:44.269879  358357 cri.go:89] found id: ""
	I1205 21:45:44.269931  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.269945  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:44.269954  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:44.270023  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:44.302994  358357 cri.go:89] found id: ""
	I1205 21:45:44.303034  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.303047  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:44.303056  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:44.303126  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:44.337575  358357 cri.go:89] found id: ""
	I1205 21:45:44.337604  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.337613  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:44.337620  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:44.337674  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:44.374554  358357 cri.go:89] found id: ""
	I1205 21:45:44.374591  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.374600  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:44.374605  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:44.374671  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:44.409965  358357 cri.go:89] found id: ""
	I1205 21:45:44.410001  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.410013  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:44.410021  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:44.410090  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:44.446583  358357 cri.go:89] found id: ""
	I1205 21:45:44.446620  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.446633  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:44.446641  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:44.446705  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:44.481187  358357 cri.go:89] found id: ""
	I1205 21:45:44.481223  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.481239  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:44.481248  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:44.481315  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:44.515729  358357 cri.go:89] found id: ""
	I1205 21:45:44.515761  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.515770  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:44.515781  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:44.515799  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:44.567266  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:44.567314  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:44.581186  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:44.581219  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:44.655377  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:44.655404  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:44.655420  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:44.741789  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:44.741835  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:47.283721  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:47.296771  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:47.296839  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:47.330892  358357 cri.go:89] found id: ""
	I1205 21:45:47.330927  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.330941  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:47.330949  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:47.331015  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:47.362771  358357 cri.go:89] found id: ""
	I1205 21:45:47.362805  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.362818  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:47.362826  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:47.362898  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:47.397052  358357 cri.go:89] found id: ""
	I1205 21:45:47.397082  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.397092  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:47.397100  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:47.397172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:47.430155  358357 cri.go:89] found id: ""
	I1205 21:45:47.430184  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.430193  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:47.430199  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:47.430255  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:47.465183  358357 cri.go:89] found id: ""
	I1205 21:45:47.465230  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.465244  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:47.465252  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:47.465327  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:47.505432  358357 cri.go:89] found id: ""
	I1205 21:45:47.505467  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.505479  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:47.505487  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:47.505583  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:47.538813  358357 cri.go:89] found id: ""
	I1205 21:45:47.538841  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.538851  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:47.538859  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:47.538913  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:47.577554  358357 cri.go:89] found id: ""
	I1205 21:45:47.577589  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.577598  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:47.577610  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:47.577623  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:47.633652  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:47.633700  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:47.648242  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:47.648291  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:47.723335  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:47.723369  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:47.723387  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:47.806404  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:47.806454  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:50.348134  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:50.361273  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:50.361367  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:50.393942  358357 cri.go:89] found id: ""
	I1205 21:45:50.393972  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.393980  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:50.393986  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:50.394054  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:50.430835  358357 cri.go:89] found id: ""
	I1205 21:45:50.430873  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.430884  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:50.430892  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:50.430963  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:50.465245  358357 cri.go:89] found id: ""
	I1205 21:45:50.465303  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.465316  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:50.465326  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:50.465397  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:50.498370  358357 cri.go:89] found id: ""
	I1205 21:45:50.498396  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.498406  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:50.498414  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:50.498480  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:50.530194  358357 cri.go:89] found id: ""
	I1205 21:45:50.530233  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.530247  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:50.530262  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:50.530383  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:50.567181  358357 cri.go:89] found id: ""
	I1205 21:45:50.567216  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.567229  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:50.567237  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:50.567329  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:50.600345  358357 cri.go:89] found id: ""
	I1205 21:45:50.600376  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.600385  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:50.600392  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:50.600446  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:50.635072  358357 cri.go:89] found id: ""
	I1205 21:45:50.635108  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.635121  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:50.635133  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:50.635146  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:50.702977  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:50.703001  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:50.703020  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:50.785033  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:50.785077  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:50.825173  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:50.825214  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:50.876664  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:50.876723  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:53.391161  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:53.405635  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:53.405713  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:53.440319  358357 cri.go:89] found id: ""
	I1205 21:45:53.440358  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.440371  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:53.440380  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:53.440446  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:53.480169  358357 cri.go:89] found id: ""
	I1205 21:45:53.480195  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.480204  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:53.480210  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:53.480355  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:53.515202  358357 cri.go:89] found id: ""
	I1205 21:45:53.515233  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.515315  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:53.515332  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:53.515401  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:53.552351  358357 cri.go:89] found id: ""
	I1205 21:45:53.552388  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.552402  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:53.552411  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:53.552481  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:53.590669  358357 cri.go:89] found id: ""
	I1205 21:45:53.590705  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.590717  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:53.590726  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:53.590791  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:53.627977  358357 cri.go:89] found id: ""
	I1205 21:45:53.628015  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.628029  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:53.628037  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:53.628112  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:53.662711  358357 cri.go:89] found id: ""
	I1205 21:45:53.662745  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.662761  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:53.662769  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:53.662839  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:53.696925  358357 cri.go:89] found id: ""
	I1205 21:45:53.696965  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.696976  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:53.696988  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:53.697012  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:53.750924  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:53.750970  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:53.763965  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:53.763997  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:53.832335  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:53.832361  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:53.832377  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:53.915961  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:53.916011  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:56.456367  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:56.469503  358357 kubeadm.go:597] duration metric: took 4m2.564660353s to restartPrimaryControlPlane
	W1205 21:45:56.469630  358357 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
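The "duration metric: took 4m2.564660353s" line above marks the point where minikube stops trying to reuse the existing control plane and falls back to a full reset and re-init. A condensed sketch of that fallback, assembled from the exact commands captured on the lines below (the grouping is illustrative, not minikube source):

    # Condensed fallback sequence; both commands appear verbatim in the log below.
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm reset --cri-socket /var/run/crio/crio.sock --force
    sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
      --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem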
	I1205 21:45:56.469672  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:45:56.934079  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:45:56.948092  358357 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:45:56.958166  358357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:45:56.967591  358357 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:45:56.967613  358357 kubeadm.go:157] found existing configuration files:
	
	I1205 21:45:56.967660  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:45:56.977085  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:45:56.977152  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:45:56.987395  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:45:56.996675  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:45:56.996764  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:45:57.010323  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:45:57.020441  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:45:57.020514  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:45:57.032114  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:45:57.042012  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:45:57.042095  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
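The grep/rm pairs above implement a stale-config sweep: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise removed before kubeadm init regenerates it. A compact sketch of the same check (illustrative shell, not minikube source; the individual commands mirror the ones captured above):

    # Illustrative sweep over the four kubeconfig files checked above (not minikube source):
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f"; then
        # Missing file or wrong endpoint: treat as stale and remove before kubeadm init.
        sudo rm -f "/etc/kubernetes/$f"
      fi
    done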
	I1205 21:45:57.051763  358357 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:45:57.126716  358357 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 21:45:57.126840  358357 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:45:57.265491  358357 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:45:57.265694  358357 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:45:57.265856  358357 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 21:45:57.450377  358357 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:45:57.452240  358357 out.go:235]   - Generating certificates and keys ...
	I1205 21:45:57.452361  358357 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:45:57.452458  358357 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:45:57.452625  358357 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:45:57.452712  358357 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:45:57.452824  358357 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:45:57.452913  358357 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:45:57.453084  358357 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:45:57.453179  358357 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:45:57.453276  358357 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:45:57.453343  358357 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:45:57.453377  358357 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:45:57.453430  358357 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:45:57.872211  358357 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:45:58.085006  358357 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:45:58.165194  358357 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:45:58.323597  358357 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:45:58.338715  358357 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:45:58.340504  358357 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:45:58.340604  358357 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:45:58.479241  358357 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:45:58.480831  358357 out.go:235]   - Booting up control plane ...
	I1205 21:45:58.480991  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:45:58.495549  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:45:58.497073  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:45:58.498469  358357 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:45:58.501265  358357 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 21:46:38.501720  358357 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 21:46:38.502250  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:46:38.502440  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:46:43.502826  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:46:43.503045  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:46:53.503222  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:46:53.503418  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:47:13.503828  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:47:13.504090  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:47:53.504952  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:47:53.505292  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:47:53.505331  358357 kubeadm.go:310] 
	I1205 21:47:53.505381  358357 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 21:47:53.505424  358357 kubeadm.go:310] 		timed out waiting for the condition
	I1205 21:47:53.505431  358357 kubeadm.go:310] 
	I1205 21:47:53.505493  358357 kubeadm.go:310] 	This error is likely caused by:
	I1205 21:47:53.505540  358357 kubeadm.go:310] 		- The kubelet is not running
	I1205 21:47:53.505687  358357 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 21:47:53.505696  358357 kubeadm.go:310] 
	I1205 21:47:53.505840  358357 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 21:47:53.505918  358357 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 21:47:53.505969  358357 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 21:47:53.505978  358357 kubeadm.go:310] 
	I1205 21:47:53.506113  358357 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 21:47:53.506224  358357 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 21:47:53.506234  358357 kubeadm.go:310] 
	I1205 21:47:53.506378  358357 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 21:47:53.506488  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 21:47:53.506579  358357 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 21:47:53.506669  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 21:47:53.506680  358357 kubeadm.go:310] 
	I1205 21:47:53.507133  358357 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:47:53.507293  358357 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 21:47:53.507399  358357 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1205 21:47:53.507583  358357 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
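The failure output above ends with kubeadm's own troubleshooting guidance. Run as-is on the node, those suggestions amount to the following (commands taken directly from the guidance in the log; CONTAINERID is a placeholder to fill in from the crictl listing):

    # kubeadm's suggested checks, copied from the guidance above:
    systemctl status kubelet
    journalctl -xeu kubelet
    sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
    # then, for a failing container found above:
    # sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID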
	I1205 21:47:53.507635  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:47:58.918917  358357 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.411249531s)
	I1205 21:47:58.919047  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:47:58.933824  358357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:47:58.943937  358357 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:47:58.943961  358357 kubeadm.go:157] found existing configuration files:
	
	I1205 21:47:58.944019  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:47:58.953302  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:47:58.953376  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:47:58.963401  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:47:58.973241  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:47:58.973342  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:47:58.982980  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:47:58.992301  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:47:58.992376  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:47:59.002794  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:47:59.012679  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:47:59.012749  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:47:59.023775  358357 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:47:59.094520  358357 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 21:47:59.094668  358357 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:47:59.233248  358357 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:47:59.233420  358357 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:47:59.233569  358357 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 21:47:59.418344  358357 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:47:59.420333  358357 out.go:235]   - Generating certificates and keys ...
	I1205 21:47:59.420467  358357 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:47:59.420553  358357 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:47:59.422458  358357 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:47:59.422606  358357 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:47:59.422717  358357 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:47:59.422802  358357 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:47:59.422889  358357 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:47:59.422998  358357 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:47:59.423099  358357 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:47:59.423222  358357 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:47:59.423283  358357 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:47:59.423376  358357 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:47:59.599862  358357 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:47:59.763783  358357 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:47:59.854070  358357 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:48:00.213384  358357 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:48:00.228512  358357 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:48:00.229454  358357 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:48:00.229505  358357 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:48:00.369826  358357 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:48:00.371919  358357 out.go:235]   - Booting up control plane ...
	I1205 21:48:00.372059  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:48:00.382814  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:48:00.384284  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:48:00.385894  358357 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:48:00.388267  358357 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 21:48:40.389474  358357 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 21:48:40.389611  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:48:40.389883  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:48:45.390223  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:48:45.390529  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:48:55.390550  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:48:55.390784  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:49:15.391410  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:49:15.391608  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:49:55.392061  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:49:55.392321  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:49:55.392332  358357 kubeadm.go:310] 
	I1205 21:49:55.392403  358357 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 21:49:55.392464  358357 kubeadm.go:310] 		timed out waiting for the condition
	I1205 21:49:55.392485  358357 kubeadm.go:310] 
	I1205 21:49:55.392538  358357 kubeadm.go:310] 	This error is likely caused by:
	I1205 21:49:55.392587  358357 kubeadm.go:310] 		- The kubelet is not running
	I1205 21:49:55.392729  358357 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 21:49:55.392761  358357 kubeadm.go:310] 
	I1205 21:49:55.392882  358357 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 21:49:55.392933  358357 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 21:49:55.393025  358357 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 21:49:55.393057  358357 kubeadm.go:310] 
	I1205 21:49:55.393186  358357 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 21:49:55.393293  358357 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 21:49:55.393303  358357 kubeadm.go:310] 
	I1205 21:49:55.393453  358357 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 21:49:55.393602  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 21:49:55.393728  358357 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 21:49:55.393827  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 21:49:55.393841  358357 kubeadm.go:310] 
	I1205 21:49:55.394194  358357 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:49:55.394317  358357 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 21:49:55.394473  358357 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1205 21:49:55.394527  358357 kubeadm.go:394] duration metric: took 8m1.54013905s to StartCluster
	I1205 21:49:55.394598  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:49:55.394662  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:49:55.433172  358357 cri.go:89] found id: ""
	I1205 21:49:55.433203  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.433212  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:49:55.433219  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:49:55.433279  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:49:55.468595  358357 cri.go:89] found id: ""
	I1205 21:49:55.468631  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.468644  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:49:55.468652  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:49:55.468747  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:49:55.505657  358357 cri.go:89] found id: ""
	I1205 21:49:55.505692  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.505701  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:49:55.505709  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:49:55.505776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:49:55.542189  358357 cri.go:89] found id: ""
	I1205 21:49:55.542221  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.542230  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:49:55.542236  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:49:55.542303  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:49:55.575752  358357 cri.go:89] found id: ""
	I1205 21:49:55.575796  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.575810  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:49:55.575818  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:49:55.575878  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:49:55.611845  358357 cri.go:89] found id: ""
	I1205 21:49:55.611884  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.611899  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:49:55.611912  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:49:55.611999  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:49:55.650475  358357 cri.go:89] found id: ""
	I1205 21:49:55.650511  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.650524  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:49:55.650533  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:49:55.650605  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:49:55.684770  358357 cri.go:89] found id: ""
	I1205 21:49:55.684801  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.684811  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:49:55.684823  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:49:55.684839  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:49:55.752292  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:49:55.752331  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:49:55.752351  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:49:55.869601  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:49:55.869647  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:49:55.909724  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:49:55.909761  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:49:55.959825  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:49:55.959865  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 21:49:55.973692  358357 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1205 21:49:55.973759  358357 out.go:270] * 
	W1205 21:49:55.973866  358357 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 21:49:55.973884  358357 out.go:270] * 
	W1205 21:49:55.974814  358357 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 21:49:55.977939  358357 out.go:201] 
	W1205 21:49:55.979226  358357 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 21:49:55.979261  358357 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 21:49:55.979285  358357 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 21:49:55.980590  358357 out.go:201] 

                                                
                                                
** /stderr **
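The failure above is kubeadm's wait-control-plane phase timing out: the kubelet on the node never answered its health endpoint, so no control-plane static pods came up and the API server at localhost:8443 stayed unreachable. A minimal diagnostic sketch, assuming shell access to the node (for example via `minikube ssh -p old-k8s-version-601806`); it simply repeats the checks kubeadm itself recommends in the log above:

	# Is the kubelet service running? (The preflight warning notes it was not enabled.)
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100

	# The health endpoint kubeadm polls during wait-control-plane; it was refusing connections in this run.
	curl -sSL http://localhost:10248/healthz

	# Did CRI-O start any control-plane containers at all? (The post-mortem above found none.)
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause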
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-601806 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
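The run ends with exit status 109 after minikube reports K8S_KUBELET_NOT_RUNNING. The log's own suggestion is to pass the kubelet cgroup driver explicitly; a retry sketch built from the failing command's core arguments plus that flag (whether it resolves this particular run is not established here):

	out/minikube-linux-amd64 start -p old-k8s-version-601806 --memory=2200 \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd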
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-601806 -n old-k8s-version-601806
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-601806 -n old-k8s-version-601806: exit status 2 (266.331805ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
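The status query above only renders the Host field, which is Running because the KVM guest itself is up; it is the kubelet and apiserver inside it that never started, hence the non-zero exit. A fuller check for the same profile (a sketch) would surface those component states:

	# Expected to show Host: Running while kubelet/apiserver are not, for this failure mode.
	out/minikube-linux-amd64 status -p old-k8s-version-601806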
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-601806 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-601806 logs -n 25: (1.658848996s)
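Only the last 25 log lines are collected below; for a complete capture, the warning box above suggests attaching a full log file, which for this profile would look like (a sketch):

	out/minikube-linux-amd64 -p old-k8s-version-601806 logs --file=logs.txt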
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-279893 sudo cat                              | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:32 UTC | 05 Dec 24 21:33 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo cat                              | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo                                  | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo                                  | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo                                  | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo find                             | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo crio                             | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-279893                                       | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	| start   | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:34 UTC |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-425614            | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-425614                                  | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-500648             | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC | 05 Dec 24 21:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-500648                                   | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-751353  | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC | 05 Dec 24 21:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC |                     |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-425614                 | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-425614                                  | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC | 05 Dec 24 21:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-601806        | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-500648                  | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-751353       | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-500648                                   | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC | 05 Dec 24 21:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:37 UTC | 05 Dec 24 21:46 UTC |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-601806                              | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC | 05 Dec 24 21:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-601806             | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC | 05 Dec 24 21:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-601806                              | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 21:38:15
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 21:38:15.563725  358357 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:38:15.563882  358357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:38:15.563898  358357 out.go:358] Setting ErrFile to fd 2...
	I1205 21:38:15.563903  358357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:38:15.564128  358357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 21:38:15.564728  358357 out.go:352] Setting JSON to false
	I1205 21:38:15.565806  358357 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":15644,"bootTime":1733419052,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 21:38:15.565873  358357 start.go:139] virtualization: kvm guest
	I1205 21:38:15.568026  358357 out.go:177] * [old-k8s-version-601806] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 21:38:15.569552  358357 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 21:38:15.569581  358357 notify.go:220] Checking for updates...
	I1205 21:38:15.572033  358357 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 21:38:15.573317  358357 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:38:15.574664  358357 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:38:15.576173  358357 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 21:38:15.577543  358357 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 21:38:15.579554  358357 config.go:182] Loaded profile config "old-k8s-version-601806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 21:38:15.580169  358357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:38:15.580230  358357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:38:15.596741  358357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I1205 21:38:15.597295  358357 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:38:15.598015  358357 main.go:141] libmachine: Using API Version  1
	I1205 21:38:15.598046  358357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:38:15.598475  358357 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:38:15.598711  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:38:15.600576  358357 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1205 21:38:15.602043  358357 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 21:38:15.602381  358357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:38:15.602484  358357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:38:15.618162  358357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36049
	I1205 21:38:15.618929  358357 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:38:15.620894  358357 main.go:141] libmachine: Using API Version  1
	I1205 21:38:15.620922  358357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:38:15.621462  358357 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:38:15.621705  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:38:15.660038  358357 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 21:38:15.661273  358357 start.go:297] selected driver: kvm2
	I1205 21:38:15.661287  358357 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:38:15.661413  358357 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 21:38:15.662304  358357 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:38:15.662396  358357 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 21:38:15.678948  358357 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 21:38:15.679372  358357 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:38:15.679406  358357 cni.go:84] Creating CNI manager for ""
	I1205 21:38:15.679443  358357 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:38:15.679479  358357 start.go:340] cluster config:
	{Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:38:15.679592  358357 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:38:15.681409  358357 out.go:177] * Starting "old-k8s-version-601806" primary control-plane node in "old-k8s-version-601806" cluster
	I1205 21:38:12.362239  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:15.434192  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:15.682585  358357 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 21:38:15.682646  358357 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1205 21:38:15.682657  358357 cache.go:56] Caching tarball of preloaded images
	I1205 21:38:15.682742  358357 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 21:38:15.682752  358357 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1205 21:38:15.682873  358357 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/config.json ...
	I1205 21:38:15.683066  358357 start.go:360] acquireMachinesLock for old-k8s-version-601806: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:38:21.514200  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:24.586255  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:30.666205  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:33.738246  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:39.818259  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:42.890268  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:48.970246  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:52.042258  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:58.122192  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:01.194261  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:07.274293  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:10.346237  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:16.426260  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:19.498251  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:25.578215  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:28.650182  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:34.730233  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:37.802242  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:43.882204  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:46.954259  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:53.034221  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:56.106303  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:02.186236  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:05.258270  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:11.338291  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:14.410261  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:20.490214  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:23.562239  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:29.642246  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:32.714183  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:38.794265  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:41.866189  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:44.870871  357831 start.go:364] duration metric: took 3m51.861097835s to acquireMachinesLock for "no-preload-500648"
	I1205 21:40:44.870962  357831 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:40:44.870974  357831 fix.go:54] fixHost starting: 
	I1205 21:40:44.871374  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:40:44.871425  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:40:44.889484  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34557
	I1205 21:40:44.890105  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:40:44.890780  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:40:44.890815  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:40:44.891254  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:40:44.891517  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:40:44.891744  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:40:44.893857  357831 fix.go:112] recreateIfNeeded on no-preload-500648: state=Stopped err=<nil>
	I1205 21:40:44.893927  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	W1205 21:40:44.894116  357831 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:40:44.897039  357831 out.go:177] * Restarting existing kvm2 VM for "no-preload-500648" ...
	I1205 21:40:44.868152  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:40:44.868210  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:40:44.868588  357296 buildroot.go:166] provisioning hostname "embed-certs-425614"
	I1205 21:40:44.868618  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:40:44.868823  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:40:44.870659  357296 machine.go:96] duration metric: took 4m37.397267419s to provisionDockerMachine
	I1205 21:40:44.870718  357296 fix.go:56] duration metric: took 4m37.422503321s for fixHost
	I1205 21:40:44.870724  357296 start.go:83] releasing machines lock for "embed-certs-425614", held for 4m37.422523792s
	W1205 21:40:44.870750  357296 start.go:714] error starting host: provision: host is not running
	W1205 21:40:44.870880  357296 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1205 21:40:44.870891  357296 start.go:729] Will try again in 5 seconds ...
	I1205 21:40:44.898504  357831 main.go:141] libmachine: (no-preload-500648) Calling .Start
	I1205 21:40:44.898749  357831 main.go:141] libmachine: (no-preload-500648) Ensuring networks are active...
	I1205 21:40:44.899604  357831 main.go:141] libmachine: (no-preload-500648) Ensuring network default is active
	I1205 21:40:44.899998  357831 main.go:141] libmachine: (no-preload-500648) Ensuring network mk-no-preload-500648 is active
	I1205 21:40:44.900472  357831 main.go:141] libmachine: (no-preload-500648) Getting domain xml...
	I1205 21:40:44.901210  357831 main.go:141] libmachine: (no-preload-500648) Creating domain...
	I1205 21:40:46.138820  357831 main.go:141] libmachine: (no-preload-500648) Waiting to get IP...
	I1205 21:40:46.139714  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:46.140107  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:46.140214  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:46.140113  358875 retry.go:31] will retry after 297.599003ms: waiting for machine to come up
	I1205 21:40:46.439848  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:46.440360  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:46.440421  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:46.440242  358875 retry.go:31] will retry after 243.531701ms: waiting for machine to come up
	I1205 21:40:46.685793  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:46.686251  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:46.686282  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:46.686199  358875 retry.go:31] will retry after 395.19149ms: waiting for machine to come up
	I1205 21:40:47.082735  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:47.083192  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:47.083216  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:47.083150  358875 retry.go:31] will retry after 591.156988ms: waiting for machine to come up
	I1205 21:40:47.675935  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:47.676381  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:47.676414  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:47.676308  358875 retry.go:31] will retry after 706.616299ms: waiting for machine to come up
	I1205 21:40:49.872843  357296 start.go:360] acquireMachinesLock for embed-certs-425614: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:40:48.384278  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:48.384666  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:48.384696  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:48.384611  358875 retry.go:31] will retry after 859.724415ms: waiting for machine to come up
	I1205 21:40:49.245895  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:49.246294  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:49.246323  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:49.246239  358875 retry.go:31] will retry after 915.790977ms: waiting for machine to come up
	I1205 21:40:50.164042  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:50.164570  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:50.164600  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:50.164514  358875 retry.go:31] will retry after 1.283530276s: waiting for machine to come up
	I1205 21:40:51.450256  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:51.450664  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:51.450692  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:51.450595  358875 retry.go:31] will retry after 1.347371269s: waiting for machine to come up
	I1205 21:40:52.800263  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:52.800702  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:52.800732  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:52.800637  358875 retry.go:31] will retry after 1.982593955s: waiting for machine to come up
	I1205 21:40:54.785977  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:54.786644  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:54.786705  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:54.786525  358875 retry.go:31] will retry after 2.41669899s: waiting for machine to come up
	I1205 21:40:57.205989  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:57.206403  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:57.206428  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:57.206335  358875 retry.go:31] will retry after 2.992148692s: waiting for machine to come up
	I1205 21:41:00.200589  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:00.201093  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:41:00.201139  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:41:00.201028  358875 retry.go:31] will retry after 3.716252757s: waiting for machine to come up
	I1205 21:41:05.171227  357912 start.go:364] duration metric: took 4m4.735770407s to acquireMachinesLock for "default-k8s-diff-port-751353"
	I1205 21:41:05.171353  357912 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:41:05.171382  357912 fix.go:54] fixHost starting: 
	I1205 21:41:05.172206  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:05.172294  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:05.190413  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35791
	I1205 21:41:05.190911  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:05.191473  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:05.191497  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:05.191841  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:05.192052  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:05.192199  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:05.193839  357912 fix.go:112] recreateIfNeeded on default-k8s-diff-port-751353: state=Stopped err=<nil>
	I1205 21:41:05.193867  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	W1205 21:41:05.194042  357912 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:41:05.196358  357912 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-751353" ...
	I1205 21:41:05.197683  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Start
	I1205 21:41:05.197958  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Ensuring networks are active...
	I1205 21:41:05.198819  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Ensuring network default is active
	I1205 21:41:05.199225  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Ensuring network mk-default-k8s-diff-port-751353 is active
	I1205 21:41:05.199740  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Getting domain xml...
	I1205 21:41:05.200544  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Creating domain...
	I1205 21:41:03.922338  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.922889  357831 main.go:141] libmachine: (no-preload-500648) Found IP for machine: 192.168.50.141
	I1205 21:41:03.922911  357831 main.go:141] libmachine: (no-preload-500648) Reserving static IP address...
	I1205 21:41:03.922924  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has current primary IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.923476  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "no-preload-500648", mac: "52:54:00:98:f0:c5", ip: "192.168.50.141"} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:03.923500  357831 main.go:141] libmachine: (no-preload-500648) DBG | skip adding static IP to network mk-no-preload-500648 - found existing host DHCP lease matching {name: "no-preload-500648", mac: "52:54:00:98:f0:c5", ip: "192.168.50.141"}
	I1205 21:41:03.923514  357831 main.go:141] libmachine: (no-preload-500648) DBG | Getting to WaitForSSH function...
	I1205 21:41:03.923583  357831 main.go:141] libmachine: (no-preload-500648) Reserved static IP address: 192.168.50.141
	I1205 21:41:03.923617  357831 main.go:141] libmachine: (no-preload-500648) Waiting for SSH to be available...
	I1205 21:41:03.926008  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.926299  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:03.926327  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.926443  357831 main.go:141] libmachine: (no-preload-500648) DBG | Using SSH client type: external
	I1205 21:41:03.926467  357831 main.go:141] libmachine: (no-preload-500648) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa (-rw-------)
	I1205 21:41:03.926542  357831 main.go:141] libmachine: (no-preload-500648) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:41:03.926559  357831 main.go:141] libmachine: (no-preload-500648) DBG | About to run SSH command:
	I1205 21:41:03.926582  357831 main.go:141] libmachine: (no-preload-500648) DBG | exit 0
	I1205 21:41:04.054310  357831 main.go:141] libmachine: (no-preload-500648) DBG | SSH cmd err, output: <nil>: 
	I1205 21:41:04.054735  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetConfigRaw
	I1205 21:41:04.055421  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:04.058393  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.058823  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.058857  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.059115  357831 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/config.json ...
	I1205 21:41:04.059357  357831 machine.go:93] provisionDockerMachine start ...
	I1205 21:41:04.059381  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:04.059624  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.061812  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.062139  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.062169  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.062325  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.062530  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.062698  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.062811  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.062947  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.063206  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.063219  357831 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:41:04.174592  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:41:04.174635  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetMachineName
	I1205 21:41:04.174947  357831 buildroot.go:166] provisioning hostname "no-preload-500648"
	I1205 21:41:04.174982  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetMachineName
	I1205 21:41:04.175220  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.178267  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.178732  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.178766  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.178975  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.179191  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.179356  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.179518  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.179683  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.179864  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.179878  357831 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-500648 && echo "no-preload-500648" | sudo tee /etc/hostname
	I1205 21:41:04.304650  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-500648
	
	I1205 21:41:04.304688  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.307897  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.308212  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.308255  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.308441  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.308703  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.308864  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.308994  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.309273  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.309538  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.309570  357831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-500648' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-500648/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-500648' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:41:04.432111  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:41:04.432158  357831 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:41:04.432186  357831 buildroot.go:174] setting up certificates
	I1205 21:41:04.432198  357831 provision.go:84] configureAuth start
	I1205 21:41:04.432214  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetMachineName
	I1205 21:41:04.432569  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:04.435826  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.436298  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.436348  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.436535  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.439004  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.439384  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.439412  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.439632  357831 provision.go:143] copyHostCerts
	I1205 21:41:04.439708  357831 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:41:04.439736  357831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:41:04.439826  357831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:41:04.439951  357831 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:41:04.439963  357831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:41:04.440006  357831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:41:04.440090  357831 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:41:04.440100  357831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:41:04.440133  357831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:41:04.440206  357831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.no-preload-500648 san=[127.0.0.1 192.168.50.141 localhost minikube no-preload-500648]
	I1205 21:41:04.514253  357831 provision.go:177] copyRemoteCerts
	I1205 21:41:04.514330  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:41:04.514372  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.517413  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.517811  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.517845  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.518067  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.518361  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.518597  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.518773  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:04.611530  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:41:04.637201  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 21:41:04.661934  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:41:04.686618  357831 provision.go:87] duration metric: took 254.404192ms to configureAuth
	I1205 21:41:04.686654  357831 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:41:04.686834  357831 config.go:182] Loaded profile config "no-preload-500648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:41:04.686921  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.690232  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.690677  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.690709  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.690907  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.691145  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.691456  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.691605  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.691811  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.692003  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.692020  357831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:41:04.922195  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:41:04.922228  357831 machine.go:96] duration metric: took 862.853823ms to provisionDockerMachine
	I1205 21:41:04.922245  357831 start.go:293] postStartSetup for "no-preload-500648" (driver="kvm2")
	I1205 21:41:04.922275  357831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:41:04.922296  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:04.922662  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:41:04.922698  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.925928  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.926441  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.926474  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.926628  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.926810  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.926928  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.927024  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:05.013131  357831 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:41:05.017518  357831 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:41:05.017552  357831 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:41:05.017635  357831 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:41:05.017713  357831 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:41:05.017814  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:41:05.027935  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:05.052403  357831 start.go:296] duration metric: took 130.117347ms for postStartSetup
	I1205 21:41:05.052469  357831 fix.go:56] duration metric: took 20.181495969s for fixHost
	I1205 21:41:05.052493  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:05.055902  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.056329  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.056381  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.056574  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:05.056832  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.056993  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.057144  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:05.057327  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:05.057534  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:05.057548  357831 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:41:05.171012  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434865.146406477
	
	I1205 21:41:05.171041  357831 fix.go:216] guest clock: 1733434865.146406477
	I1205 21:41:05.171051  357831 fix.go:229] Guest: 2024-12-05 21:41:05.146406477 +0000 UTC Remote: 2024-12-05 21:41:05.052473548 +0000 UTC m=+252.199777630 (delta=93.932929ms)
	I1205 21:41:05.171075  357831 fix.go:200] guest clock delta is within tolerance: 93.932929ms
	I1205 21:41:05.171087  357831 start.go:83] releasing machines lock for "no-preload-500648", held for 20.300173371s
	I1205 21:41:05.171115  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.171462  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:05.174267  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.174716  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.174747  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.174893  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.175500  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.175738  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.175856  357831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:41:05.175910  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:05.176016  357831 ssh_runner.go:195] Run: cat /version.json
	I1205 21:41:05.176053  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:05.179122  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179281  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179567  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.179595  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179620  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.179637  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179785  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:05.179924  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:05.180016  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.180163  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.180167  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:05.180365  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:05.180376  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:05.180564  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:05.286502  357831 ssh_runner.go:195] Run: systemctl --version
	I1205 21:41:05.292793  357831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:41:05.436742  357831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:41:05.442389  357831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:41:05.442473  357831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:41:05.460161  357831 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:41:05.460198  357831 start.go:495] detecting cgroup driver to use...
	I1205 21:41:05.460287  357831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:41:05.476989  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:41:05.490676  357831 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:41:05.490747  357831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:41:05.504437  357831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:41:05.518314  357831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:41:05.649582  357831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:41:05.831575  357831 docker.go:233] disabling docker service ...
	I1205 21:41:05.831650  357831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:41:05.851482  357831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:41:05.865266  357831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:41:05.981194  357831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:41:06.107386  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:41:06.125290  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:41:06.143817  357831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:41:06.143919  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.154167  357831 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:41:06.154259  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.165640  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.177412  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.190668  357831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:41:06.201712  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.213455  357831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.232565  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
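The sed commands above all target /etc/crio/crio.conf.d/02-crio.conf: they switch the pause image, force the cgroupfs cgroup manager, pin conmon to the pod cgroup, and open unprivileged low ports via default_sysctls. Taken together they aim for a drop-in roughly like the following; this is an illustrative reconstruction (the TOML section headers are an assumption), not a file captured from the run.

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]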
	I1205 21:41:06.243746  357831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:41:06.253809  357831 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:41:06.253878  357831 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:41:06.267573  357831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:41:06.278706  357831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:06.408370  357831 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:41:06.511878  357831 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:41:06.511959  357831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:41:06.519295  357831 start.go:563] Will wait 60s for crictl version
	I1205 21:41:06.519366  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.523477  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:41:06.562056  357831 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:41:06.562151  357831 ssh_runner.go:195] Run: crio --version
	I1205 21:41:06.595493  357831 ssh_runner.go:195] Run: crio --version
	I1205 21:41:06.630320  357831 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 21:41:06.631796  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:06.634988  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:06.635416  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:06.635453  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:06.635693  357831 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1205 21:41:06.639948  357831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:06.653650  357831 kubeadm.go:883] updating cluster {Name:no-preload-500648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:no-preload-500648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:f
alse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:41:06.653798  357831 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:41:06.653869  357831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:06.695865  357831 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 21:41:06.695900  357831 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 21:41:06.695957  357831 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:06.695970  357831 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:06.696005  357831 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.696049  357831 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1205 21:41:06.696060  357831 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:06.696087  357831 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.696061  357831 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.696462  357831 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.697982  357831 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.698019  357831 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:06.698016  357831 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.697992  357831 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.698111  357831 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.698133  357831 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:06.698286  357831 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1205 21:41:06.698501  357831 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:06.856605  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.856650  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.869847  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.872242  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.874561  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:06.907303  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:06.920063  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1205 21:41:06.925542  357831 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1205 21:41:06.925592  357831 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.925656  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.959677  357831 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1205 21:41:06.959738  357831 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.959799  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.984175  357831 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1205 21:41:06.984219  357831 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.984267  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.995251  357831 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1205 21:41:06.995393  357831 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.995547  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:07.017878  357831 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1205 21:41:07.017952  357831 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.018014  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:07.027087  357831 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1205 21:41:07.027151  357831 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.027206  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:07.138510  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:07.138629  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.138509  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:07.138696  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.138577  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:07.138579  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:07.260832  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:07.269638  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:07.269766  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.269837  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:07.276535  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.276611  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:07.344944  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:07.369612  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:07.410660  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:07.410709  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.410815  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:07.410817  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.463332  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1205 21:41:07.463470  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1205 21:41:07.491657  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1205 21:41:07.491795  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 21:41:07.531121  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1205 21:41:07.531150  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1205 21:41:07.531256  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1205 21:41:07.531270  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 21:41:07.531292  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1205 21:41:07.531341  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1205 21:41:07.531342  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 21:41:07.531258  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 21:41:07.531400  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1205 21:41:07.531416  357831 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1205 21:41:07.531452  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1205 21:41:07.531419  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1205 21:41:07.543194  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1205 21:41:07.543221  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1205 21:41:07.543329  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1205 21:41:07.545197  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1205 21:41:07.599581  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:06.512338  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting to get IP...
	I1205 21:41:06.513323  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.513795  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.513870  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:06.513764  359021 retry.go:31] will retry after 193.323182ms: waiting for machine to come up
	I1205 21:41:06.709218  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.709633  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.709667  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:06.709597  359021 retry.go:31] will retry after 359.664637ms: waiting for machine to come up
	I1205 21:41:07.071234  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.071649  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.071677  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:07.071621  359021 retry.go:31] will retry after 315.296814ms: waiting for machine to come up
	I1205 21:41:07.388219  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.388755  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.388788  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:07.388697  359021 retry.go:31] will retry after 607.823337ms: waiting for machine to come up
	I1205 21:41:07.998529  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.998987  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.999021  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:07.998924  359021 retry.go:31] will retry after 603.533135ms: waiting for machine to come up
	I1205 21:41:08.603895  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:08.604547  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:08.604592  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:08.604458  359021 retry.go:31] will retry after 584.642321ms: waiting for machine to come up
	I1205 21:41:09.190331  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:09.190835  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:09.190866  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:09.190778  359021 retry.go:31] will retry after 848.646132ms: waiting for machine to come up
	I1205 21:41:10.041037  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:10.041702  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:10.041734  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:10.041632  359021 retry.go:31] will retry after 1.229215485s: waiting for machine to come up
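The default-k8s-diff-port-751353 lines above show the libmachine DHCP-lease poll retrying with growing, jittered delays until the domain reports an IP address. A hedged Go sketch of that wait-with-backoff pattern (function and field names here are illustrative, not minikube's):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it returns an address or attempts run out,
    // sleeping a little longer (with jitter) after each failed attempt.
    func waitForIP(lookup func() (string, error), attempts int) (string, error) {
        delay := 200 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if ip, err := lookup(); err == nil {
                return ip, nil
            }
            jitter := time.Duration(rand.Int63n(int64(delay / 2)))
            time.Sleep(delay + jitter)
            delay = delay * 3 / 2 // grow the base delay, as the retry.go lines do
        }
        return "", errors.New("machine never reported an IP address")
    }

    func main() {
        calls := 0
        ip, err := waitForIP(func() (string, error) {
            calls++
            if calls < 4 {
                return "", errors.New("no lease yet")
            }
            return "192.168.39.10", nil // placeholder address for the demo
        }, 10)
        fmt.Println(ip, err)
    }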
	I1205 21:41:11.124436  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.592950613s)
	I1205 21:41:11.124474  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1205 21:41:11.124504  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 21:41:11.124501  357831 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.524878217s)
	I1205 21:41:11.124562  357831 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 21:41:11.124586  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 21:41:11.124617  357831 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:11.124667  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:11.272549  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:11.273204  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:11.273239  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:11.273141  359021 retry.go:31] will retry after 1.721028781s: waiting for machine to come up
	I1205 21:41:12.996546  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:12.996988  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:12.997015  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:12.996932  359021 retry.go:31] will retry after 1.620428313s: waiting for machine to come up
	I1205 21:41:14.619426  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:14.619986  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:14.620021  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:14.619928  359021 retry.go:31] will retry after 1.936504566s: waiting for machine to come up
	I1205 21:41:13.485236  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.36061811s)
	I1205 21:41:13.485285  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1205 21:41:13.485298  357831 ssh_runner.go:235] Completed: which crictl: (2.360608199s)
	I1205 21:41:13.485314  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 21:41:13.485383  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:13.485450  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 21:41:15.556836  357831 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.071414459s)
	I1205 21:41:15.556906  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.071416348s)
	I1205 21:41:15.556935  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:15.556939  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1205 21:41:15.557031  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 21:41:15.557069  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 21:41:15.595094  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:17.533984  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.97688139s)
	I1205 21:41:17.534026  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1205 21:41:17.534061  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 21:41:17.534168  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 21:41:17.534059  357831 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.938925021s)
	I1205 21:41:17.534239  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 21:41:17.534355  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 21:41:16.559037  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:16.559676  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:16.559711  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:16.559616  359021 retry.go:31] will retry after 2.748634113s: waiting for machine to come up
	I1205 21:41:19.309762  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:19.310292  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:19.310325  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:19.310235  359021 retry.go:31] will retry after 4.490589015s: waiting for machine to come up
	I1205 21:41:18.991714  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.45750646s)
	I1205 21:41:18.991760  357831 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.457382547s)
	I1205 21:41:18.991769  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1205 21:41:18.991788  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1205 21:41:18.991796  357831 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 21:41:18.991871  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1205 21:41:19.652114  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 21:41:19.652153  357831 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1205 21:41:19.652207  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1205 21:41:21.430659  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.778424474s)
	I1205 21:41:21.430699  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1205 21:41:21.430728  357831 cache_images.go:123] Successfully loaded all cached images
	I1205 21:41:21.430737  357831 cache_images.go:92] duration metric: took 14.734820486s to LoadCachedImages
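The cache_images/crio lines above (from "LoadCachedImages start" onward) follow a simple per-image loop: inspect the image in the runtime, remove and re-transfer it when the expected hash is missing, skip the scp when the tarball is already on the VM, then load it with `podman load`. A compressed Go sketch of that flow, with hypothetical helper signatures standing in for minikube's ssh_runner calls:

    package main

    import "fmt"

    // runner abstracts "run this command on the VM"; it stands in for
    // minikube's ssh_runner and is purely illustrative.
    type runner interface {
        Run(cmd string) error
        Exists(path string) bool
        Copy(local, remote string) error
    }

    // loadCachedImage mirrors the logged sequence for one image: skip the copy
    // when the tarball already exists remotely, then load it via podman.
    func loadCachedImage(r runner, image, localTar, remoteTar string) error {
        if !r.Exists(remoteTar) {
            if err := r.Copy(localTar, remoteTar); err != nil {
                return fmt.Errorf("transferring %s: %w", image, err)
            }
        } else {
            fmt.Printf("copy: skipping %s (exists)\n", remoteTar)
        }
        fmt.Printf("Loading image: %s\n", remoteTar)
        return r.Run("sudo podman load -i " + remoteTar)
    }

    type fakeRunner struct{ onVM map[string]bool }

    func (f fakeRunner) Run(cmd string) error { fmt.Println("run:", cmd); return nil }
    func (f fakeRunner) Exists(p string) bool { return f.onVM[p] }
    func (f fakeRunner) Copy(l, r string) error {
        fmt.Printf("scp %s -> %s\n", l, r)
        return nil
    }

    func main() {
        r := fakeRunner{onVM: map[string]bool{"/var/lib/minikube/images/etcd_3.5.15-0": true}}
        _ = loadCachedImage(r, "registry.k8s.io/etcd:3.5.15-0",
            "/tmp/cache/etcd_3.5.15-0", "/var/lib/minikube/images/etcd_3.5.15-0")
    }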
	I1205 21:41:21.430748  357831 kubeadm.go:934] updating node { 192.168.50.141 8443 v1.31.2 crio true true} ...
	I1205 21:41:21.430896  357831 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-500648 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-500648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:41:21.430974  357831 ssh_runner.go:195] Run: crio config
	I1205 21:41:21.485189  357831 cni.go:84] Creating CNI manager for ""
	I1205 21:41:21.485211  357831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:21.485222  357831 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:41:21.485252  357831 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.141 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-500648 NodeName:no-preload-500648 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:41:21.485440  357831 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-500648"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.141"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.141"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:41:21.485525  357831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:41:21.497109  357831 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:41:21.497191  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:41:21.506887  357831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1205 21:41:21.524456  357831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:41:21.541166  357831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1205 21:41:21.560513  357831 ssh_runner.go:195] Run: grep 192.168.50.141	control-plane.minikube.internal$ /etc/hosts
	I1205 21:41:21.564597  357831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.141	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
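The bash one-liner above is minikube's idempotent /etc/hosts update: strip any existing control-plane.minikube.internal line, append the fresh mapping, and copy the result back into place. A hedged Go equivalent of the same filter-and-append step (illustrative only, not the code minikube runs):

    package main

    import (
        "fmt"
        "strings"
    )

    // upsertHostsEntry removes any line already ending in the given hostname and
    // appends a fresh "IP<tab>hostname" mapping, mirroring the grep -v / echo pipe.
    func upsertHostsEntry(hosts, ip, hostname string) string {
        var kept []string
        for _, line := range strings.Split(hosts, "\n") {
            if line != "" && !strings.HasSuffix(line, "\t"+hostname) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, ip+"\t"+hostname)
        return strings.Join(kept, "\n") + "\n"
    }

    func main() {
        before := "127.0.0.1\tlocalhost\n192.168.50.1\thost.minikube.internal\n"
        fmt.Print(upsertHostsEntry(before, "192.168.50.141", "control-plane.minikube.internal"))
    }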
	I1205 21:41:21.576227  357831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:21.695424  357831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:21.712683  357831 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648 for IP: 192.168.50.141
	I1205 21:41:21.712711  357831 certs.go:194] generating shared ca certs ...
	I1205 21:41:21.712735  357831 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:21.712951  357831 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:41:21.713005  357831 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:41:21.713019  357831 certs.go:256] generating profile certs ...
	I1205 21:41:21.713143  357831 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/client.key
	I1205 21:41:21.713264  357831 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/apiserver.key.832a65b0
	I1205 21:41:21.713335  357831 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/proxy-client.key
	I1205 21:41:21.713643  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:41:21.713708  357831 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:41:21.713729  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:41:21.713774  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:41:21.713820  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:41:21.713856  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:41:21.713961  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:21.714852  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:41:21.770708  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:41:21.813676  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:41:21.869550  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:41:21.898056  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 21:41:21.924076  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 21:41:21.950399  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:41:21.976765  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 21:41:22.003346  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:41:22.032363  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:41:22.071805  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:41:22.096470  357831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:41:22.113380  357831 ssh_runner.go:195] Run: openssl version
	I1205 21:41:22.119084  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:41:22.129657  357831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:41:22.134070  357831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:41:22.134139  357831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:41:22.139838  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:41:22.150575  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:41:22.161366  357831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:41:22.165685  357831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:41:22.165753  357831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:41:22.171788  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:41:22.182582  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:41:22.193460  357831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:22.197852  357831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:22.197934  357831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:22.203616  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:41:22.215612  357831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:41:22.220715  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:41:22.226952  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:41:22.233017  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:41:22.239118  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:41:22.245106  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:41:22.251085  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
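The six openssl invocations above all use `-checkend 86400`, i.e. "does this certificate become invalid within the next 24 hours?". A small Go sketch of the same check using crypto/x509 (a hedged equivalent, not the code minikube runs):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM-encoded certificate at path becomes
    // invalid inside the given window, matching `openssl x509 -checkend`.
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(soon, err)
    }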
	I1205 21:41:22.257047  357831 kubeadm.go:392] StartCluster: {Name:no-preload-500648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:no-preload-500648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fals
e MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:41:22.257152  357831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:41:22.257201  357831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:22.294003  357831 cri.go:89] found id: ""
	I1205 21:41:22.294119  357831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:41:22.304604  357831 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:41:22.304627  357831 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:41:22.304690  357831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:41:22.314398  357831 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:41:22.315469  357831 kubeconfig.go:125] found "no-preload-500648" server: "https://192.168.50.141:8443"
	I1205 21:41:22.317845  357831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:41:22.327468  357831 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.141
	I1205 21:41:22.327516  357831 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:41:22.327546  357831 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:41:22.327623  357831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:22.360852  357831 cri.go:89] found id: ""
	I1205 21:41:22.360955  357831 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:41:22.378555  357831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:41:22.388502  357831 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:41:22.388526  357831 kubeadm.go:157] found existing configuration files:
	
	I1205 21:41:22.388614  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:41:22.397598  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:41:22.397664  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:41:22.407664  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:41:22.417114  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:41:22.417192  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:41:22.427221  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:41:22.436656  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:41:22.436731  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:41:22.446571  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:41:22.456048  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:41:22.456120  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:41:22.466146  357831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:41:22.476563  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:22.582506  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
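
The restart path logged above for no-preload-500648 condenses to a short shell sequence: drop every kubeconfig under /etc/kubernetes that no longer references https://control-plane.minikube.internal:8443, promote the staged kubeadm.yaml.new, and re-run the certs and kubeconfig phases of kubeadm init. A minimal sketch assembled from the exact commands in the log (the v1.31.2 binary path and config locations are the ones shown above):

  for f in admin kubelet controller-manager scheduler; do
    # stale or missing kubeconfigs are removed, mirroring the grep/rm pairs above
    sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/${f}.conf" \
      || sudo rm -f "/etc/kubernetes/${f}.conf"
  done
  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" \
    kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
  sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" \
    kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
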
	I1205 21:41:25.151918  358357 start.go:364] duration metric: took 3m9.46879842s to acquireMachinesLock for "old-k8s-version-601806"
	I1205 21:41:25.151996  358357 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:41:25.152009  358357 fix.go:54] fixHost starting: 
	I1205 21:41:25.152489  358357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:25.152557  358357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:25.172080  358357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36071
	I1205 21:41:25.172722  358357 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:25.173396  358357 main.go:141] libmachine: Using API Version  1
	I1205 21:41:25.173426  358357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:25.173791  358357 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:25.174049  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:25.174226  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetState
	I1205 21:41:25.176109  358357 fix.go:112] recreateIfNeeded on old-k8s-version-601806: state=Stopped err=<nil>
	I1205 21:41:25.176156  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	W1205 21:41:25.176374  358357 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:41:25.178317  358357 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-601806" ...
	I1205 21:41:23.803088  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.803582  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has current primary IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.803605  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Found IP for machine: 192.168.39.106
	I1205 21:41:23.803619  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Reserving static IP address...
	I1205 21:41:23.804049  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-751353", mac: "52:54:00:9a:bc:70", ip: "192.168.39.106"} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.804083  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Reserved static IP address: 192.168.39.106
	I1205 21:41:23.804103  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | skip adding static IP to network mk-default-k8s-diff-port-751353 - found existing host DHCP lease matching {name: "default-k8s-diff-port-751353", mac: "52:54:00:9a:bc:70", ip: "192.168.39.106"}
	I1205 21:41:23.804129  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Getting to WaitForSSH function...
	I1205 21:41:23.804158  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for SSH to be available...
	I1205 21:41:23.806941  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.807341  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.807372  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.807500  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Using SSH client type: external
	I1205 21:41:23.807527  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa (-rw-------)
	I1205 21:41:23.807597  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:41:23.807626  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | About to run SSH command:
	I1205 21:41:23.807645  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | exit 0
	I1205 21:41:23.938988  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | SSH cmd err, output: <nil>: 
	I1205 21:41:23.939382  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetConfigRaw
	I1205 21:41:23.940370  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:23.943944  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.944399  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.944433  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.944788  357912 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/config.json ...
	I1205 21:41:23.945040  357912 machine.go:93] provisionDockerMachine start ...
	I1205 21:41:23.945065  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:23.945331  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:23.948166  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.948598  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.948633  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.948777  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:23.948980  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:23.949138  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:23.949265  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:23.949425  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:23.949655  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:23.949669  357912 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:41:24.062400  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:41:24.062440  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetMachineName
	I1205 21:41:24.062712  357912 buildroot.go:166] provisioning hostname "default-k8s-diff-port-751353"
	I1205 21:41:24.062742  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetMachineName
	I1205 21:41:24.062947  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.065557  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.066077  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.066109  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.066235  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.066415  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.066571  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.066751  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.066932  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:24.067122  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:24.067134  357912 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-751353 && echo "default-k8s-diff-port-751353" | sudo tee /etc/hostname
	I1205 21:41:24.190609  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-751353
	
	I1205 21:41:24.190662  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.193538  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.193946  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.193985  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.194231  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.194443  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.194660  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.194909  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.195186  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:24.195396  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:24.195417  357912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-751353' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-751353/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-751353' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:41:24.310725  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:41:24.310770  357912 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:41:24.310812  357912 buildroot.go:174] setting up certificates
	I1205 21:41:24.310829  357912 provision.go:84] configureAuth start
	I1205 21:41:24.310839  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetMachineName
	I1205 21:41:24.311138  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:24.314161  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.314528  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.314552  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.314722  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.316953  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.317283  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.317324  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.317483  357912 provision.go:143] copyHostCerts
	I1205 21:41:24.317548  357912 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:41:24.317571  357912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:41:24.317629  357912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:41:24.317723  357912 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:41:24.317732  357912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:41:24.317753  357912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:41:24.317872  357912 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:41:24.317883  357912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:41:24.317933  357912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:41:24.318001  357912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-751353 san=[127.0.0.1 192.168.39.106 default-k8s-diff-port-751353 localhost minikube]
	I1205 21:41:24.483065  357912 provision.go:177] copyRemoteCerts
	I1205 21:41:24.483137  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:41:24.483175  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.486663  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.487074  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.487112  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.487277  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.487508  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.487726  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.487899  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:24.572469  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:41:24.597375  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1205 21:41:24.622122  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:41:24.649143  357912 provision.go:87] duration metric: took 338.295707ms to configureAuth
	I1205 21:41:24.649188  357912 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:41:24.649464  357912 config.go:182] Loaded profile config "default-k8s-diff-port-751353": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:41:24.649609  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.652646  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.653051  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.653101  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.653259  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.653492  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.653689  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.653841  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.654054  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:24.654213  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:24.654235  357912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:41:24.893672  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:41:24.893703  357912 machine.go:96] duration metric: took 948.646561ms to provisionDockerMachine
	I1205 21:41:24.893719  357912 start.go:293] postStartSetup for "default-k8s-diff-port-751353" (driver="kvm2")
	I1205 21:41:24.893733  357912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:41:24.893755  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:24.894145  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:41:24.894185  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.897565  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.897988  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.898022  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.898262  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.898579  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.898840  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.899066  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:24.986299  357912 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:41:24.991211  357912 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:41:24.991251  357912 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:41:24.991341  357912 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:41:24.991456  357912 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:41:24.991601  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:41:25.002264  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:25.031129  357912 start.go:296] duration metric: took 137.388294ms for postStartSetup
	I1205 21:41:25.031184  357912 fix.go:56] duration metric: took 19.859807882s for fixHost
	I1205 21:41:25.031214  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:25.034339  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.034678  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.034715  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.035027  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:25.035309  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.035501  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.035655  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:25.035858  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:25.036066  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:25.036081  357912 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:41:25.151697  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434885.125327326
	
	I1205 21:41:25.151729  357912 fix.go:216] guest clock: 1733434885.125327326
	I1205 21:41:25.151741  357912 fix.go:229] Guest: 2024-12-05 21:41:25.125327326 +0000 UTC Remote: 2024-12-05 21:41:25.03119011 +0000 UTC m=+264.754619927 (delta=94.137216ms)
	I1205 21:41:25.151796  357912 fix.go:200] guest clock delta is within tolerance: 94.137216ms
	I1205 21:41:25.151807  357912 start.go:83] releasing machines lock for "default-k8s-diff-port-751353", held for 19.980496597s
	I1205 21:41:25.151845  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.152105  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:25.155285  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.155698  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.155735  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.155871  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.156424  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.156613  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.156747  357912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:41:25.156796  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:25.156844  357912 ssh_runner.go:195] Run: cat /version.json
	I1205 21:41:25.156876  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:25.159945  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160382  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160439  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.160464  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160692  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.160722  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160728  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:25.160943  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:25.160957  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.161100  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.161218  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:25.161341  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:25.161370  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:25.161473  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:25.244449  357912 ssh_runner.go:195] Run: systemctl --version
	I1205 21:41:25.271151  357912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:41:25.179884  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .Start
	I1205 21:41:25.180144  358357 main.go:141] libmachine: (old-k8s-version-601806) Ensuring networks are active...
	I1205 21:41:25.181095  358357 main.go:141] libmachine: (old-k8s-version-601806) Ensuring network default is active
	I1205 21:41:25.181522  358357 main.go:141] libmachine: (old-k8s-version-601806) Ensuring network mk-old-k8s-version-601806 is active
	I1205 21:41:25.181972  358357 main.go:141] libmachine: (old-k8s-version-601806) Getting domain xml...
	I1205 21:41:25.182848  358357 main.go:141] libmachine: (old-k8s-version-601806) Creating domain...
	I1205 21:41:25.428417  357912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:41:25.436849  357912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:41:25.436929  357912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:41:25.457952  357912 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:41:25.457989  357912 start.go:495] detecting cgroup driver to use...
	I1205 21:41:25.458073  357912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:41:25.478406  357912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:41:25.497547  357912 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:41:25.497636  357912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:41:25.516564  357912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:41:25.535753  357912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:41:25.692182  357912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:41:25.880739  357912 docker.go:233] disabling docker service ...
	I1205 21:41:25.880812  357912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:41:25.896490  357912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:41:25.911107  357912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:41:26.048384  357912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:41:26.186026  357912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:41:26.200922  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:41:26.221768  357912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:41:26.221848  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.232550  357912 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:41:26.232665  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.243173  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.254233  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.264888  357912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:41:26.275876  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.286642  357912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.311188  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.322696  357912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:41:26.332006  357912 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:41:26.332075  357912 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:41:26.345881  357912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:41:26.362014  357912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:26.487972  357912 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:41:26.584162  357912 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:41:26.584275  357912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:41:26.589290  357912 start.go:563] Will wait 60s for crictl version
	I1205 21:41:26.589379  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:41:26.593337  357912 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:41:26.629326  357912 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:41:26.629455  357912 ssh_runner.go:195] Run: crio --version
	I1205 21:41:26.656684  357912 ssh_runner.go:195] Run: crio --version
	I1205 21:41:26.685571  357912 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
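
The CRI-O reconfiguration above is a generated /etc/crictl.yaml plus a series of in-place sed edits to the drop-in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon cgroup, unprivileged-port sysctl), followed by a crio restart. A quick way to confirm those edits landed, not part of the test run itself:

  sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
    /etc/crio/crio.conf.d/02-crio.conf
  sudo crictl version   # should report cri-o 1.29.1, matching the version lines above
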
	I1205 21:41:23.536422  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:23.749946  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:23.804210  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:23.887538  357831 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:41:23.887671  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:24.387809  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:24.887821  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:24.905947  357831 api_server.go:72] duration metric: took 1.018402152s to wait for apiserver process to appear ...
	I1205 21:41:24.905979  357831 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:41:24.906008  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:24.906658  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": dial tcp 192.168.50.141:8443: connect: connection refused
	I1205 21:41:25.406416  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
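
The healthz polling above is a plain HTTPS GET against the apiserver endpoint; "connection refused" is the expected answer until the static pods created by the control-plane and etcd phases are actually running. The equivalent manual probe from the host looks like this (illustrative; -k skips certificate verification, whereas minikube's own check uses the cluster credentials):

  curl -k https://192.168.50.141:8443/healthz
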
	I1205 21:41:26.687438  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:26.690614  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:26.691032  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:26.691070  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:26.691314  357912 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 21:41:26.695524  357912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:26.708289  357912 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-751353 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-751353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:41:26.708409  357912 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:41:26.708474  357912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:26.757258  357912 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 21:41:26.757363  357912 ssh_runner.go:195] Run: which lz4
	I1205 21:41:26.762809  357912 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:41:26.767369  357912 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:41:26.767411  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 21:41:28.161289  357912 crio.go:462] duration metric: took 1.398584393s to copy over tarball
	I1205 21:41:28.161397  357912 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 21:41:26.542343  358357 main.go:141] libmachine: (old-k8s-version-601806) Waiting to get IP...
	I1205 21:41:26.543246  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:26.543692  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:26.543765  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:26.543663  359172 retry.go:31] will retry after 193.087452ms: waiting for machine to come up
	I1205 21:41:26.738243  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:26.738682  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:26.738713  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:26.738634  359172 retry.go:31] will retry after 347.304831ms: waiting for machine to come up
	I1205 21:41:27.088372  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:27.088982  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:27.089018  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:27.088880  359172 retry.go:31] will retry after 416.785806ms: waiting for machine to come up
	I1205 21:41:27.507765  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:27.508291  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:27.508320  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:27.508250  359172 retry.go:31] will retry after 407.585006ms: waiting for machine to come up
	I1205 21:41:27.918225  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:27.918900  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:27.918930  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:27.918844  359172 retry.go:31] will retry after 612.014901ms: waiting for machine to come up
	I1205 21:41:28.532179  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:28.532625  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:28.532658  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:28.532561  359172 retry.go:31] will retry after 784.813224ms: waiting for machine to come up
	I1205 21:41:29.318697  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:29.319199  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:29.319234  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:29.319136  359172 retry.go:31] will retry after 827.384433ms: waiting for machine to come up
	I1205 21:41:30.148284  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:30.148684  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:30.148711  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:30.148642  359172 retry.go:31] will retry after 1.314535235s: waiting for machine to come up
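
The retry loop above is the kvm2 driver waiting for the freshly started old-k8s-version-601806 domain to obtain a DHCP lease. The same information can be read directly from libvirt on the host (illustrative only; these commands are not part of the run):

  sudo virsh domifaddr old-k8s-version-601806
  sudo virsh net-dhcp-leases mk-old-k8s-version-601806
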
	I1205 21:41:30.406823  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:30.406896  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:30.321824  357912 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16037347s)
	I1205 21:41:30.321868  357912 crio.go:469] duration metric: took 2.160535841s to extract the tarball
	I1205 21:41:30.321879  357912 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 21:41:30.358990  357912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:30.401957  357912 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 21:41:30.401988  357912 cache_images.go:84] Images are preloaded, skipping loading
	I1205 21:41:30.402000  357912 kubeadm.go:934] updating node { 192.168.39.106 8444 v1.31.2 crio true true} ...
	I1205 21:41:30.402143  357912 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-751353 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-751353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:41:30.402242  357912 ssh_runner.go:195] Run: crio config
	I1205 21:41:30.452788  357912 cni.go:84] Creating CNI manager for ""
	I1205 21:41:30.452819  357912 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:30.452832  357912 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:41:30.452864  357912 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.106 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-751353 NodeName:default-k8s-diff-port-751353 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:41:30.453016  357912 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.106
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-751353"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.106"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.106"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:41:30.453081  357912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:41:30.463027  357912 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:41:30.463098  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:41:30.472345  357912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 21:41:30.489050  357912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:41:30.505872  357912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
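	(Editor's note) The kubeadm config rendered above is copied to /var/tmp/minikube/kubeadm.yaml.new and later diffed against the copy already on the node ("sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new" further down) to decide whether the restart needs reconfiguration. A minimal Go sketch of that comparison, assuming direct file access rather than the ssh_runner the log actually uses; the helper name is illustrative:

package main

import (
	"bytes"
	"fmt"
	"os"
)

// needsReconfigure reports whether the freshly rendered config differs from the
// one already installed on the node. A missing current config counts as
// "needs configuration".
func needsReconfigure(currentPath, newPath string) (bool, error) {
	cur, err := os.ReadFile(currentPath)
	if err != nil {
		if os.IsNotExist(err) {
			return true, nil
		}
		return false, err
	}
	next, err := os.ReadFile(newPath)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(cur, next), nil
}

func main() {
	changed, err := needsReconfigure("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Println("compare failed:", err)
		return
	}
	fmt.Println("reconfiguration required:", changed)
}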
	I1205 21:41:30.523157  357912 ssh_runner.go:195] Run: grep 192.168.39.106	control-plane.minikube.internal$ /etc/hosts
	I1205 21:41:30.527012  357912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
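	(Editor's note) The /etc/hosts rewrite above drops any stale control-plane.minikube.internal line and appends the current IP. A rough local-filesystem equivalent in Go; this is a hypothetical helper, not minikube's code, and the real step runs through sudo over SSH:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry removes any existing line ending in "\thost" and appends
// "ip\thost", writing the result back through a temp file so the update is
// applied in one rename.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.106", "control-plane.minikube.internal"); err != nil {
		fmt.Println("hosts update failed:", err)
	}
}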
	I1205 21:41:30.538965  357912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:30.668866  357912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:30.686150  357912 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353 for IP: 192.168.39.106
	I1205 21:41:30.686187  357912 certs.go:194] generating shared ca certs ...
	I1205 21:41:30.686218  357912 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:30.686416  357912 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:41:30.686483  357912 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:41:30.686499  357912 certs.go:256] generating profile certs ...
	I1205 21:41:30.686629  357912 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/client.key
	I1205 21:41:30.686701  357912 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/apiserver.key.ec661d8c
	I1205 21:41:30.686738  357912 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/proxy-client.key
	I1205 21:41:30.686861  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:41:30.686890  357912 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:41:30.686898  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:41:30.686921  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:41:30.686942  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:41:30.686979  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:41:30.687017  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:30.687858  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:41:30.732722  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:41:30.762557  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:41:30.797976  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:41:30.825854  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1205 21:41:30.863220  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 21:41:30.887018  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:41:30.913503  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 21:41:30.940557  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:41:30.965468  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:41:30.991147  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:41:31.016782  357912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:41:31.036286  357912 ssh_runner.go:195] Run: openssl version
	I1205 21:41:31.042388  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:41:31.053011  357912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:31.057796  357912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:31.057880  357912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:31.064075  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:41:31.076633  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:41:31.089138  357912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:41:31.093653  357912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:41:31.093733  357912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:41:31.099403  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:41:31.111902  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:41:31.122743  357912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:41:31.127551  357912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:41:31.127666  357912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:41:31.133373  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
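	(Editor's note) The three test/ln blocks above install each CA into the system trust store: openssl prints the certificate's subject hash, and the cert is linked as /etc/ssl/certs/<hash>.0 where the TLS stack looks it up. A small Go sketch of the same idea, shelling out to openssl exactly like the logged commands; error handling is minimal for brevity:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA asks openssl for the certificate's subject hash and links the
// certificate into /etc/ssl/certs/<hash>.0.
func installCA(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println("install failed:", err)
	}
}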
	I1205 21:41:31.143934  357912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:41:31.148739  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:41:31.154995  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:41:31.161288  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:41:31.167555  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:41:31.173476  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:41:31.179371  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
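	(Editor's note) The "-checkend 86400" runs above ask openssl whether each control-plane certificate expires within the next 24 hours. The same check written with Go's crypto/x509, as a standalone sketch; the path is one of the certs from the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}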
	I1205 21:41:31.185238  357912 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-751353 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-751353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:41:31.185381  357912 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:41:31.185440  357912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:31.221359  357912 cri.go:89] found id: ""
	I1205 21:41:31.221448  357912 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:41:31.231975  357912 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:41:31.231997  357912 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:41:31.232043  357912 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:41:31.241662  357912 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:41:31.242685  357912 kubeconfig.go:125] found "default-k8s-diff-port-751353" server: "https://192.168.39.106:8444"
	I1205 21:41:31.244889  357912 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:41:31.254747  357912 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.106
	I1205 21:41:31.254798  357912 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:41:31.254815  357912 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:41:31.254884  357912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:31.291980  357912 cri.go:89] found id: ""
	I1205 21:41:31.292075  357912 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:41:31.312332  357912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:41:31.322240  357912 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:41:31.322267  357912 kubeadm.go:157] found existing configuration files:
	
	I1205 21:41:31.322323  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1205 21:41:31.331374  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:41:31.331462  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:41:31.340916  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1205 21:41:31.350121  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:41:31.350209  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:41:31.361302  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1205 21:41:31.372251  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:41:31.372316  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:41:31.383250  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1205 21:41:31.393771  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:41:31.393830  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:41:31.404949  357912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:41:31.416349  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:31.518522  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:32.687862  357912 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.169290848s)
	I1205 21:41:32.687902  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:32.918041  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:33.001916  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:33.088916  357912 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:41:33.089029  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:33.589452  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:34.089830  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:34.589399  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:34.606029  357912 api_server.go:72] duration metric: took 1.517086306s to wait for apiserver process to appear ...
	I1205 21:41:34.606071  357912 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:41:34.606100  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:31.465575  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:31.466129  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:31.466149  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:31.466051  359172 retry.go:31] will retry after 1.375463745s: waiting for machine to come up
	I1205 21:41:32.843149  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:32.843640  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:32.843672  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:32.843577  359172 retry.go:31] will retry after 1.414652744s: waiting for machine to come up
	I1205 21:41:34.259549  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:34.260076  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:34.260106  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:34.260026  359172 retry.go:31] will retry after 2.845213342s: waiting for machine to come up
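	(Editor's note) The retry.go lines from the old-k8s-version VM show the wait-for-machine loop backing off from roughly 1.3s toward several seconds between probes. A hedged Go sketch of that pattern, exponential back-off with jitter; the growth factor and jitter range here are illustrative, not minikube's actual constants:

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// waitForMachine probes isUp up to attempts times, sleeping a growing,
// jittered delay between probes so parallel waiters do not probe in lock-step.
func waitForMachine(isUp func() bool, attempts int) bool {
	delay := time.Second
	for i := 0; i < attempts; i++ {
		if isUp() {
			return true
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the base delay each round
	}
	return false
}

func main() {
	deadline := time.Now().Add(10 * time.Second)
	up := waitForMachine(func() bool { return time.Now().After(deadline) }, 10)
	fmt.Println("machine up:", up)
}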
	I1205 21:41:35.408016  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:35.408069  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:37.262251  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:41:37.262290  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:41:37.262311  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:37.319344  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:41:37.319389  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:41:37.606930  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:37.611927  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:41:37.611962  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:41:38.106614  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:38.111641  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:41:38.111677  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:41:38.606218  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:38.613131  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 200:
	ok
	I1205 21:41:38.628002  357912 api_server.go:141] control plane version: v1.31.2
	I1205 21:41:38.628040  357912 api_server.go:131] duration metric: took 4.021961685s to wait for apiserver health ...
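	(Editor's note) The healthz sequence above is the usual apiserver start-up progression: 403 before the RBAC bootstrap roles exist, 500 while individual poststarthooks are still failing, then 200 once everything is registered. A minimal Go sketch of such a poller; the timeouts are assumed, and TLS verification is skipped because the probe only cares about readiness, not identity:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns 200, treating transport errors and
// non-200 codes (403, 500) as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			code := resp.StatusCode
			resp.Body.Close()
			if code == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", code)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.106:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}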
	I1205 21:41:38.628050  357912 cni.go:84] Creating CNI manager for ""
	I1205 21:41:38.628057  357912 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:38.630126  357912 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:41:38.631655  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:41:38.645320  357912 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
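	(Editor's note) The 496-byte /etc/cni/net.d/1-k8s.conflist written here configures the bridge CNI recommended a few lines earlier. Its exact contents are not shown in the log, so the sketch below only illustrates the general shape of a bridge conflist with host-local IPAM on the 10.244.0.0/16 pod CIDR this cluster uses; every field value is an assumption:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Illustrative bridge + portmap chain; not necessarily the file minikube writes.
	conflist := map[string]interface{}{
		"cniVersion": "0.4.0",
		"name":       "bridge",
		"plugins": []map[string]interface{}{
			{
				"type":        "bridge",
				"bridge":      "bridge",
				"isGateway":   true,
				"ipMasq":      true,
				"hairpinMode": true,
				"ipam": map[string]interface{}{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}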
	I1205 21:41:38.668869  357912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:41:38.680453  357912 system_pods.go:59] 8 kube-system pods found
	I1205 21:41:38.680493  357912 system_pods.go:61] "coredns-7c65d6cfc9-mll8z" [fcea0826-1093-43ce-87d0-26fb19447609] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 21:41:38.680501  357912 system_pods.go:61] "etcd-default-k8s-diff-port-751353" [dbade41d-2f53-45d5-aeb6-9b7df2565ef6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 21:41:38.680509  357912 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751353" [b94221d1-ab02-4406-9f34-156813ddfd4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 21:41:38.680516  357912 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751353" [70669ec2-cddf-4ec9-a3d6-ee7cab8cd75c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 21:41:38.680521  357912 system_pods.go:61] "kube-proxy-b4ws4" [d2620959-e3e4-4575-af26-243207a83495] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 21:41:38.680526  357912 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751353" [4a519b64-0ddd-425c-a8d6-de52e3508f80] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 21:41:38.680537  357912 system_pods.go:61] "metrics-server-6867b74b74-xb867" [6ac4cc31-ed56-44b9-9a83-76296436bc34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:41:38.680541  357912 system_pods.go:61] "storage-provisioner" [aabf9cc9-c416-4db2-97b0-23533dd76c28] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 21:41:38.680549  357912 system_pods.go:74] duration metric: took 11.655012ms to wait for pod list to return data ...
	I1205 21:41:38.680557  357912 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:41:38.685260  357912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:41:38.685290  357912 node_conditions.go:123] node cpu capacity is 2
	I1205 21:41:38.685302  357912 node_conditions.go:105] duration metric: took 4.740612ms to run NodePressure ...
	I1205 21:41:38.685335  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:38.997715  357912 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 21:41:39.003388  357912 kubeadm.go:739] kubelet initialised
	I1205 21:41:39.003422  357912 kubeadm.go:740] duration metric: took 5.675839ms waiting for restarted kubelet to initialise ...
	I1205 21:41:39.003435  357912 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:41:39.008779  357912 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.015438  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.015469  357912 pod_ready.go:82] duration metric: took 6.659336ms for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.015480  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.015487  357912 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.022944  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.022979  357912 pod_ready.go:82] duration metric: took 7.480121ms for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.022992  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.023000  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.030021  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.030060  357912 pod_ready.go:82] duration metric: took 7.051363ms for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.030077  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.030087  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.074051  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.074103  357912 pod_ready.go:82] duration metric: took 44.006019ms for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.074130  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.074142  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.472623  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-proxy-b4ws4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.472654  357912 pod_ready.go:82] duration metric: took 398.499259ms for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.472665  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-proxy-b4ws4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.472673  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.873821  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.873863  357912 pod_ready.go:82] duration metric: took 401.179066ms for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.873887  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.873914  357912 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:40.272289  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:40.272322  357912 pod_ready.go:82] duration metric: took 398.392874ms for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:40.272338  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:40.272349  357912 pod_ready.go:39] duration metric: took 1.268896186s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
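	(Editor's note) The pod_ready block above polls each system-critical pod for the Ready condition but deliberately skips the wait while the hosting node itself is not Ready (the "(skipping!)" errors). A simplified client-go sketch of that check for a single pod; the pod name and kubeconfig path are taken from the log purely for illustration, and the behaviour is a simplification of minikube's actual loop:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20053-293485/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "coredns-7c65d6cfc9-mll8z", metav1.GetOptions{})
		if err == nil {
			node, nerr := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{})
			if nerr == nil && !nodeReady(node) {
				fmt.Println("node not Ready, skipping the wait for this pod")
				return
			}
			if podReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}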
	I1205 21:41:40.272381  357912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:41:40.284524  357912 ops.go:34] apiserver oom_adj: -16
	I1205 21:41:40.284549  357912 kubeadm.go:597] duration metric: took 9.052545962s to restartPrimaryControlPlane
	I1205 21:41:40.284576  357912 kubeadm.go:394] duration metric: took 9.09933298s to StartCluster
	I1205 21:41:40.284597  357912 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:40.284680  357912 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:41:40.286372  357912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:40.286676  357912 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.106 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:41:40.286766  357912 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:41:40.286905  357912 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-751353"
	I1205 21:41:40.286928  357912 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-751353"
	I1205 21:41:40.286933  357912 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-751353"
	I1205 21:41:40.286985  357912 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-751353"
	I1205 21:41:40.286986  357912 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-751353"
	I1205 21:41:40.287022  357912 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-751353"
	W1205 21:41:40.286939  357912 addons.go:243] addon storage-provisioner should already be in state true
	W1205 21:41:40.287039  357912 addons.go:243] addon metrics-server should already be in state true
	I1205 21:41:40.287110  357912 host.go:66] Checking if "default-k8s-diff-port-751353" exists ...
	I1205 21:41:40.286937  357912 config.go:182] Loaded profile config "default-k8s-diff-port-751353": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:41:40.287215  357912 host.go:66] Checking if "default-k8s-diff-port-751353" exists ...
	I1205 21:41:40.287507  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.287571  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.287640  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.287577  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.287688  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.287824  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.288418  357912 out.go:177] * Verifying Kubernetes components...
	I1205 21:41:40.289707  357912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:40.304423  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45233
	I1205 21:41:40.304453  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I1205 21:41:40.304433  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38023
	I1205 21:41:40.304933  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.305518  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.305712  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.305741  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.306151  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.306169  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.306548  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.306829  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.307143  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.307153  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.307800  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.307824  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.308518  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.308565  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.308987  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.309564  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.309596  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.311352  357912 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-751353"
	W1205 21:41:40.311374  357912 addons.go:243] addon default-storageclass should already be in state true
	I1205 21:41:40.311408  357912 host.go:66] Checking if "default-k8s-diff-port-751353" exists ...
	I1205 21:41:40.311880  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.311929  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.325059  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36109
	I1205 21:41:40.325663  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.326356  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.326400  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.326752  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.326942  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.327767  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I1205 21:41:40.328173  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.328657  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.328678  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.328768  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:40.328984  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.329370  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.329409  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.329811  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I1205 21:41:40.330230  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.330631  357912 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 21:41:40.330708  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.330726  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.331052  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.331216  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.332202  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 21:41:40.332226  357912 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 21:41:40.332260  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:40.333642  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:40.335436  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.335614  357912 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:37.107579  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:37.108121  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:37.108153  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:37.108064  359172 retry.go:31] will retry after 2.969209087s: waiting for machine to come up
	I1205 21:41:40.079008  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:40.079546  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:40.079631  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:40.079495  359172 retry.go:31] will retry after 4.062877726s: waiting for machine to come up
	I1205 21:41:40.335902  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:40.335936  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.336055  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:40.336244  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:40.336387  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:40.336516  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:40.337155  357912 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:41:40.337173  357912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 21:41:40.337195  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:40.339861  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.340258  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:40.340291  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.340556  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:40.340737  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:40.340888  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:40.341009  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:40.353260  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42177
	I1205 21:41:40.353780  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.354465  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.354495  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.354914  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.355181  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.357128  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:40.357445  357912 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 21:41:40.357466  357912 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 21:41:40.357487  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:40.360926  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.361410  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:40.361436  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.361753  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:40.361968  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:40.362143  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:40.362304  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:40.489718  357912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:40.506486  357912 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-751353" to be "Ready" ...
	I1205 21:41:40.575280  357912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:41:40.594938  357912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 21:41:40.709917  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 21:41:40.709953  357912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 21:41:40.766042  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 21:41:40.766076  357912 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 21:41:40.841338  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:41:40.841371  357912 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 21:41:40.890122  357912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:41:41.864084  357912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.269106426s)
	I1205 21:41:41.864153  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864168  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864080  357912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.288748728s)
	I1205 21:41:41.864273  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864294  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864544  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.864563  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.864592  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.864614  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.864614  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Closing plugin on server side
	I1205 21:41:41.864623  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864641  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864682  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864714  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864909  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.864929  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.865021  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Closing plugin on server side
	I1205 21:41:41.865050  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.865073  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.873134  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.873158  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.873488  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.873517  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.896304  357912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.006129117s)
	I1205 21:41:41.896383  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.896401  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.896726  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.896749  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.896760  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.896770  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.897064  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.897084  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.897097  357912 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-751353"
	I1205 21:41:41.899809  357912 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1205 21:41:40.409151  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:40.409197  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:41.901166  357912 addons.go:510] duration metric: took 1.61441521s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
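
The addons.go lines above show the flow for default-k8s-diff-port-751353: each manifest is copied to /etc/kubernetes/addons on the guest and then applied with the bundled kubectl under the cluster kubeconfig. Below is a minimal sketch of that final apply step, assuming a kubectl binary on PATH and reusing the paths printed in the log; it is an illustration, not minikube's actual addons.go.

// Sketch: apply every manifest in an addons directory with kubectl,
// mirroring the "kubectl apply -f ..." invocations recorded above.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func applyAddons(kubeconfig, addonsDir string) error {
	// kubectl accepts a directory for -f and applies each manifest inside it.
	cmd := exec.Command("kubectl", "apply", "-f", addonsDir)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
	return nil
}

func main() {
	// Paths taken from the log; in the real run the command is executed
	// on the guest over SSH with the bundled kubectl, not on the host.
	if err := applyAddons("/var/lib/minikube/kubeconfig", "/etc/kubernetes/addons"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
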
	I1205 21:41:42.512064  357912 node_ready.go:53] node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:45.011050  357912 node_ready.go:53] node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:44.147162  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.147843  358357 main.go:141] libmachine: (old-k8s-version-601806) Found IP for machine: 192.168.61.123
	I1205 21:41:44.147874  358357 main.go:141] libmachine: (old-k8s-version-601806) Reserving static IP address...
	I1205 21:41:44.147892  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has current primary IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.148399  358357 main.go:141] libmachine: (old-k8s-version-601806) Reserved static IP address: 192.168.61.123
	I1205 21:41:44.148443  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "old-k8s-version-601806", mac: "52:54:00:11:1e:c8", ip: "192.168.61.123"} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.148458  358357 main.go:141] libmachine: (old-k8s-version-601806) Waiting for SSH to be available...
	I1205 21:41:44.148487  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | skip adding static IP to network mk-old-k8s-version-601806 - found existing host DHCP lease matching {name: "old-k8s-version-601806", mac: "52:54:00:11:1e:c8", ip: "192.168.61.123"}
	I1205 21:41:44.148519  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | Getting to WaitForSSH function...
	I1205 21:41:44.151017  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.151372  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.151406  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.151544  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | Using SSH client type: external
	I1205 21:41:44.151575  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa (-rw-------)
	I1205 21:41:44.151611  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:41:44.151629  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | About to run SSH command:
	I1205 21:41:44.151656  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | exit 0
	I1205 21:41:44.282019  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | SSH cmd err, output: <nil>: 
	I1205 21:41:44.282419  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetConfigRaw
	I1205 21:41:44.283146  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:44.285924  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.286335  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.286365  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.286633  358357 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/config.json ...
	I1205 21:41:44.286844  358357 machine.go:93] provisionDockerMachine start ...
	I1205 21:41:44.286865  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:44.287119  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.289692  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.290060  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.290090  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.290192  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.290392  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.290567  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.290726  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.290904  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:44.291168  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:44.291183  358357 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:41:44.410444  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:41:44.410483  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:41:44.410769  358357 buildroot.go:166] provisioning hostname "old-k8s-version-601806"
	I1205 21:41:44.410800  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:41:44.410975  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.414019  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.414402  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.414437  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.414618  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.414822  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.415001  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.415139  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.415384  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:44.415620  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:44.415639  358357 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-601806 && echo "old-k8s-version-601806" | sudo tee /etc/hostname
	I1205 21:41:44.544783  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-601806
	
	I1205 21:41:44.544820  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.547980  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.548253  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.548284  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.548548  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.548806  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.549015  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.549199  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.549363  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:44.549596  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:44.549625  358357 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-601806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-601806/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-601806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:41:44.675051  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
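
The provisioning commands above (hostname and /etc/hosts setup) run over SSH as the docker user with the machine's id_rsa key. A minimal sketch of that pattern with golang.org/x/crypto/ssh follows, reusing the address and key path printed in the log; it is illustrative only, not minikube's sshutil/provision code.

// Sketch: run a provisioning command on the guest over SSH.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs have throwaway host keys
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH("192.168.61.123:22", "docker",
		"/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa",
		`sudo hostname old-k8s-version-601806 && hostname`)
	fmt.Println(out, err)
}
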
	I1205 21:41:44.675089  358357 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:41:44.675133  358357 buildroot.go:174] setting up certificates
	I1205 21:41:44.675147  358357 provision.go:84] configureAuth start
	I1205 21:41:44.675161  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:41:44.675484  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:44.678325  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.678651  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.678670  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.678845  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.681024  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.681380  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.681419  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.681555  358357 provision.go:143] copyHostCerts
	I1205 21:41:44.681614  358357 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:41:44.681635  358357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:41:44.681692  358357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:41:44.681807  358357 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:41:44.681818  358357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:41:44.681840  358357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:41:44.681895  358357 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:41:44.681923  358357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:41:44.681950  358357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:41:44.682008  358357 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-601806 san=[127.0.0.1 192.168.61.123 localhost minikube old-k8s-version-601806]
	I1205 21:41:44.920345  358357 provision.go:177] copyRemoteCerts
	I1205 21:41:44.920412  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:41:44.920445  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.923237  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.923573  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.923617  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.923858  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.924082  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.924266  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.924408  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.013123  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:41:45.037220  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 21:41:45.061460  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:41:45.086412  358357 provision.go:87] duration metric: took 411.247612ms to configureAuth
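
provision.go:117 above generates a server certificate whose SANs cover 127.0.0.1, 192.168.61.123, localhost, minikube and old-k8s-version-601806 before the PEM files are copied into /etc/docker. Below is a minimal, self-signed sketch of building such a certificate with crypto/x509; the real flow signs with the CA key under .minikube/certs rather than self-signing.

// Sketch: create a server certificate carrying the SANs listed in the log.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-601806"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the provisioning log: the addresses and names the
		// server certificate must cover.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.123")},
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-601806"},
	}
	// Self-signed here for brevity: template doubles as the parent.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
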
	I1205 21:41:45.086449  358357 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:41:45.086670  358357 config.go:182] Loaded profile config "old-k8s-version-601806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 21:41:45.086772  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.089593  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.090011  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.090044  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.090279  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.090515  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.090695  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.090845  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.091119  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:45.091338  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:45.091355  358357 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:41:45.320779  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:41:45.320809  358357 machine.go:96] duration metric: took 1.033951427s to provisionDockerMachine
	I1205 21:41:45.320822  358357 start.go:293] postStartSetup for "old-k8s-version-601806" (driver="kvm2")
	I1205 21:41:45.320833  358357 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:41:45.320864  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.321259  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:41:45.321295  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.324521  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.324898  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.324926  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.325061  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.325278  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.325449  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.325608  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.413576  358357 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:41:45.418099  358357 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:41:45.418129  358357 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:41:45.418192  358357 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:41:45.418313  358357 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:41:45.418436  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:41:45.428537  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:45.453505  358357 start.go:296] duration metric: took 132.665138ms for postStartSetup
	I1205 21:41:45.453578  358357 fix.go:56] duration metric: took 20.301569608s for fixHost
	I1205 21:41:45.453610  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.456671  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.457095  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.457119  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.457317  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.457534  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.457723  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.457851  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.458100  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:45.458291  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:45.458303  358357 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:41:45.574874  357296 start.go:364] duration metric: took 55.701965725s to acquireMachinesLock for "embed-certs-425614"
	I1205 21:41:45.574934  357296 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:41:45.574944  357296 fix.go:54] fixHost starting: 
	I1205 21:41:45.575470  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:45.575532  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:45.593184  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39281
	I1205 21:41:45.593628  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:45.594222  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:41:45.594249  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:45.594599  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:45.594797  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:41:45.594945  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:41:45.596532  357296 fix.go:112] recreateIfNeeded on embed-certs-425614: state=Stopped err=<nil>
	I1205 21:41:45.596560  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	W1205 21:41:45.596698  357296 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:41:45.598630  357296 out.go:177] * Restarting existing kvm2 VM for "embed-certs-425614" ...
	I1205 21:41:45.574677  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434905.556875765
	
	I1205 21:41:45.574707  358357 fix.go:216] guest clock: 1733434905.556875765
	I1205 21:41:45.574720  358357 fix.go:229] Guest: 2024-12-05 21:41:45.556875765 +0000 UTC Remote: 2024-12-05 21:41:45.453584649 +0000 UTC m=+209.931227837 (delta=103.291116ms)
	I1205 21:41:45.574744  358357 fix.go:200] guest clock delta is within tolerance: 103.291116ms
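
fix.go above reads the guest clock with `date +%s.%N`, compares it with the host clock, and accepts the machine when the delta stays within tolerance. A minimal sketch of that comparison follows, using the guest timestamp captured in the log and an assumed tolerance of a few seconds.

// Sketch: parse the guest's `date +%s.%N` output and compute the drift.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns "1733434905.556875765" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// pad/truncate the fractional part to nine digits of nanoseconds
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1733434905.556875765") // value captured in the log above
	if err != nil {
		panic(err)
	}
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta: %v (assumed tolerance: 2s)\n", delta)
}
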
	I1205 21:41:45.574749  358357 start.go:83] releasing machines lock for "old-k8s-version-601806", held for 20.422787607s
	I1205 21:41:45.574777  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.575102  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:45.578097  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.578534  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.578565  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.578786  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.579457  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.579662  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.579786  358357 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:41:45.579845  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.579919  358357 ssh_runner.go:195] Run: cat /version.json
	I1205 21:41:45.579944  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.582811  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.582951  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.583117  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.583153  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.583388  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.583409  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.583436  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.583601  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.583609  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.583801  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.583868  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.583990  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.584026  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.584185  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.667101  358357 ssh_runner.go:195] Run: systemctl --version
	I1205 21:41:45.694059  358357 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:41:45.843409  358357 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:41:45.849628  358357 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:41:45.849714  358357 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:41:45.867490  358357 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:41:45.867526  358357 start.go:495] detecting cgroup driver to use...
	I1205 21:41:45.867613  358357 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:41:45.887817  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:41:45.902760  358357 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:41:45.902837  358357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:41:45.921492  358357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:41:45.938236  358357 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:41:46.094034  358357 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:41:46.313078  358357 docker.go:233] disabling docker service ...
	I1205 21:41:46.313159  358357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:41:46.330094  358357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:41:46.348887  358357 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:41:46.539033  358357 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:41:46.664752  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:41:46.681892  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:41:46.703802  358357 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 21:41:46.703907  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.716808  358357 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:41:46.716869  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.728088  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.739606  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.750998  358357 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:41:46.763097  358357 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:41:46.773657  358357 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:41:46.773720  358357 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:41:46.787789  358357 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:41:46.799018  358357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:46.920247  358357 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:41:47.024151  358357 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:41:47.024236  358357 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:41:47.029240  358357 start.go:563] Will wait 60s for crictl version
	I1205 21:41:47.029326  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:47.033665  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:41:47.072480  358357 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:41:47.072588  358357 ssh_runner.go:195] Run: crio --version
	I1205 21:41:47.110829  358357 ssh_runner.go:195] Run: crio --version
	I1205 21:41:47.141698  358357 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1205 21:41:45.600135  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Start
	I1205 21:41:45.600390  357296 main.go:141] libmachine: (embed-certs-425614) Ensuring networks are active...
	I1205 21:41:45.601186  357296 main.go:141] libmachine: (embed-certs-425614) Ensuring network default is active
	I1205 21:41:45.601636  357296 main.go:141] libmachine: (embed-certs-425614) Ensuring network mk-embed-certs-425614 is active
	I1205 21:41:45.602188  357296 main.go:141] libmachine: (embed-certs-425614) Getting domain xml...
	I1205 21:41:45.603057  357296 main.go:141] libmachine: (embed-certs-425614) Creating domain...
	I1205 21:41:47.045240  357296 main.go:141] libmachine: (embed-certs-425614) Waiting to get IP...
	I1205 21:41:47.046477  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.047047  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.047150  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.047040  359359 retry.go:31] will retry after 219.743522ms: waiting for machine to come up
	I1205 21:41:47.268762  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.269407  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.269442  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.269336  359359 retry.go:31] will retry after 242.318322ms: waiting for machine to come up
	I1205 21:41:45.410351  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:45.410420  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:45.616395  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": read tcp 192.168.50.1:48034->192.168.50.141:8443: read: connection reset by peer
	I1205 21:41:45.906800  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:45.907594  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": dial tcp 192.168.50.141:8443: connect: connection refused
	I1205 21:41:46.407096  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
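
The api_server.go lines for process 357831 show the health-check loop: GET https://192.168.50.141:8443/healthz with a short per-request timeout, treating connection refused, resets, and client timeouts as "not ready yet" and retrying until an overall deadline. A minimal sketch of that pattern follows (assumed 5s per-request timeout and 2m overall deadline; not minikube's api_server.go).

// Sketch: poll an apiserver /healthz endpoint until it returns 200 or the
// overall deadline expires.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, overall time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second, // per-request limit, like the "Client.Timeout exceeded" errors above
		Transport: &http.Transport{
			// The apiserver's certificate is not trusted by the host in this setup.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(overall)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		// Any error (refused, reset, timeout) just means "try again".
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s did not become healthy within %s", url, overall)
}

func main() {
	if err := waitForHealthz("https://192.168.50.141:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
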
	I1205 21:41:47.011671  357912 node_ready.go:53] node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:48.011005  357912 node_ready.go:49] node "default-k8s-diff-port-751353" has status "Ready":"True"
	I1205 21:41:48.011040  357912 node_ready.go:38] duration metric: took 7.504506203s for node "default-k8s-diff-port-751353" to be "Ready" ...
	I1205 21:41:48.011060  357912 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:41:48.021950  357912 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:48.038141  357912 pod_ready.go:93] pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:48.038176  357912 pod_ready.go:82] duration metric: took 16.187757ms for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:48.038191  357912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:50.046001  357912 pod_ready.go:103] pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"False"
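
node_ready.go and pod_ready.go above poll the API until the node reports Ready and the system-critical pods follow. A minimal client-go sketch of the node half of that wait is shown below; the kubeconfig path is the in-guest path from the log, used here only as a placeholder, and this is not minikube's node_ready.go.

// Sketch: poll a node's Ready condition until it is True or a deadline passes.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute) // same 6m budget as the log
	for time.Now().Before(deadline) {
		if ok, err := nodeReady(cs, "default-k8s-diff-port-751353"); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for node to become Ready")
}
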
	I1205 21:41:47.143015  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:47.146059  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:47.146503  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:47.146536  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:47.146811  358357 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1205 21:41:47.151654  358357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:47.164839  358357 kubeadm.go:883] updating cluster {Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:41:47.165019  358357 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 21:41:47.165090  358357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:47.213546  358357 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 21:41:47.213640  358357 ssh_runner.go:195] Run: which lz4
	I1205 21:41:47.219695  358357 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:41:47.224752  358357 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:41:47.224801  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1205 21:41:48.787144  358357 crio.go:462] duration metric: took 1.567500675s to copy over tarball
	I1205 21:41:48.787253  358357 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
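Note (illustrative only, not part of the captured log): the "couldn't find preloaded image" decision above comes from listing the node's images with crictl and looking for the pinned kube-apiserver tag. A minimal standalone sketch of that check in Go, assuming crictl is installed on the node and reachable via sudo, could look like this; all names here are hypothetical, not minikube source.

// preloadcheck.go: sketch of a "is the preload already applied?" check:
// list images via `crictl images --output json` and search the repo tags.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var imgs crictlImages
	if err := json.Unmarshal(out, &imgs); err != nil {
		return false, err
	}
	for _, img := range imgs.Images {
		for _, t := range img.RepoTags {
			if t == tag {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.20.0")
	if err != nil {
		fmt.Println("crictl check failed:", err)
		return
	}
	if !ok {
		fmt.Println("image not found; assuming images are not preloaded")
	}
}

When the check fails, as in this run, the preload tarball is copied over and extracted, and the check is repeated after extraction.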
	I1205 21:41:47.514192  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.514819  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.514860  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.514767  359359 retry.go:31] will retry after 467.274164ms: waiting for machine to come up
	I1205 21:41:47.983367  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.983985  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.984015  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.983919  359359 retry.go:31] will retry after 577.298405ms: waiting for machine to come up
	I1205 21:41:48.562668  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:48.563230  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:48.563278  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:48.563175  359359 retry.go:31] will retry after 707.838313ms: waiting for machine to come up
	I1205 21:41:49.273409  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:49.273943  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:49.273977  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:49.273863  359359 retry.go:31] will retry after 908.711328ms: waiting for machine to come up
	I1205 21:41:50.183875  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:50.184278  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:50.184310  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:50.184225  359359 retry.go:31] will retry after 941.803441ms: waiting for machine to come up
	I1205 21:41:51.127915  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:51.128486  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:51.128549  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:51.128467  359359 retry.go:31] will retry after 1.289932898s: waiting for machine to come up
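Aside (a sketch, not the report's retry.go): the "will retry after ..." lines above are a wait loop that re-queries libvirt for the domain's IP with a growing, jittered delay until the machine comes up. Under those assumptions, the pattern looks roughly like this; lookupIP and the interval growth factor are invented for illustration.

// waitforip.go: retry-with-backoff wait loop in the spirit of the lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the real "ask libvirt for the domain's current IP" call.
func lookupIP() (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter, much like the increasing intervals logged above.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	if _, err := waitForIP(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}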
	I1205 21:41:51.407970  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:51.408037  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:52.046717  357912 pod_ready.go:103] pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"False"
	I1205 21:41:54.367409  357912 pod_ready.go:93] pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.367441  357912 pod_ready.go:82] duration metric: took 6.32924141s for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.367457  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.373495  357912 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.373546  357912 pod_ready.go:82] duration metric: took 6.066723ms for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.373565  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.380982  357912 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.381010  357912 pod_ready.go:82] duration metric: took 7.434049ms for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.381024  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.387297  357912 pod_ready.go:93] pod "kube-proxy-b4ws4" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.387321  357912 pod_ready.go:82] duration metric: took 6.290388ms for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.387331  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.392902  357912 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.392931  357912 pod_ready.go:82] duration metric: took 5.593155ms for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.392942  357912 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
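For orientation (a sketch under stated assumptions, not minikube's pod_ready implementation): the waits above poll each control-plane pod until its "Ready" condition is True or the 6m0s budget expires. The same check can be done from the outside by shelling out to kubectl with a JSONPath filter, assuming kubectl is on PATH and the current context points at the cluster under test.

// podready.go: poll a pod's Ready condition, comparable to the pod_ready.go waits above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func podReady(namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pod", name,
		"-n", namespace,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func waitPodReady(namespace, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if ok, err := podReady(namespace, name); err == nil && ok {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %v", namespace, name, timeout)
}

func main() {
	// Pod name taken from the log above purely as an example.
	if err := waitPodReady("kube-system", "etcd-default-k8s-diff-port-751353", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}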
	I1205 21:41:51.832182  358357 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.044870872s)
	I1205 21:41:51.832229  358357 crio.go:469] duration metric: took 3.045045829s to extract the tarball
	I1205 21:41:51.832241  358357 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 21:41:51.876863  358357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:51.916280  358357 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 21:41:51.916312  358357 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 21:41:51.916448  358357 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:51.916490  358357 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1205 21:41:51.916520  358357 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:51.916416  358357 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:51.916539  358357 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1205 21:41:51.916422  358357 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:51.916534  358357 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:51.916415  358357 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:51.918641  358357 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:51.918657  358357 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:51.918673  358357 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:51.918675  358357 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:51.918648  358357 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:51.918699  358357 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 21:41:51.918648  358357 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1205 21:41:51.918649  358357 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.084598  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.085487  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.085575  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.089387  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.097316  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.097466  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.143119  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1205 21:41:52.188847  358357 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1205 21:41:52.188903  358357 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.188964  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.249950  358357 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1205 21:41:52.249988  358357 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1205 21:41:52.250006  358357 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.250026  358357 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.250065  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.250070  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.250110  358357 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1205 21:41:52.250142  358357 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.250181  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.264329  358357 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1205 21:41:52.264458  358357 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.264384  358357 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1205 21:41:52.264539  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.264575  358357 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.264634  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.276286  358357 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1205 21:41:52.276339  358357 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 21:41:52.276369  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.276378  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.276383  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.276499  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.276544  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.277043  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.277127  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.383827  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.385512  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.385513  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:41:52.404747  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.413164  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.413203  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.413257  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.502227  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.551456  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:41:52.551634  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.551659  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.596670  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.596746  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.596677  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.649281  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1205 21:41:52.726027  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:41:52.726093  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1205 21:41:52.726149  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1205 21:41:52.726173  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1205 21:41:52.726266  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1205 21:41:52.726300  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1205 21:41:52.759125  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 21:41:52.856925  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:53.004246  358357 cache_images.go:92] duration metric: took 1.087915709s to LoadCachedImages
	W1205 21:41:53.004349  358357 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1205 21:41:53.004364  358357 kubeadm.go:934] updating node { 192.168.61.123 8443 v1.20.0 crio true true} ...
	I1205 21:41:53.004516  358357 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-601806 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:41:53.004596  358357 ssh_runner.go:195] Run: crio config
	I1205 21:41:53.053135  358357 cni.go:84] Creating CNI manager for ""
	I1205 21:41:53.053159  358357 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:53.053174  358357 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:41:53.053208  358357 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-601806 NodeName:old-k8s-version-601806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 21:41:53.053385  358357 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-601806"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:41:53.053465  358357 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1205 21:41:53.064225  358357 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:41:53.064320  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:41:53.074565  358357 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 21:41:53.091812  358357 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:41:53.111455  358357 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1205 21:41:53.131057  358357 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I1205 21:41:53.135026  358357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:53.148476  358357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:53.289114  358357 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:53.309855  358357 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806 for IP: 192.168.61.123
	I1205 21:41:53.309886  358357 certs.go:194] generating shared ca certs ...
	I1205 21:41:53.309923  358357 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:53.310122  358357 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:41:53.310176  358357 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:41:53.310202  358357 certs.go:256] generating profile certs ...
	I1205 21:41:53.310390  358357 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/client.key
	I1205 21:41:53.310485  358357 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.key.a6e43dea
	I1205 21:41:53.310568  358357 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.key
	I1205 21:41:53.310814  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:41:53.310866  358357 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:41:53.310880  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:41:53.310912  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:41:53.310960  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:41:53.311000  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:41:53.311072  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:53.312161  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:41:53.353059  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:41:53.386512  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:41:53.423583  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:41:53.463250  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 21:41:53.494884  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 21:41:53.529876  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:41:53.579695  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 21:41:53.606144  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:41:53.631256  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:41:53.656184  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:41:53.680842  358357 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:41:53.700705  358357 ssh_runner.go:195] Run: openssl version
	I1205 21:41:53.707800  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:41:53.719776  358357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:41:53.724558  358357 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:41:53.724630  358357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:41:53.731088  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:41:53.742620  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:41:53.754961  358357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:53.759594  358357 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:53.759669  358357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:53.765536  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:41:53.776756  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:41:53.789117  358357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:41:53.793629  358357 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:41:53.793707  358357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:41:53.799394  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:41:53.810660  358357 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:41:53.815344  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:41:53.821418  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:41:53.827800  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:41:53.834376  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:41:53.840645  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:41:53.847470  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
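Purely for illustration (not part of the log): the `-checkend 86400` invocations above ask whether each certificate will still be valid 24 hours from now, so an expiring cert can be regenerated before the restart. The equivalent check with Go's standard library would be along these lines; the file path is just an example taken from the log.

// checkend.go: sketch of the check done by `openssl x509 -checkend 86400`:
// does the certificate expire within the next 24 hours?
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// Expiring "within d" means the NotAfter timestamp falls before now+d.
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; would need regeneration")
	}
}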
	I1205 21:41:53.854401  358357 kubeadm.go:392] StartCluster: {Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:41:53.854504  358357 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:41:53.854569  358357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:53.893993  358357 cri.go:89] found id: ""
	I1205 21:41:53.894081  358357 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:41:53.904808  358357 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:41:53.904829  358357 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:41:53.904876  358357 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:41:53.915573  358357 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:41:53.916624  358357 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-601806" does not appear in /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:41:53.917310  358357 kubeconfig.go:62] /home/jenkins/minikube-integration/20053-293485/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-601806" cluster setting kubeconfig missing "old-k8s-version-601806" context setting]
	I1205 21:41:53.918211  358357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:53.978448  358357 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:41:53.989629  358357 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.123
	I1205 21:41:53.989674  358357 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:41:53.989707  358357 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:41:53.989791  358357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:54.027722  358357 cri.go:89] found id: ""
	I1205 21:41:54.027816  358357 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:41:54.045095  358357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:41:54.058119  358357 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:41:54.058145  358357 kubeadm.go:157] found existing configuration files:
	
	I1205 21:41:54.058211  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:41:54.070466  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:41:54.070563  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:41:54.081555  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:41:54.093332  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:41:54.093415  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:41:54.103877  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:41:54.114047  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:41:54.114117  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:41:54.126566  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:41:54.138673  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:41:54.138767  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:41:54.149449  358357 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:41:54.162818  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:54.294483  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:54.983905  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:55.218496  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:55.340478  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
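As an illustration of the restart path logged above (a sketch, not minikube's kubeadm.go): the control plane is rebuilt by running the individual kubeadm init phases, in order, against the regenerated /var/tmp/minikube/kubeadm.yaml, with the versioned binaries directory prepended to PATH as shown in the log. A condensed equivalent from Go would be:

// phases.go: invoke the kubeadm init phases seen above, in order.
// The PATH prefix and config path mirror the log; everything else is illustrative.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("sudo", append([]string{
			"env", "PATH=/var/lib/minikube/binaries/v1.20.0:" + os.Getenv("PATH"), "kubeadm",
		}, args...)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("kubeadm phase failed:", err)
			return
		}
	}
}

After the etcd phase, the log switches to waiting for the kube-apiserver process to appear before health checks begin.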
	I1205 21:41:55.440382  358357 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:41:55.440495  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:52.419705  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:52.420193  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:52.420230  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:52.420115  359359 retry.go:31] will retry after 1.684643705s: waiting for machine to come up
	I1205 21:41:54.106187  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:54.106714  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:54.106754  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:54.106660  359359 retry.go:31] will retry after 1.531754159s: waiting for machine to come up
	I1205 21:41:55.639991  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:55.640467  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:55.640503  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:55.640401  359359 retry.go:31] will retry after 2.722460669s: waiting for machine to come up
	I1205 21:41:56.409347  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:56.409397  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:56.399969  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:41:58.900439  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:41:55.941513  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:56.440634  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:56.941451  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:57.440602  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:57.940778  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:58.441396  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:58.941148  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:59.441320  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:59.941573  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:00.441005  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:58.366356  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:58.366849  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:58.366874  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:58.366805  359359 retry.go:31] will retry after 2.312099452s: waiting for machine to come up
	I1205 21:42:00.680417  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:00.680953  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:42:00.680977  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:42:00.680904  359359 retry.go:31] will retry after 3.145457312s: waiting for machine to come up
	I1205 21:42:01.410313  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:42:01.410382  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.204308  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:03.204353  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:03.204374  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.246513  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:03.246569  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:03.406787  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.411529  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:03.411571  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:03.907108  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.911621  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:03.911669  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:04.407111  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:04.416185  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:04.416225  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:04.906151  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:04.913432  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 200:
	ok
	I1205 21:42:04.923422  357831 api_server.go:141] control plane version: v1.31.2
	I1205 21:42:04.923466  357831 api_server.go:131] duration metric: took 40.017479306s to wait for apiserver health ...
	I1205 21:42:04.923479  357831 cni.go:84] Creating CNI manager for ""
	I1205 21:42:04.923488  357831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:42:04.925861  357831 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
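The 500 responses above come from post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) that have not finished yet; the wait loop simply keeps probing /healthz until the apiserver answers 200. A minimal Go sketch of that polling pattern, for illustration only (it is not minikube's api_server.go, it assumes anonymous access to /healthz, and it skips TLS verification, which the real check does not):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz keeps probing the apiserver's /healthz endpoint until it
// returns HTTP 200 or the overall timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Certificate verification is skipped only to keep the sketch
		// self-contained; minikube authenticates with cluster certs instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz answered "ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the probe cadence visible in the log
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.141:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}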
	I1205 21:42:01.399834  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:03.399888  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:00.941505  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:01.441014  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:01.940938  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:02.440702  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:02.940749  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:03.441519  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:03.941098  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:04.440754  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:04.941260  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:05.441179  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:03.830452  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.830997  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has current primary IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.831031  357296 main.go:141] libmachine: (embed-certs-425614) Found IP for machine: 192.168.72.8
	I1205 21:42:03.831046  357296 main.go:141] libmachine: (embed-certs-425614) Reserving static IP address...
	I1205 21:42:03.831505  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "embed-certs-425614", mac: "52:54:00:d8:bb:db", ip: "192.168.72.8"} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.831534  357296 main.go:141] libmachine: (embed-certs-425614) Reserved static IP address: 192.168.72.8
	I1205 21:42:03.831552  357296 main.go:141] libmachine: (embed-certs-425614) DBG | skip adding static IP to network mk-embed-certs-425614 - found existing host DHCP lease matching {name: "embed-certs-425614", mac: "52:54:00:d8:bb:db", ip: "192.168.72.8"}
	I1205 21:42:03.831566  357296 main.go:141] libmachine: (embed-certs-425614) Waiting for SSH to be available...
	I1205 21:42:03.831574  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Getting to WaitForSSH function...
	I1205 21:42:03.833969  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.834352  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.834388  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.834532  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Using SSH client type: external
	I1205 21:42:03.834550  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa (-rw-------)
	I1205 21:42:03.834569  357296 main.go:141] libmachine: (embed-certs-425614) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.8 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:42:03.834587  357296 main.go:141] libmachine: (embed-certs-425614) DBG | About to run SSH command:
	I1205 21:42:03.834598  357296 main.go:141] libmachine: (embed-certs-425614) DBG | exit 0
	I1205 21:42:03.962943  357296 main.go:141] libmachine: (embed-certs-425614) DBG | SSH cmd err, output: <nil>: 
	I1205 21:42:03.963457  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetConfigRaw
	I1205 21:42:03.964327  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:03.967583  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.968035  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.968069  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.968471  357296 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/config.json ...
	I1205 21:42:03.968788  357296 machine.go:93] provisionDockerMachine start ...
	I1205 21:42:03.968820  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:03.969139  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:03.972165  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.972515  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.972545  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.972636  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:03.972845  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:03.973079  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:03.973321  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:03.973541  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:03.973743  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:03.973756  357296 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:42:04.086658  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:42:04.086701  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:42:04.087004  357296 buildroot.go:166] provisioning hostname "embed-certs-425614"
	I1205 21:42:04.087040  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:42:04.087297  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.090622  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.091119  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.091157  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.091374  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.091647  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.091854  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.092065  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.092302  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:04.092559  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:04.092590  357296 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-425614 && echo "embed-certs-425614" | sudo tee /etc/hostname
	I1205 21:42:04.222630  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-425614
	
	I1205 21:42:04.222668  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.225969  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.226469  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.226507  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.226742  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.226966  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.227230  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.227436  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.227672  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:04.227862  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:04.227878  357296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-425614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-425614/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-425614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:42:04.351706  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:42:04.351775  357296 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:42:04.351853  357296 buildroot.go:174] setting up certificates
	I1205 21:42:04.351869  357296 provision.go:84] configureAuth start
	I1205 21:42:04.351894  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:42:04.352249  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:04.355753  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.356188  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.356232  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.356460  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.359365  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.359864  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.359911  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.360105  357296 provision.go:143] copyHostCerts
	I1205 21:42:04.360181  357296 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:42:04.360209  357296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:42:04.360287  357296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:42:04.360424  357296 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:42:04.360437  357296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:42:04.360470  357296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:42:04.360554  357296 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:42:04.360564  357296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:42:04.360592  357296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:42:04.360668  357296 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.embed-certs-425614 san=[127.0.0.1 192.168.72.8 embed-certs-425614 localhost minikube]
	I1205 21:42:04.632816  357296 provision.go:177] copyRemoteCerts
	I1205 21:42:04.632901  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:42:04.632942  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.636150  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.636618  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.636654  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.636828  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.637044  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.637271  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.637464  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:04.724883  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:42:04.754994  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 21:42:04.783996  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 21:42:04.810963  357296 provision.go:87] duration metric: took 459.073427ms to configureAuth
	I1205 21:42:04.811003  357296 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:42:04.811279  357296 config.go:182] Loaded profile config "embed-certs-425614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:42:04.811384  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.814420  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.814863  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.814996  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.815102  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.815346  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.815586  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.815767  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.815972  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:04.816238  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:04.816287  357296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:42:05.064456  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:42:05.064490  357296 machine.go:96] duration metric: took 1.095680989s to provisionDockerMachine
	I1205 21:42:05.064509  357296 start.go:293] postStartSetup for "embed-certs-425614" (driver="kvm2")
	I1205 21:42:05.064521  357296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:42:05.064560  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.064956  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:42:05.064997  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.068175  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.068618  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.068657  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.068994  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.069241  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.069449  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.069602  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:05.157732  357296 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:42:05.162706  357296 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:42:05.162752  357296 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:42:05.162845  357296 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:42:05.162920  357296 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:42:05.163016  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:42:05.179784  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:42:05.207166  357296 start.go:296] duration metric: took 142.636794ms for postStartSetup
	I1205 21:42:05.207223  357296 fix.go:56] duration metric: took 19.632279138s for fixHost
	I1205 21:42:05.207253  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.210923  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.211426  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.211463  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.211657  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.211896  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.212114  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.212282  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.212467  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:05.212723  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:05.212739  357296 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:42:05.327710  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434925.280377877
	
	I1205 21:42:05.327737  357296 fix.go:216] guest clock: 1733434925.280377877
	I1205 21:42:05.327749  357296 fix.go:229] Guest: 2024-12-05 21:42:05.280377877 +0000 UTC Remote: 2024-12-05 21:42:05.207229035 +0000 UTC m=+357.921750384 (delta=73.148842ms)
	I1205 21:42:05.327795  357296 fix.go:200] guest clock delta is within tolerance: 73.148842ms
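The fix.go lines above read the guest clock over SSH with `date +%s.%N` and compare it against the host clock, accepting the ~73ms delta as within tolerance. A small Go sketch of that comparison (the parsing helper and the one-second tolerance are assumptions for illustration, not minikube's values):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseUnixNano turns "seconds.nanoseconds" output from `date +%s.%N`
// into a time.Time.
func parseUnixNano(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseUnixNano("1733434925.280377877") // value taken from the log above
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = time.Second // assumed tolerance for the sketch
	fmt.Printf("clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}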
	I1205 21:42:05.327803  357296 start.go:83] releasing machines lock for "embed-certs-425614", held for 19.752893913s
	I1205 21:42:05.327826  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.328184  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:05.331359  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.331686  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.331722  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.331953  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.332650  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.332870  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.332999  357296 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:42:05.333104  357296 ssh_runner.go:195] Run: cat /version.json
	I1205 21:42:05.333112  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.333137  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.336283  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.336576  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.336749  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.336784  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.336987  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.337074  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.337123  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.337206  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.337228  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.337457  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.337475  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.337669  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.337668  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:05.337806  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:05.443865  357296 ssh_runner.go:195] Run: systemctl --version
	I1205 21:42:05.450866  357296 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:42:05.596799  357296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:42:05.603700  357296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:42:05.603781  357296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:42:05.619488  357296 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:42:05.619521  357296 start.go:495] detecting cgroup driver to use...
	I1205 21:42:05.619622  357296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:42:05.639018  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:42:05.655878  357296 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:42:05.655942  357296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:42:05.671883  357296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:42:05.691645  357296 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:42:05.804200  357296 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:42:05.997573  357296 docker.go:233] disabling docker service ...
	I1205 21:42:05.997702  357296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:42:06.014153  357296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:42:06.031828  357296 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:42:06.179266  357296 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:42:06.318806  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:42:06.332681  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:42:06.353528  357296 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:42:06.353615  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.365381  357296 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:42:06.365472  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.377020  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.389325  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.402399  357296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:42:06.414106  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.425792  357296 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.445787  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.457203  357296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:42:06.467275  357296 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:42:06.467356  357296 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:42:06.481056  357296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:42:06.492188  357296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:42:06.634433  357296 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:42:06.727916  357296 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:42:06.728007  357296 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:42:06.732581  357296 start.go:563] Will wait 60s for crictl version
	I1205 21:42:06.732645  357296 ssh_runner.go:195] Run: which crictl
	I1205 21:42:06.736545  357296 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:42:06.775945  357296 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:42:06.776069  357296 ssh_runner.go:195] Run: crio --version
	I1205 21:42:06.808556  357296 ssh_runner.go:195] Run: crio --version
	I1205 21:42:06.844968  357296 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 21:42:06.846380  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:06.849873  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:06.850366  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:06.850410  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:06.850664  357296 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 21:42:06.855593  357296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:42:06.869323  357296 kubeadm.go:883] updating cluster {Name:embed-certs-425614 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-425614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:42:06.869513  357296 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:42:06.869598  357296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:42:06.906593  357296 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 21:42:06.906667  357296 ssh_runner.go:195] Run: which lz4
	I1205 21:42:06.910838  357296 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:42:06.915077  357296 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:42:06.915129  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 21:42:04.927426  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:42:04.941208  357831 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 21:42:04.968170  357831 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:42:04.998847  357831 system_pods.go:59] 8 kube-system pods found
	I1205 21:42:04.998907  357831 system_pods.go:61] "coredns-7c65d6cfc9-k89d7" [8a72b3cc-863a-4a51-8592-f090d7de58cb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 21:42:04.998920  357831 system_pods.go:61] "etcd-no-preload-500648" [cafdfe7b-d749-4f0b-9ce1-4045e0dba5e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 21:42:04.998933  357831 system_pods.go:61] "kube-apiserver-no-preload-500648" [882b20c9-56f1-41e7-80a2-7781b05f021f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 21:42:04.998942  357831 system_pods.go:61] "kube-controller-manager-no-preload-500648" [d8746bd6-a884-4497-be4a-f88b4776cc19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 21:42:04.998952  357831 system_pods.go:61] "kube-proxy-tbcmd" [ef507fa3-fe13-47b2-909e-15a4d0544716] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 21:42:04.998958  357831 system_pods.go:61] "kube-scheduler-no-preload-500648" [6713250e-00ac-48db-ad2f-39b1867c00f3] Running
	I1205 21:42:04.998968  357831 system_pods.go:61] "metrics-server-6867b74b74-7xm6l" [0d8a7353-2449-4143-962e-fc837e598f56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:42:04.998979  357831 system_pods.go:61] "storage-provisioner" [a0d29dee-08f6-43f8-9d02-6bda96fe0c85] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 21:42:04.998988  357831 system_pods.go:74] duration metric: took 30.786075ms to wait for pod list to return data ...
	I1205 21:42:04.999002  357831 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:42:05.005560  357831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:42:05.005611  357831 node_conditions.go:123] node cpu capacity is 2
	I1205 21:42:05.005630  357831 node_conditions.go:105] duration metric: took 6.621222ms to run NodePressure ...
	I1205 21:42:05.005659  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:05.417060  357831 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 21:42:05.423873  357831 kubeadm.go:739] kubelet initialised
	I1205 21:42:05.423903  357831 kubeadm.go:740] duration metric: took 6.807257ms waiting for restarted kubelet to initialise ...
	I1205 21:42:05.423914  357831 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:42:05.429965  357831 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:07.440042  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:05.400253  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:07.401405  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:09.901336  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
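The pod_ready.go entries above keep reporting "Ready":"False" until the metrics-server (and, for the other profile, coredns) containers come up; what they poll is the pod's PodReady condition. Checking that condition with client-go looks roughly like this (a sketch, not minikube's helper; the kubeconfig path and pod name are taken from context and are illustrative only):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True, which is
// exactly what the "Ready":"False" log entries are waiting on.
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Any admin kubeconfig works; this path is an assumption for the sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
		"metrics-server-6867b74b74-xb867", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
}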
	I1205 21:42:05.941258  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:06.440780  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:06.940790  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:07.441097  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:07.941334  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:08.440670  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:08.941230  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:09.441317  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:09.941664  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:10.440620  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:08.325757  357296 crio.go:462] duration metric: took 1.41497545s to copy over tarball
	I1205 21:42:08.325937  357296 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 21:42:10.566636  357296 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.240649211s)
	I1205 21:42:10.566679  357296 crio.go:469] duration metric: took 2.240881092s to extract the tarball
	I1205 21:42:10.566690  357296 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 21:42:10.604048  357296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:42:10.648218  357296 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 21:42:10.648245  357296 cache_images.go:84] Images are preloaded, skipping loading
	I1205 21:42:10.648254  357296 kubeadm.go:934] updating node { 192.168.72.8 8443 v1.31.2 crio true true} ...
	I1205 21:42:10.648380  357296 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-425614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-425614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:42:10.648472  357296 ssh_runner.go:195] Run: crio config
	I1205 21:42:10.694426  357296 cni.go:84] Creating CNI manager for ""
	I1205 21:42:10.694457  357296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:42:10.694470  357296 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:42:10.694494  357296 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.8 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-425614 NodeName:embed-certs-425614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.8 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:42:10.694626  357296 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.8
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-425614"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.8"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.8"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:42:10.694700  357296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:42:10.707043  357296 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:42:10.707116  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:42:10.717088  357296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1205 21:42:10.735095  357296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:42:10.753994  357296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
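For reference, the kubeadm.yaml.new written above is a single file that bundles four API objects (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) separated by --- lines, as the config dump earlier shows. A stdlib-only Go sketch that splits such a file and lists the kinds; the embedded YAML here is trimmed to the kind lines purely for brevity:

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// kubeadmYAML stands in for the generated config shown above, trimmed down.
const kubeadmYAML = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`

func main() {
	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
	// Each "---" separated document is one Kubernetes API object.
	for i, doc := range strings.Split(kubeadmYAML, "\n---\n") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			fmt.Printf("document %d: %s\n", i+1, m[1])
		}
	}
}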
	I1205 21:42:10.771832  357296 ssh_runner.go:195] Run: grep 192.168.72.8	control-plane.minikube.internal$ /etc/hosts
	I1205 21:42:10.776949  357296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.8	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:42:10.789761  357296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:42:10.937235  357296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:42:10.959030  357296 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614 for IP: 192.168.72.8
	I1205 21:42:10.959073  357296 certs.go:194] generating shared ca certs ...
	I1205 21:42:10.959107  357296 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:42:10.959307  357296 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:42:10.959366  357296 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:42:10.959378  357296 certs.go:256] generating profile certs ...
	I1205 21:42:10.959508  357296 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/client.key
	I1205 21:42:10.959581  357296 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/apiserver.key.a8dcad40
	I1205 21:42:10.959631  357296 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/proxy-client.key
	I1205 21:42:10.959747  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:42:10.959807  357296 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:42:10.959822  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:42:10.959855  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:42:10.959889  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:42:10.959924  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:42:10.959977  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:42:10.960886  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:42:10.999249  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:42:11.035379  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:42:11.069796  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:42:11.103144  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1205 21:42:11.144531  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 21:42:11.183637  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:42:11.208780  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 21:42:11.237378  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:42:11.262182  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:42:11.287003  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:42:11.311375  357296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:42:11.330529  357296 ssh_runner.go:195] Run: openssl version
	I1205 21:42:11.336346  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:42:11.347306  357296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:42:11.352107  357296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:42:11.352179  357296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:42:11.357939  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:42:11.369013  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:42:11.380244  357296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:42:11.384671  357296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:42:11.384747  357296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:42:11.390330  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:42:11.402029  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:42:11.413047  357296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:42:11.417617  357296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:42:11.417707  357296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:42:11.423562  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:42:11.434978  357296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:42:11.439887  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:42:11.446653  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:42:11.453390  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:42:11.460104  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:42:11.466281  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:42:11.472205  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
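
The `-checkend 86400` calls above ask whether each control-plane certificate remains valid for at least another 86400 seconds (24h). A minimal Go sketch of the same check is shown below; it is hypothetical, not minikube's code, and the certificate path is simply taken from the log above.

// Hypothetical sketch: equivalent of `openssl x509 -noout -in <cert> -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Path taken from the log above; any PEM-encoded certificate works.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM certificate found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Fail if the certificate expires within the next 86400 seconds.
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate valid for at least another 24h")
}
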
	I1205 21:42:11.478395  357296 kubeadm.go:392] StartCluster: {Name:embed-certs-425614 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-425614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:42:11.478534  357296 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:42:11.478604  357296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:42:11.519447  357296 cri.go:89] found id: ""
	I1205 21:42:11.519540  357296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:42:11.530882  357296 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:42:11.530915  357296 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:42:11.530967  357296 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:42:11.541349  357296 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:42:11.542457  357296 kubeconfig.go:125] found "embed-certs-425614" server: "https://192.168.72.8:8443"
	I1205 21:42:11.544588  357296 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:42:11.555107  357296 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.8
	I1205 21:42:11.555149  357296 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:42:11.555164  357296 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:42:11.555214  357296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:42:11.592787  357296 cri.go:89] found id: ""
	I1205 21:42:11.592880  357296 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:42:11.609965  357296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:42:11.623705  357296 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:42:11.623730  357296 kubeadm.go:157] found existing configuration files:
	
	I1205 21:42:11.623784  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:42:11.634267  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:42:11.634344  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:42:11.645579  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:42:11.655845  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:42:11.655932  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:42:11.667367  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:42:11.677450  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:42:11.677541  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:42:11.688484  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:42:11.698581  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:42:11.698665  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:42:11.709332  357296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:42:11.724079  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:11.850526  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:09.436733  357831 pod_ready.go:93] pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:09.436771  357831 pod_ready.go:82] duration metric: took 4.006772842s for pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:09.436787  357831 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:09.442948  357831 pod_ready.go:93] pod "etcd-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:09.442975  357831 pod_ready.go:82] duration metric: took 6.180027ms for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:09.442985  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:11.454117  357831 pod_ready.go:103] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:12.400229  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:14.401251  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:10.940676  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:11.441446  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:11.941429  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:12.441431  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:12.940947  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:13.441378  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:13.940664  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.441436  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.941528  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:15.441617  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:12.676884  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:13.049350  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:13.104083  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:13.151758  357296 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:42:13.151871  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:13.653003  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.152424  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.241811  357296 api_server.go:72] duration metric: took 1.09005484s to wait for apiserver process to appear ...
	I1205 21:42:14.241841  357296 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:42:14.241865  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:14.242492  357296 api_server.go:269] stopped: https://192.168.72.8:8443/healthz: Get "https://192.168.72.8:8443/healthz": dial tcp 192.168.72.8:8443: connect: connection refused
	I1205 21:42:14.742031  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:16.675226  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:16.675262  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:16.675277  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:16.689093  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:16.689130  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:16.742350  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:16.780046  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:16.780094  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:17.242752  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:17.248221  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:17.248293  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:13.807623  357831 pod_ready.go:103] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:13.955657  357831 pod_ready.go:93] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:13.955696  357831 pod_ready.go:82] duration metric: took 4.512701293s for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:13.955710  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:15.964035  357831 pod_ready.go:103] pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:17.464364  357831 pod_ready.go:93] pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:17.464400  357831 pod_ready.go:82] duration metric: took 3.508681036s for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.464416  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tbcmd" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.471083  357831 pod_ready.go:93] pod "kube-proxy-tbcmd" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:17.471112  357831 pod_ready.go:82] duration metric: took 6.68764ms for pod "kube-proxy-tbcmd" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.471127  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.477759  357831 pod_ready.go:93] pod "kube-scheduler-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:17.477792  357831 pod_ready.go:82] duration metric: took 6.655537ms for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.477805  357831 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.742750  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:17.750907  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:17.750945  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:18.242675  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:18.247883  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:18.247913  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:18.742494  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:18.748060  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:18.748095  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:19.242753  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:19.247456  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:19.247493  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:19.742029  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:19.747799  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:19.747830  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:20.242351  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:20.248627  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 200:
	ok
	I1205 21:42:20.257222  357296 api_server.go:141] control plane version: v1.31.2
	I1205 21:42:20.257260  357296 api_server.go:131] duration metric: took 6.015411765s to wait for apiserver health ...
	I1205 21:42:20.257273  357296 cni.go:84] Creating CNI manager for ""
	I1205 21:42:20.257281  357296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:42:20.259099  357296 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
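
The api_server.go lines above poll https://192.168.72.8:8443/healthz, first getting connection refused, then 403, then 500 while post-start hooks finish, and finally 200 at 21:42:20. A minimal Go sketch of such a polling loop follows; it is hypothetical and not the api_server.go implementation, and the URL, overall timeout, and retry interval are assumptions.

// Hypothetical sketch: poll an apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// InsecureSkipVerify only because this sketch configures no CA bundle.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.72.8:8443/healthz")
		if err != nil {
			fmt.Printf("healthz not reachable yet: %v\n", err)
		} else {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthz returned 200")
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver healthz")
}
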
	I1205 21:42:16.899464  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:19.400536  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:15.940894  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:16.441373  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:16.940607  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:17.441640  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:17.941424  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:18.441485  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:18.941548  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:19.441297  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:19.940718  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:20.441175  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:20.260397  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:42:20.271889  357296 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 21:42:20.291125  357296 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:42:20.300276  357296 system_pods.go:59] 8 kube-system pods found
	I1205 21:42:20.300328  357296 system_pods.go:61] "coredns-7c65d6cfc9-kjcf8" [7a73d409-50b8-4e9c-a84d-bb497c6f068c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 21:42:20.300337  357296 system_pods.go:61] "etcd-embed-certs-425614" [39067a54-9f4e-4ce5-b48f-0d442a332902] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 21:42:20.300346  357296 system_pods.go:61] "kube-apiserver-embed-certs-425614" [cc3f918c-a257-4135-a5dd-af78e60bbf90] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 21:42:20.300352  357296 system_pods.go:61] "kube-controller-manager-embed-certs-425614" [bbcf99e6-54f9-44f5-a484-26997a4e5941] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 21:42:20.300359  357296 system_pods.go:61] "kube-proxy-jflgx" [77b6325b-0db8-41de-8c7e-6111d155704d] Running
	I1205 21:42:20.300366  357296 system_pods.go:61] "kube-scheduler-embed-certs-425614" [0615aea3-8e2c-4329-b89f-02c7fe9f6f7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 21:42:20.300377  357296 system_pods.go:61] "metrics-server-6867b74b74-dggmv" [c53aecb9-98a5-481a-84f3-96fd18815e14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:42:20.300380  357296 system_pods.go:61] "storage-provisioner" [d43b05e9-7ab8-4326-93b4-177aeb5ba02e] Running
	I1205 21:42:20.300388  357296 system_pods.go:74] duration metric: took 9.233104ms to wait for pod list to return data ...
	I1205 21:42:20.300396  357296 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:42:20.304455  357296 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:42:20.304484  357296 node_conditions.go:123] node cpu capacity is 2
	I1205 21:42:20.304498  357296 node_conditions.go:105] duration metric: took 4.096074ms to run NodePressure ...
	I1205 21:42:20.304519  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:20.571968  357296 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 21:42:20.577704  357296 kubeadm.go:739] kubelet initialised
	I1205 21:42:20.577730  357296 kubeadm.go:740] duration metric: took 5.727858ms waiting for restarted kubelet to initialise ...
	I1205 21:42:20.577741  357296 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:42:20.583872  357296 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.589835  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.589866  357296 pod_ready.go:82] duration metric: took 5.957984ms for pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.589878  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.589886  357296 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.596004  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "etcd-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.596038  357296 pod_ready.go:82] duration metric: took 6.144722ms for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.596049  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "etcd-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.596056  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.601686  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.601720  357296 pod_ready.go:82] duration metric: took 5.653369ms for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.601734  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.601742  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.694482  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.694515  357296 pod_ready.go:82] duration metric: took 92.763219ms for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.694524  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.694531  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jflgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:21.094672  357296 pod_ready.go:93] pod "kube-proxy-jflgx" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:21.094703  357296 pod_ready.go:82] duration metric: took 400.158324ms for pod "kube-proxy-jflgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:21.094714  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:19.485441  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:21.984845  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:21.900464  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:24.399362  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:20.941042  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:21.440840  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:21.941291  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:22.441298  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:22.941140  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:23.441157  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:23.940711  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:24.441126  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:24.941194  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:25.441239  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:23.101967  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:25.103066  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:27.103106  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:23.985150  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:25.985406  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:26.399494  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:28.399742  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:25.940650  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:26.440892  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:26.940734  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:27.441439  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:27.941025  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:28.441662  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:28.941200  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:29.440850  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:29.941090  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:30.441496  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:29.106277  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.101137  357296 pod_ready.go:93] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:30.101170  357296 pod_ready.go:82] duration metric: took 9.00644797s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:30.101199  357296 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:32.107886  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:27.985689  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.484153  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:32.484800  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.399854  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:32.400508  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:34.901319  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.941631  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:31.441522  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:31.940961  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:32.441547  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:32.940644  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:33.440711  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:33.941591  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:34.441457  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:34.941255  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:35.441478  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:34.108645  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:36.608124  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:34.984686  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:36.984823  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:37.400319  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:39.900110  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:35.941404  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:36.441453  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:36.941276  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:37.440624  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:37.941248  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:38.440773  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:38.940852  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:39.440975  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:39.940613  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:40.441409  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:38.608300  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:40.608878  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:39.483667  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:41.483884  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:41.900531  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:43.900867  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:40.941065  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:41.440940  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:41.941340  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:42.441333  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:42.941444  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:43.440657  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:43.941351  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:44.441039  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:44.941628  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:45.440942  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:43.107571  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:45.107803  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:47.108118  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:43.484581  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:45.485934  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:46.400053  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:48.902975  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:45.941474  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:46.441502  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:46.941071  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:47.441501  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:47.941353  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:48.441574  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:48.940650  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:49.441259  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:49.941249  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:50.441304  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:49.608563  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:52.108228  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:47.992612  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:50.484515  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:52.484930  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:51.399905  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:53.400794  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:50.941158  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:51.440651  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:51.941062  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:52.441434  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:52.940665  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:53.441387  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:53.940784  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:54.441549  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:54.941564  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:55.441202  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:42:55.441294  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:42:55.475973  358357 cri.go:89] found id: ""
	I1205 21:42:55.476011  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.476023  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:42:55.476032  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:42:55.476106  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:42:55.511119  358357 cri.go:89] found id: ""
	I1205 21:42:55.511149  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.511158  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:42:55.511164  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:42:55.511238  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:42:55.544659  358357 cri.go:89] found id: ""
	I1205 21:42:55.544700  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.544716  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:42:55.544726  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:42:55.544803  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:42:54.608219  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:57.107753  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:54.986439  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:57.484521  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:55.900101  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:58.399595  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:55.579789  358357 cri.go:89] found id: ""
	I1205 21:42:55.579826  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.579836  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:42:55.579843  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:42:55.579912  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:42:55.615309  358357 cri.go:89] found id: ""
	I1205 21:42:55.615348  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.615363  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:42:55.615371  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:42:55.615444  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:42:55.649520  358357 cri.go:89] found id: ""
	I1205 21:42:55.649551  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.649562  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:42:55.649569  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:42:55.649647  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:42:55.688086  358357 cri.go:89] found id: ""
	I1205 21:42:55.688120  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.688132  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:42:55.688139  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:42:55.688207  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:42:55.722901  358357 cri.go:89] found id: ""
	I1205 21:42:55.722932  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.722943  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:42:55.722955  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:42:55.722968  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:42:55.775746  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:42:55.775792  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:42:55.790317  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:42:55.790370  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:42:55.916541  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:42:55.916593  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:42:55.916608  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:42:55.991284  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:42:55.991350  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:42:58.534040  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:58.551747  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:42:58.551856  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:42:58.602423  358357 cri.go:89] found id: ""
	I1205 21:42:58.602465  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.602478  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:42:58.602493  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:42:58.602570  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:42:58.658410  358357 cri.go:89] found id: ""
	I1205 21:42:58.658442  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.658454  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:42:58.658462  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:42:58.658544  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:42:58.696967  358357 cri.go:89] found id: ""
	I1205 21:42:58.697005  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.697024  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:42:58.697032  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:42:58.697092  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:42:58.740924  358357 cri.go:89] found id: ""
	I1205 21:42:58.740958  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.740969  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:42:58.740977  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:42:58.741049  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:42:58.775613  358357 cri.go:89] found id: ""
	I1205 21:42:58.775656  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.775669  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:42:58.775677  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:42:58.775753  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:42:58.810565  358357 cri.go:89] found id: ""
	I1205 21:42:58.810606  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.810621  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:42:58.810630  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:42:58.810704  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:42:58.844616  358357 cri.go:89] found id: ""
	I1205 21:42:58.844649  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.844658  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:42:58.844664  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:42:58.844720  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:42:58.889234  358357 cri.go:89] found id: ""
	I1205 21:42:58.889270  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.889282  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:42:58.889297  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:42:58.889313  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:42:58.964712  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:42:58.964756  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:42:59.005004  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:42:59.005036  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:42:59.057585  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:42:59.057635  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:42:59.072115  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:42:59.072151  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:42:59.145425  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:42:59.108534  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:01.607610  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:59.485366  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:01.986049  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:00.400127  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:02.400257  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:04.899587  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:01.646046  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:01.659425  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:01.659517  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:01.695527  358357 cri.go:89] found id: ""
	I1205 21:43:01.695559  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.695568  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:01.695574  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:01.695636  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:01.731808  358357 cri.go:89] found id: ""
	I1205 21:43:01.731842  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.731854  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:01.731861  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:01.731937  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:01.765738  358357 cri.go:89] found id: ""
	I1205 21:43:01.765771  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.765789  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:01.765796  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:01.765859  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:01.801611  358357 cri.go:89] found id: ""
	I1205 21:43:01.801647  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.801657  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:01.801665  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:01.801732  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:01.839276  358357 cri.go:89] found id: ""
	I1205 21:43:01.839308  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.839317  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:01.839323  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:01.839385  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:01.875227  358357 cri.go:89] found id: ""
	I1205 21:43:01.875266  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.875279  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:01.875288  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:01.875350  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:01.913182  358357 cri.go:89] found id: ""
	I1205 21:43:01.913225  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.913238  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:01.913247  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:01.913312  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:01.952638  358357 cri.go:89] found id: ""
	I1205 21:43:01.952677  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.952701  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:01.952716  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:01.952734  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:01.998360  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:01.998401  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:02.049534  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:02.049588  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:02.064358  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:02.064389  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:02.136029  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:02.136060  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:02.136077  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:04.719271  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:04.735387  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:04.735490  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:04.769540  358357 cri.go:89] found id: ""
	I1205 21:43:04.769578  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.769590  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:04.769598  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:04.769679  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:04.803402  358357 cri.go:89] found id: ""
	I1205 21:43:04.803444  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.803460  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:04.803470  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:04.803538  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:04.839694  358357 cri.go:89] found id: ""
	I1205 21:43:04.839725  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.839739  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:04.839748  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:04.839820  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:04.874952  358357 cri.go:89] found id: ""
	I1205 21:43:04.874982  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.875001  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:04.875022  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:04.875086  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:04.910338  358357 cri.go:89] found id: ""
	I1205 21:43:04.910378  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.910390  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:04.910399  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:04.910464  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:04.946196  358357 cri.go:89] found id: ""
	I1205 21:43:04.946233  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.946245  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:04.946252  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:04.946319  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:04.982119  358357 cri.go:89] found id: ""
	I1205 21:43:04.982150  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.982164  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:04.982173  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:04.982245  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:05.018296  358357 cri.go:89] found id: ""
	I1205 21:43:05.018334  358357 logs.go:282] 0 containers: []
	W1205 21:43:05.018346  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:05.018359  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:05.018376  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:05.070674  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:05.070729  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:05.085822  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:05.085858  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:05.163359  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:05.163385  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:05.163400  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:05.243524  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:05.243581  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:03.608201  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:06.108243  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:03.992084  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:06.487041  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:06.900400  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:09.400212  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:07.785152  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:07.799248  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:07.799327  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:07.836150  358357 cri.go:89] found id: ""
	I1205 21:43:07.836204  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.836215  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:07.836222  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:07.836287  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:07.873025  358357 cri.go:89] found id: ""
	I1205 21:43:07.873059  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.873068  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:07.873074  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:07.873133  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:07.913228  358357 cri.go:89] found id: ""
	I1205 21:43:07.913257  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.913266  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:07.913272  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:07.913332  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:07.953284  358357 cri.go:89] found id: ""
	I1205 21:43:07.953316  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.953327  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:07.953337  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:07.953405  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:07.990261  358357 cri.go:89] found id: ""
	I1205 21:43:07.990295  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.990308  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:07.990317  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:07.990414  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:08.032002  358357 cri.go:89] found id: ""
	I1205 21:43:08.032029  358357 logs.go:282] 0 containers: []
	W1205 21:43:08.032037  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:08.032043  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:08.032095  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:08.066422  358357 cri.go:89] found id: ""
	I1205 21:43:08.066456  358357 logs.go:282] 0 containers: []
	W1205 21:43:08.066464  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:08.066471  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:08.066526  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:08.103696  358357 cri.go:89] found id: ""
	I1205 21:43:08.103732  358357 logs.go:282] 0 containers: []
	W1205 21:43:08.103745  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:08.103757  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:08.103793  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:08.157218  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:08.157264  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:08.172145  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:08.172191  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:08.247452  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:08.247479  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:08.247493  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:08.326928  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:08.326972  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:08.111002  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:10.608479  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:08.985124  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:10.985701  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:11.400591  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:13.898978  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:10.866350  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:10.880013  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:10.880084  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:10.914657  358357 cri.go:89] found id: ""
	I1205 21:43:10.914698  358357 logs.go:282] 0 containers: []
	W1205 21:43:10.914712  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:10.914721  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:10.914780  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:10.950154  358357 cri.go:89] found id: ""
	I1205 21:43:10.950187  358357 logs.go:282] 0 containers: []
	W1205 21:43:10.950196  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:10.950203  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:10.950267  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:10.985474  358357 cri.go:89] found id: ""
	I1205 21:43:10.985508  358357 logs.go:282] 0 containers: []
	W1205 21:43:10.985520  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:10.985528  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:10.985602  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:11.021324  358357 cri.go:89] found id: ""
	I1205 21:43:11.021352  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.021361  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:11.021367  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:11.021429  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:11.056112  358357 cri.go:89] found id: ""
	I1205 21:43:11.056140  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.056149  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:11.056155  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:11.056210  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:11.090696  358357 cri.go:89] found id: ""
	I1205 21:43:11.090729  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.090739  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:11.090746  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:11.090809  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:11.126706  358357 cri.go:89] found id: ""
	I1205 21:43:11.126741  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.126754  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:11.126762  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:11.126832  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:11.162759  358357 cri.go:89] found id: ""
	I1205 21:43:11.162790  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.162800  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:11.162812  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:11.162827  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:11.215941  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:11.215995  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:11.229338  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:11.229378  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:11.300339  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:11.300373  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:11.300389  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:11.378797  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:11.378852  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:13.919092  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:13.935332  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:13.935418  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:13.970759  358357 cri.go:89] found id: ""
	I1205 21:43:13.970790  358357 logs.go:282] 0 containers: []
	W1205 21:43:13.970802  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:13.970810  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:13.970879  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:14.017105  358357 cri.go:89] found id: ""
	I1205 21:43:14.017140  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.017152  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:14.017159  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:14.017228  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:14.056797  358357 cri.go:89] found id: ""
	I1205 21:43:14.056831  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.056843  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:14.056850  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:14.056922  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:14.090687  358357 cri.go:89] found id: ""
	I1205 21:43:14.090727  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.090740  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:14.090747  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:14.090808  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:14.128280  358357 cri.go:89] found id: ""
	I1205 21:43:14.128320  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.128333  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:14.128341  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:14.128410  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:14.167386  358357 cri.go:89] found id: ""
	I1205 21:43:14.167420  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.167428  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:14.167435  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:14.167498  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:14.203376  358357 cri.go:89] found id: ""
	I1205 21:43:14.203408  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.203419  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:14.203427  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:14.203495  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:14.238271  358357 cri.go:89] found id: ""
	I1205 21:43:14.238308  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.238319  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:14.238333  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:14.238353  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:14.290565  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:14.290609  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:14.305062  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:14.305106  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:14.375343  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:14.375375  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:14.375392  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:14.456771  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:14.456826  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:13.107746  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:15.607571  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:13.484545  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:15.485414  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:15.899518  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:17.900034  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:16.997441  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:17.011258  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:17.011344  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:17.045557  358357 cri.go:89] found id: ""
	I1205 21:43:17.045599  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.045613  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:17.045623  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:17.045689  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:17.080094  358357 cri.go:89] found id: ""
	I1205 21:43:17.080131  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.080144  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:17.080152  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:17.080228  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:17.113336  358357 cri.go:89] found id: ""
	I1205 21:43:17.113375  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.113387  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:17.113396  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:17.113461  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:17.147392  358357 cri.go:89] found id: ""
	I1205 21:43:17.147431  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.147443  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:17.147452  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:17.147521  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:17.182308  358357 cri.go:89] found id: ""
	I1205 21:43:17.182359  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.182370  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:17.182376  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:17.182443  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:17.216848  358357 cri.go:89] found id: ""
	I1205 21:43:17.216886  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.216917  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:17.216926  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:17.216999  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:17.251515  358357 cri.go:89] found id: ""
	I1205 21:43:17.251553  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.251565  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:17.251573  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:17.251645  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:17.284664  358357 cri.go:89] found id: ""
	I1205 21:43:17.284691  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.284700  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:17.284711  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:17.284723  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:17.335642  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:17.335685  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:17.349100  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:17.349133  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:17.427338  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:17.427362  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:17.427378  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:17.507314  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:17.507366  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:20.049650  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:20.063058  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:20.063152  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:20.096637  358357 cri.go:89] found id: ""
	I1205 21:43:20.096674  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.096687  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:20.096696  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:20.096761  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:20.134010  358357 cri.go:89] found id: ""
	I1205 21:43:20.134041  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.134054  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:20.134061  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:20.134128  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:20.173232  358357 cri.go:89] found id: ""
	I1205 21:43:20.173272  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.173292  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:20.173301  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:20.173374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:20.208411  358357 cri.go:89] found id: ""
	I1205 21:43:20.208441  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.208451  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:20.208457  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:20.208515  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:20.244682  358357 cri.go:89] found id: ""
	I1205 21:43:20.244715  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.244729  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:20.244737  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:20.244835  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:20.278659  358357 cri.go:89] found id: ""
	I1205 21:43:20.278692  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.278701  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:20.278708  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:20.278773  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:20.313894  358357 cri.go:89] found id: ""
	I1205 21:43:20.313963  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.313978  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:20.313986  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:20.314049  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:20.351924  358357 cri.go:89] found id: ""
	I1205 21:43:20.351957  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.351966  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:20.351976  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:20.351992  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:20.365712  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:20.365752  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:20.448062  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:20.448096  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:20.448115  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:20.530550  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:20.530593  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:17.611740  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:20.107637  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:22.108801  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:17.985246  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:19.985378  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:22.484721  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:20.400560  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:22.400956  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:24.899642  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:20.573612  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:20.573644  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:23.128630  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:23.141915  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:23.141991  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:23.177986  358357 cri.go:89] found id: ""
	I1205 21:43:23.178024  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.178033  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:23.178040  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:23.178104  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:23.211957  358357 cri.go:89] found id: ""
	I1205 21:43:23.211995  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.212005  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:23.212016  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:23.212075  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:23.247747  358357 cri.go:89] found id: ""
	I1205 21:43:23.247775  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.247783  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:23.247789  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:23.247847  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:23.282556  358357 cri.go:89] found id: ""
	I1205 21:43:23.282602  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.282616  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:23.282624  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:23.282689  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:23.317629  358357 cri.go:89] found id: ""
	I1205 21:43:23.317661  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.317670  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:23.317676  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:23.317749  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:23.352085  358357 cri.go:89] found id: ""
	I1205 21:43:23.352114  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.352123  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:23.352130  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:23.352190  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:23.391452  358357 cri.go:89] found id: ""
	I1205 21:43:23.391483  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.391495  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:23.391503  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:23.391587  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:23.427325  358357 cri.go:89] found id: ""
	I1205 21:43:23.427361  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.427370  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:23.427380  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:23.427395  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:23.502923  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:23.502954  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:23.502970  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:23.588869  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:23.588918  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:23.626986  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:23.627029  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:23.677290  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:23.677343  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:24.607867  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.609049  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:24.484755  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.486039  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.899834  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:29.400266  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.191893  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:26.206289  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:26.206376  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:26.244696  358357 cri.go:89] found id: ""
	I1205 21:43:26.244726  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.244739  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:26.244748  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:26.244818  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:26.277481  358357 cri.go:89] found id: ""
	I1205 21:43:26.277509  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.277519  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:26.277526  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:26.277602  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:26.312648  358357 cri.go:89] found id: ""
	I1205 21:43:26.312771  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.312807  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:26.312819  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:26.312897  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:26.348986  358357 cri.go:89] found id: ""
	I1205 21:43:26.349017  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.349026  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:26.349034  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:26.349111  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:26.382552  358357 cri.go:89] found id: ""
	I1205 21:43:26.382582  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.382591  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:26.382597  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:26.382667  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:26.419741  358357 cri.go:89] found id: ""
	I1205 21:43:26.419780  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.419791  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:26.419798  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:26.419860  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:26.458604  358357 cri.go:89] found id: ""
	I1205 21:43:26.458639  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.458649  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:26.458656  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:26.458716  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:26.492547  358357 cri.go:89] found id: ""
	I1205 21:43:26.492575  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.492589  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:26.492600  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:26.492614  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:26.543734  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:26.543784  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:26.557495  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:26.557529  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:26.632104  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:26.632135  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:26.632155  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:26.711876  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:26.711929  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:29.251703  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:29.265023  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:29.265108  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:29.301837  358357 cri.go:89] found id: ""
	I1205 21:43:29.301875  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.301910  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:29.301922  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:29.301994  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:29.335968  358357 cri.go:89] found id: ""
	I1205 21:43:29.336001  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.336015  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:29.336024  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:29.336090  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:29.370471  358357 cri.go:89] found id: ""
	I1205 21:43:29.370500  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.370512  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:29.370521  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:29.370585  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:29.406408  358357 cri.go:89] found id: ""
	I1205 21:43:29.406443  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.406456  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:29.406464  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:29.406537  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:29.442657  358357 cri.go:89] found id: ""
	I1205 21:43:29.442689  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.442700  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:29.442708  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:29.442776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:29.485257  358357 cri.go:89] found id: ""
	I1205 21:43:29.485291  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.485302  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:29.485311  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:29.485374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:29.520186  358357 cri.go:89] found id: ""
	I1205 21:43:29.520218  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.520229  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:29.520238  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:29.520312  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:29.555875  358357 cri.go:89] found id: ""
	I1205 21:43:29.555908  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.555920  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:29.555931  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:29.555949  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:29.569277  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:29.569312  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:29.643777  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:29.643810  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:29.643828  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:29.721856  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:29.721932  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:29.763402  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:29.763437  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:29.108987  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:31.608186  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:28.486609  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:30.985559  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:31.899471  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:34.399084  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:32.316122  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:32.329958  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:32.330122  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:32.362518  358357 cri.go:89] found id: ""
	I1205 21:43:32.362562  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.362575  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:32.362585  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:32.362655  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:32.396558  358357 cri.go:89] found id: ""
	I1205 21:43:32.396650  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.396668  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:32.396683  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:32.396759  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:32.430931  358357 cri.go:89] found id: ""
	I1205 21:43:32.430958  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.430966  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:32.430972  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:32.431025  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:32.468557  358357 cri.go:89] found id: ""
	I1205 21:43:32.468597  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.468607  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:32.468613  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:32.468698  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:32.503548  358357 cri.go:89] found id: ""
	I1205 21:43:32.503586  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.503599  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:32.503608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:32.503680  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:32.538516  358357 cri.go:89] found id: ""
	I1205 21:43:32.538559  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.538573  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:32.538582  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:32.538658  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:32.570768  358357 cri.go:89] found id: ""
	I1205 21:43:32.570804  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.570817  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:32.570886  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:32.570963  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:32.604812  358357 cri.go:89] found id: ""
	I1205 21:43:32.604851  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.604864  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:32.604876  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:32.604899  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:32.667787  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:32.667831  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:32.681437  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:32.681472  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:32.761208  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:32.761235  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:32.761249  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:32.844838  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:32.844882  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:35.386488  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:35.401884  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:35.401987  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:35.437976  358357 cri.go:89] found id: ""
	I1205 21:43:35.438007  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.438017  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:35.438023  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:35.438089  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:35.478157  358357 cri.go:89] found id: ""
	I1205 21:43:35.478202  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.478214  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:35.478222  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:35.478292  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:35.516671  358357 cri.go:89] found id: ""
	I1205 21:43:35.516717  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.516731  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:35.516805  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:35.516897  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:35.551255  358357 cri.go:89] found id: ""
	I1205 21:43:35.551284  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.551295  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:35.551302  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:35.551357  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:34.108153  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:36.108668  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:32.986075  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:35.484135  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:37.485074  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:36.399714  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:38.900550  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:35.588294  358357 cri.go:89] found id: ""
	I1205 21:43:35.588325  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.588334  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:35.588341  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:35.588405  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:35.622659  358357 cri.go:89] found id: ""
	I1205 21:43:35.622691  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.622700  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:35.622707  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:35.622774  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:35.656864  358357 cri.go:89] found id: ""
	I1205 21:43:35.656893  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.656901  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:35.656908  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:35.656961  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:35.697507  358357 cri.go:89] found id: ""
	I1205 21:43:35.697554  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.697567  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:35.697579  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:35.697599  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:35.745717  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:35.745758  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:35.759004  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:35.759036  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:35.828958  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:35.828992  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:35.829010  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:35.905023  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:35.905063  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:38.445492  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:38.459922  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:38.460006  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:38.495791  358357 cri.go:89] found id: ""
	I1205 21:43:38.495829  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.495840  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:38.495849  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:38.495918  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:38.530056  358357 cri.go:89] found id: ""
	I1205 21:43:38.530088  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.530097  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:38.530104  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:38.530177  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:38.566865  358357 cri.go:89] found id: ""
	I1205 21:43:38.566896  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.566905  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:38.566912  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:38.566983  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:38.600870  358357 cri.go:89] found id: ""
	I1205 21:43:38.600905  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.600918  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:38.600926  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:38.600995  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:38.639270  358357 cri.go:89] found id: ""
	I1205 21:43:38.639308  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.639317  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:38.639324  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:38.639395  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:38.678671  358357 cri.go:89] found id: ""
	I1205 21:43:38.678720  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.678736  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:38.678745  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:38.678812  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:38.715126  358357 cri.go:89] found id: ""
	I1205 21:43:38.715160  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.715169  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:38.715176  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:38.715236  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:38.750621  358357 cri.go:89] found id: ""
	I1205 21:43:38.750660  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.750674  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:38.750688  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:38.750706  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:38.801336  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:38.801386  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:38.817206  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:38.817243  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:38.899496  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:38.899526  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:38.899542  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:38.987043  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:38.987096  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:38.608744  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.107606  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:39.486171  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.984199  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.400104  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:43.898622  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.535073  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:41.550469  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:41.550543  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:41.591727  358357 cri.go:89] found id: ""
	I1205 21:43:41.591768  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.591781  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:41.591790  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:41.591861  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:41.628657  358357 cri.go:89] found id: ""
	I1205 21:43:41.628691  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.628703  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:41.628711  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:41.628782  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:41.674165  358357 cri.go:89] found id: ""
	I1205 21:43:41.674210  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.674224  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:41.674238  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:41.674318  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:41.713785  358357 cri.go:89] found id: ""
	I1205 21:43:41.713836  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.713856  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:41.713866  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:41.713959  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:41.752119  358357 cri.go:89] found id: ""
	I1205 21:43:41.752152  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.752162  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:41.752169  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:41.752224  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:41.787379  358357 cri.go:89] found id: ""
	I1205 21:43:41.787414  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.787427  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:41.787439  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:41.787517  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:41.827473  358357 cri.go:89] found id: ""
	I1205 21:43:41.827505  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.827516  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:41.827523  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:41.827580  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:41.864685  358357 cri.go:89] found id: ""
	I1205 21:43:41.864724  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.864737  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:41.864750  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:41.864767  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:41.919751  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:41.919797  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:41.933494  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:41.933527  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:42.007384  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:42.007478  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:42.007516  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:42.085929  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:42.085974  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:44.625416  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:44.640399  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:44.640466  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:44.676232  358357 cri.go:89] found id: ""
	I1205 21:43:44.676279  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.676292  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:44.676302  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:44.676386  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:44.714304  358357 cri.go:89] found id: ""
	I1205 21:43:44.714345  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.714358  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:44.714368  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:44.714438  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:44.748091  358357 cri.go:89] found id: ""
	I1205 21:43:44.748130  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.748141  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:44.748149  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:44.748225  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:44.789620  358357 cri.go:89] found id: ""
	I1205 21:43:44.789712  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.789737  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:44.789746  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:44.789808  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:44.829941  358357 cri.go:89] found id: ""
	I1205 21:43:44.829987  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.829999  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:44.830008  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:44.830080  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:44.876378  358357 cri.go:89] found id: ""
	I1205 21:43:44.876412  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.876424  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:44.876433  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:44.876503  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:44.913556  358357 cri.go:89] found id: ""
	I1205 21:43:44.913590  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.913602  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:44.913610  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:44.913676  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:44.947592  358357 cri.go:89] found id: ""
	I1205 21:43:44.947625  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.947634  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:44.947643  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:44.947658  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:44.960447  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:44.960478  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:45.035679  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:45.035716  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:45.035731  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:45.115015  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:45.115055  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:45.152866  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:45.152901  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:43.108800  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:45.109600  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:44.483302  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:46.484569  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:45.899283  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:47.900475  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:47.703949  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:47.717705  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:47.717775  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:47.753877  358357 cri.go:89] found id: ""
	I1205 21:43:47.753920  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.753933  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:47.753946  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:47.754006  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:47.790673  358357 cri.go:89] found id: ""
	I1205 21:43:47.790707  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.790718  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:47.790725  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:47.790784  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:47.829957  358357 cri.go:89] found id: ""
	I1205 21:43:47.829999  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.830013  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:47.830021  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:47.830094  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:47.869182  358357 cri.go:89] found id: ""
	I1205 21:43:47.869221  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.869235  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:47.869251  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:47.869337  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:47.906549  358357 cri.go:89] found id: ""
	I1205 21:43:47.906582  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.906592  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:47.906598  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:47.906674  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:47.944594  358357 cri.go:89] found id: ""
	I1205 21:43:47.944622  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.944631  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:47.944637  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:47.944699  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:47.981461  358357 cri.go:89] found id: ""
	I1205 21:43:47.981499  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.981512  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:47.981520  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:47.981593  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:48.016561  358357 cri.go:89] found id: ""
	I1205 21:43:48.016597  358357 logs.go:282] 0 containers: []
	W1205 21:43:48.016607  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:48.016617  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:48.016631  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:48.097690  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:48.097740  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:48.140272  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:48.140318  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:48.194365  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:48.194415  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:48.208715  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:48.208750  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:48.283159  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
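	Each retry cycle in the log above follows the same pattern: minikube checks whether a kube-apiserver process exists, asks CRI-O for containers of every control-plane component (all queries return empty), and then falls back to collecting kubelet, dmesg, CRI-O and container-status logs; `kubectl describe nodes` keeps failing because nothing is serving on localhost:8443. For reference, the probes can be re-run by hand on the node — the commands below are the ones shown in the log itself (the kubectl path is the v1.20.0 binary minikube installed for this profile):

	    # check for a running apiserver process
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	    # list CRI-O containers for one component; empty output means none exist
	    sudo crictl ps -a --quiet --name=kube-apiserver

	    # the log sources minikube falls back to
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u crio -n 400

	    # fails with "connection refused" while the apiserver is down
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig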
	I1205 21:43:47.607945  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.108918  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:48.984798  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.986257  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.399207  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:52.899857  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:54.899976  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
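	The interleaved pod_ready.go:103 lines above come from other minikube processes (357296, 357831, 357912) running parallel test profiles, each polling the Ready condition of its metrics-server pod. The same condition can be inspected directly with kubectl; a minimal sketch against one of the pods named in the log, assuming the corresponding profile's kube context is selected:

	    kubectl -n kube-system get pod metrics-server-6867b74b74-xb867 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # prints "False" while the pod is not ready, matching the log lines above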
	I1205 21:43:50.784026  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:50.812440  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:50.812524  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:50.866971  358357 cri.go:89] found id: ""
	I1205 21:43:50.867009  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.867022  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:50.867030  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:50.867100  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:50.910640  358357 cri.go:89] found id: ""
	I1205 21:43:50.910675  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.910686  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:50.910692  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:50.910767  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:50.944766  358357 cri.go:89] found id: ""
	I1205 21:43:50.944795  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.944803  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:50.944811  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:50.944880  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:50.978126  358357 cri.go:89] found id: ""
	I1205 21:43:50.978167  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.978178  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:50.978185  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:50.978250  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:51.015639  358357 cri.go:89] found id: ""
	I1205 21:43:51.015682  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.015693  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:51.015700  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:51.015776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:51.050114  358357 cri.go:89] found id: ""
	I1205 21:43:51.050156  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.050166  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:51.050180  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:51.050244  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:51.088492  358357 cri.go:89] found id: ""
	I1205 21:43:51.088523  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.088533  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:51.088540  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:51.088599  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:51.125732  358357 cri.go:89] found id: ""
	I1205 21:43:51.125768  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.125778  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:51.125789  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:51.125803  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:51.178278  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:51.178325  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:51.192954  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:51.192990  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:51.263378  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:51.263403  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:51.263416  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:51.341416  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:51.341463  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:53.882599  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:53.895846  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:53.895961  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:53.929422  358357 cri.go:89] found id: ""
	I1205 21:43:53.929465  358357 logs.go:282] 0 containers: []
	W1205 21:43:53.929480  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:53.929490  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:53.929568  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:53.965935  358357 cri.go:89] found id: ""
	I1205 21:43:53.965976  358357 logs.go:282] 0 containers: []
	W1205 21:43:53.965990  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:53.966001  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:53.966075  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:54.011360  358357 cri.go:89] found id: ""
	I1205 21:43:54.011394  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.011406  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:54.011412  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:54.011483  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:54.049333  358357 cri.go:89] found id: ""
	I1205 21:43:54.049368  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.049377  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:54.049385  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:54.049445  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:54.087228  358357 cri.go:89] found id: ""
	I1205 21:43:54.087266  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.087279  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:54.087287  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:54.087348  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:54.122795  358357 cri.go:89] found id: ""
	I1205 21:43:54.122832  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.122845  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:54.122853  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:54.122914  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:54.157622  358357 cri.go:89] found id: ""
	I1205 21:43:54.157657  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.157666  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:54.157672  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:54.157734  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:54.195574  358357 cri.go:89] found id: ""
	I1205 21:43:54.195610  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.195624  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:54.195638  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:54.195659  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:54.235353  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:54.235403  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:54.292275  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:54.292338  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:54.306808  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:54.306842  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:54.380414  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:54.380440  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:54.380455  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:52.608190  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:54.609219  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:57.109413  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:53.484775  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:55.985011  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:57.402445  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:59.900093  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:56.956848  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:56.969840  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:56.969954  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:57.004299  358357 cri.go:89] found id: ""
	I1205 21:43:57.004405  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.004426  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:57.004434  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:57.004510  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:57.039150  358357 cri.go:89] found id: ""
	I1205 21:43:57.039176  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.039185  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:57.039192  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:57.039245  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:57.075259  358357 cri.go:89] found id: ""
	I1205 21:43:57.075299  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.075313  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:57.075331  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:57.075407  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:57.111445  358357 cri.go:89] found id: ""
	I1205 21:43:57.111474  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.111492  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:57.111500  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:57.111580  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:57.152495  358357 cri.go:89] found id: ""
	I1205 21:43:57.152527  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.152536  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:57.152548  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:57.152606  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:57.188070  358357 cri.go:89] found id: ""
	I1205 21:43:57.188106  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.188119  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:57.188126  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:57.188198  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:57.222213  358357 cri.go:89] found id: ""
	I1205 21:43:57.222245  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.222260  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:57.222268  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:57.222354  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:57.254072  358357 cri.go:89] found id: ""
	I1205 21:43:57.254101  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.254110  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:57.254120  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:57.254136  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:57.307411  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:57.307456  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:57.323095  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:57.323130  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:57.400894  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:57.400928  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:57.400951  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:57.479628  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:57.479670  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:00.018936  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:00.032067  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:00.032149  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:00.065807  358357 cri.go:89] found id: ""
	I1205 21:44:00.065835  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.065844  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:00.065851  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:00.065931  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:00.100810  358357 cri.go:89] found id: ""
	I1205 21:44:00.100839  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.100847  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:00.100854  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:00.100920  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:00.136341  358357 cri.go:89] found id: ""
	I1205 21:44:00.136375  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.136388  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:00.136396  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:00.136454  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:00.173170  358357 cri.go:89] found id: ""
	I1205 21:44:00.173206  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.173227  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:00.173235  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:00.173332  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:00.208319  358357 cri.go:89] found id: ""
	I1205 21:44:00.208351  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.208363  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:00.208371  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:00.208438  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:00.250416  358357 cri.go:89] found id: ""
	I1205 21:44:00.250449  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.250463  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:00.250474  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:00.250546  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:00.285170  358357 cri.go:89] found id: ""
	I1205 21:44:00.285200  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.285212  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:00.285221  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:00.285290  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:00.320837  358357 cri.go:89] found id: ""
	I1205 21:44:00.320870  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.320879  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:00.320889  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:00.320901  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:00.334341  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:00.334375  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:00.400547  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:00.400575  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:00.400592  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:00.476133  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:00.476181  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:00.514760  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:00.514795  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:59.606994  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:01.608870  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:58.484178  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:00.484913  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:02.399767  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:04.900007  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:03.067793  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:03.081940  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:03.082023  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:03.118846  358357 cri.go:89] found id: ""
	I1205 21:44:03.118886  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.118897  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:03.118905  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:03.118962  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:03.156092  358357 cri.go:89] found id: ""
	I1205 21:44:03.156128  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.156140  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:03.156148  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:03.156219  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:03.189783  358357 cri.go:89] found id: ""
	I1205 21:44:03.189824  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.189837  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:03.189845  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:03.189913  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:03.225034  358357 cri.go:89] found id: ""
	I1205 21:44:03.225069  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.225081  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:03.225095  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:03.225177  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:03.258959  358357 cri.go:89] found id: ""
	I1205 21:44:03.258991  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.259003  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:03.259011  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:03.259075  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:03.292871  358357 cri.go:89] found id: ""
	I1205 21:44:03.292907  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.292920  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:03.292927  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:03.292983  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:03.327659  358357 cri.go:89] found id: ""
	I1205 21:44:03.327707  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.327730  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:03.327738  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:03.327810  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:03.369576  358357 cri.go:89] found id: ""
	I1205 21:44:03.369614  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.369627  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:03.369641  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:03.369656  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:03.424527  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:03.424580  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:03.438199  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:03.438231  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:03.509107  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:03.509139  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:03.509158  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:03.595637  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:03.595717  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:04.108126  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:06.109347  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:02.984401  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:04.987542  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:07.484630  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:06.900439  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:09.400464  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:06.135947  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:06.149530  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:06.149602  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:06.185659  358357 cri.go:89] found id: ""
	I1205 21:44:06.185692  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.185702  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:06.185709  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:06.185775  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:06.223238  358357 cri.go:89] found id: ""
	I1205 21:44:06.223281  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.223291  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:06.223298  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:06.223357  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:06.261842  358357 cri.go:89] found id: ""
	I1205 21:44:06.261884  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.261911  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:06.261920  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:06.261996  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:06.304416  358357 cri.go:89] found id: ""
	I1205 21:44:06.304455  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.304466  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:06.304475  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:06.304554  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:06.339676  358357 cri.go:89] found id: ""
	I1205 21:44:06.339711  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.339723  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:06.339732  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:06.339785  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:06.375594  358357 cri.go:89] found id: ""
	I1205 21:44:06.375630  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.375640  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:06.375647  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:06.375722  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:06.410953  358357 cri.go:89] found id: ""
	I1205 21:44:06.410986  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.410996  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:06.411002  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:06.411069  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:06.445559  358357 cri.go:89] found id: ""
	I1205 21:44:06.445590  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.445603  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:06.445617  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:06.445634  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:06.497474  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:06.497534  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:06.512032  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:06.512065  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:06.582809  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:06.582845  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:06.582862  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:06.663652  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:06.663696  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:09.204305  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:09.217648  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:09.217738  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:09.255398  358357 cri.go:89] found id: ""
	I1205 21:44:09.255441  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.255454  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:09.255463  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:09.255533  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:09.290268  358357 cri.go:89] found id: ""
	I1205 21:44:09.290296  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.290310  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:09.290316  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:09.290384  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:09.324546  358357 cri.go:89] found id: ""
	I1205 21:44:09.324586  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.324599  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:09.324608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:09.324684  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:09.358619  358357 cri.go:89] found id: ""
	I1205 21:44:09.358665  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.358677  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:09.358686  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:09.358757  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:09.395697  358357 cri.go:89] found id: ""
	I1205 21:44:09.395736  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.395749  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:09.395758  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:09.395838  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:09.437064  358357 cri.go:89] found id: ""
	I1205 21:44:09.437099  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.437108  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:09.437115  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:09.437172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:09.472330  358357 cri.go:89] found id: ""
	I1205 21:44:09.472368  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.472380  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:09.472388  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:09.472460  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:09.507468  358357 cri.go:89] found id: ""
	I1205 21:44:09.507510  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.507524  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:09.507538  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:09.507555  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:09.583640  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:09.583683  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:09.625830  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:09.625876  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:09.681668  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:09.681720  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:09.695305  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:09.695346  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:09.770136  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:08.608008  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:10.608715  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:09.485975  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:11.983682  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:11.899933  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:14.399690  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:12.270576  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:12.287283  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:12.287367  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:12.320855  358357 cri.go:89] found id: ""
	I1205 21:44:12.320890  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.320902  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:12.320911  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:12.320981  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:12.354550  358357 cri.go:89] found id: ""
	I1205 21:44:12.354595  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.354608  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:12.354617  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:12.354685  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:12.388487  358357 cri.go:89] found id: ""
	I1205 21:44:12.388519  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.388532  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:12.388542  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:12.388600  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:12.424338  358357 cri.go:89] found id: ""
	I1205 21:44:12.424366  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.424375  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:12.424382  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:12.424448  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:12.465997  358357 cri.go:89] found id: ""
	I1205 21:44:12.466028  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.466038  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:12.466044  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:12.466111  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:12.503567  358357 cri.go:89] found id: ""
	I1205 21:44:12.503602  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.503616  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:12.503625  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:12.503700  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:12.538669  358357 cri.go:89] found id: ""
	I1205 21:44:12.538696  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.538705  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:12.538711  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:12.538763  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:12.576375  358357 cri.go:89] found id: ""
	I1205 21:44:12.576416  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.576429  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:12.576442  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:12.576458  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:12.625471  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:12.625512  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:12.639689  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:12.639729  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:12.710873  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:12.710896  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:12.710936  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:12.789800  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:12.789841  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:15.331451  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:15.344354  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:15.344441  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:15.378596  358357 cri.go:89] found id: ""
	I1205 21:44:15.378631  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.378640  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:15.378647  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:15.378718  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:15.418342  358357 cri.go:89] found id: ""
	I1205 21:44:15.418373  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.418386  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:15.418394  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:15.418461  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:15.454130  358357 cri.go:89] found id: ""
	I1205 21:44:15.454167  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.454179  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:15.454187  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:15.454269  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:15.490777  358357 cri.go:89] found id: ""
	I1205 21:44:15.490813  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.490824  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:15.490831  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:15.490887  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:15.523706  358357 cri.go:89] found id: ""
	I1205 21:44:15.523747  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.523760  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:15.523768  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:15.523839  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:15.559019  358357 cri.go:89] found id: ""
	I1205 21:44:15.559049  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.559058  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:15.559065  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:15.559121  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:13.107960  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:15.607620  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:13.984413  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:15.984615  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:16.401714  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:18.900883  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:15.592611  358357 cri.go:89] found id: ""
	I1205 21:44:15.592640  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.592649  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:15.592655  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:15.592707  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:15.628295  358357 cri.go:89] found id: ""
	I1205 21:44:15.628333  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.628344  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:15.628354  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:15.628366  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:15.711123  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:15.711174  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:15.757486  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:15.757519  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:15.805750  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:15.805797  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:15.820685  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:15.820722  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:15.887073  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:18.388126  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:18.403082  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:18.403165  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:18.436195  358357 cri.go:89] found id: ""
	I1205 21:44:18.436230  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.436243  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:18.436255  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:18.436346  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:18.471756  358357 cri.go:89] found id: ""
	I1205 21:44:18.471788  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.471797  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:18.471804  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:18.471863  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:18.510693  358357 cri.go:89] found id: ""
	I1205 21:44:18.510741  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.510754  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:18.510763  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:18.510831  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:18.551976  358357 cri.go:89] found id: ""
	I1205 21:44:18.552014  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.552027  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:18.552036  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:18.552105  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:18.587679  358357 cri.go:89] found id: ""
	I1205 21:44:18.587716  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.587729  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:18.587738  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:18.587810  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:18.631487  358357 cri.go:89] found id: ""
	I1205 21:44:18.631519  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.631529  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:18.631547  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:18.631620  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:18.663618  358357 cri.go:89] found id: ""
	I1205 21:44:18.663646  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.663656  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:18.663665  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:18.663725  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:18.697864  358357 cri.go:89] found id: ""
	I1205 21:44:18.697894  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.697929  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:18.697943  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:18.697960  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:18.710777  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:18.710808  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:18.784195  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:18.784222  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:18.784241  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:18.863023  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:18.863071  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:18.903228  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:18.903267  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:18.106883  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:20.107752  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:22.110346  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:18.484897  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:20.983954  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:21.399201  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:23.400564  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:21.454547  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:21.468048  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:21.468131  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:21.501472  358357 cri.go:89] found id: ""
	I1205 21:44:21.501503  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.501512  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:21.501518  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:21.501576  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:21.536522  358357 cri.go:89] found id: ""
	I1205 21:44:21.536564  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.536579  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:21.536589  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:21.536653  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:21.570924  358357 cri.go:89] found id: ""
	I1205 21:44:21.570955  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.570965  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:21.570971  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:21.571039  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:21.607649  358357 cri.go:89] found id: ""
	I1205 21:44:21.607678  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.607688  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:21.607697  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:21.607766  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:21.647025  358357 cri.go:89] found id: ""
	I1205 21:44:21.647052  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.647061  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:21.647067  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:21.647118  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:21.684418  358357 cri.go:89] found id: ""
	I1205 21:44:21.684460  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.684472  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:21.684481  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:21.684554  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:21.722093  358357 cri.go:89] found id: ""
	I1205 21:44:21.722129  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.722141  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:21.722149  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:21.722208  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:21.755757  358357 cri.go:89] found id: ""
	I1205 21:44:21.755794  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.755807  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:21.755821  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:21.755839  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:21.809049  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:21.809110  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:21.823336  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:21.823371  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:21.894389  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:21.894412  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:21.894428  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:21.980288  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:21.980336  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:24.522528  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:24.535496  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:24.535587  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:24.570301  358357 cri.go:89] found id: ""
	I1205 21:44:24.570354  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.570369  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:24.570379  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:24.570452  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:24.606310  358357 cri.go:89] found id: ""
	I1205 21:44:24.606340  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.606351  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:24.606358  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:24.606427  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:24.644078  358357 cri.go:89] found id: ""
	I1205 21:44:24.644183  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.644198  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:24.644208  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:24.644293  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:24.679685  358357 cri.go:89] found id: ""
	I1205 21:44:24.679719  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.679729  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:24.679736  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:24.679817  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:24.717070  358357 cri.go:89] found id: ""
	I1205 21:44:24.717180  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.717216  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:24.717236  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:24.717309  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:24.757345  358357 cri.go:89] found id: ""
	I1205 21:44:24.757380  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.757393  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:24.757401  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:24.757480  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:24.790795  358357 cri.go:89] found id: ""
	I1205 21:44:24.790823  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.790835  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:24.790850  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:24.790911  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:24.827238  358357 cri.go:89] found id: ""
	I1205 21:44:24.827276  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.827290  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:24.827302  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:24.827318  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:24.876812  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:24.876861  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:24.916558  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:24.916604  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:24.990733  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:24.990764  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:24.990785  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:25.065792  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:25.065852  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:24.608796  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:27.107897  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:22.984109  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:24.984259  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:26.985689  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:25.899361  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:27.900251  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:29.900465  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:27.608859  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:27.622449  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:27.622516  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:27.655675  358357 cri.go:89] found id: ""
	I1205 21:44:27.655704  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.655713  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:27.655718  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:27.655785  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:27.689751  358357 cri.go:89] found id: ""
	I1205 21:44:27.689781  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.689789  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:27.689795  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:27.689870  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:27.726811  358357 cri.go:89] found id: ""
	I1205 21:44:27.726842  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.726856  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:27.726865  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:27.726930  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:27.759600  358357 cri.go:89] found id: ""
	I1205 21:44:27.759631  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.759653  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:27.759660  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:27.759716  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:27.791700  358357 cri.go:89] found id: ""
	I1205 21:44:27.791738  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.791751  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:27.791763  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:27.791828  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:27.827998  358357 cri.go:89] found id: ""
	I1205 21:44:27.828031  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.828039  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:27.828045  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:27.828102  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:27.861452  358357 cri.go:89] found id: ""
	I1205 21:44:27.861481  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.861490  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:27.861496  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:27.861560  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:27.896469  358357 cri.go:89] found id: ""
	I1205 21:44:27.896519  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.896532  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:27.896545  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:27.896560  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:27.935274  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:27.935312  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:27.986078  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:27.986116  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:28.000432  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:28.000463  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:28.074500  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:28.074530  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:28.074549  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:29.107971  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:31.108444  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:29.483791  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:31.484249  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:32.399397  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:34.400078  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:30.660117  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:30.672827  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:30.672907  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:30.711952  358357 cri.go:89] found id: ""
	I1205 21:44:30.711983  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.711993  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:30.711999  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:30.712051  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:30.747513  358357 cri.go:89] found id: ""
	I1205 21:44:30.747548  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.747558  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:30.747567  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:30.747627  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:30.782830  358357 cri.go:89] found id: ""
	I1205 21:44:30.782867  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.782878  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:30.782887  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:30.782980  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:30.820054  358357 cri.go:89] found id: ""
	I1205 21:44:30.820098  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.820111  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:30.820123  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:30.820198  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:30.857325  358357 cri.go:89] found id: ""
	I1205 21:44:30.857362  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.857373  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:30.857382  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:30.857453  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:30.893105  358357 cri.go:89] found id: ""
	I1205 21:44:30.893227  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.893267  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:30.893281  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:30.893356  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:30.932764  358357 cri.go:89] found id: ""
	I1205 21:44:30.932802  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.932815  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:30.932823  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:30.932885  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:30.968962  358357 cri.go:89] found id: ""
	I1205 21:44:30.968999  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.969011  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:30.969023  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:30.969037  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:31.022152  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:31.022198  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:31.035418  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:31.035453  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:31.100989  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:31.101017  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:31.101030  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:31.182034  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:31.182079  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:33.725770  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:33.740956  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:33.741040  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:33.779158  358357 cri.go:89] found id: ""
	I1205 21:44:33.779198  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.779210  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:33.779218  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:33.779280  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:33.814600  358357 cri.go:89] found id: ""
	I1205 21:44:33.814628  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.814641  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:33.814649  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:33.814710  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:33.850220  358357 cri.go:89] found id: ""
	I1205 21:44:33.850255  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.850267  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:33.850276  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:33.850334  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:33.883737  358357 cri.go:89] found id: ""
	I1205 21:44:33.883765  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.883774  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:33.883781  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:33.883837  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:33.915007  358357 cri.go:89] found id: ""
	I1205 21:44:33.915046  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.915059  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:33.915068  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:33.915140  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:33.949038  358357 cri.go:89] found id: ""
	I1205 21:44:33.949077  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.949093  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:33.949102  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:33.949172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:33.982396  358357 cri.go:89] found id: ""
	I1205 21:44:33.982425  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.982437  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:33.982444  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:33.982521  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:34.020834  358357 cri.go:89] found id: ""
	I1205 21:44:34.020870  358357 logs.go:282] 0 containers: []
	W1205 21:44:34.020882  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:34.020894  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:34.020911  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:34.103184  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:34.103238  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:34.147047  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:34.147091  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:34.196893  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:34.196942  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:34.211694  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:34.211730  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:34.282543  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:33.607930  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:36.108359  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:33.484472  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:35.484512  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:36.400821  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:38.899618  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:36.783278  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:36.798192  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:36.798266  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:36.832685  358357 cri.go:89] found id: ""
	I1205 21:44:36.832723  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.832736  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:36.832743  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:36.832814  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:36.868040  358357 cri.go:89] found id: ""
	I1205 21:44:36.868074  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.868085  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:36.868092  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:36.868156  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:36.901145  358357 cri.go:89] found id: ""
	I1205 21:44:36.901177  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.901186  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:36.901192  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:36.901248  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:36.935061  358357 cri.go:89] found id: ""
	I1205 21:44:36.935097  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.935107  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:36.935114  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:36.935183  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:36.984729  358357 cri.go:89] found id: ""
	I1205 21:44:36.984761  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.984773  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:36.984782  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:36.984854  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:37.024644  358357 cri.go:89] found id: ""
	I1205 21:44:37.024684  358357 logs.go:282] 0 containers: []
	W1205 21:44:37.024696  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:37.024706  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:37.024781  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:37.074238  358357 cri.go:89] found id: ""
	I1205 21:44:37.074275  358357 logs.go:282] 0 containers: []
	W1205 21:44:37.074287  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:37.074295  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:37.074356  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:37.142410  358357 cri.go:89] found id: ""
	I1205 21:44:37.142444  358357 logs.go:282] 0 containers: []
	W1205 21:44:37.142457  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:37.142469  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:37.142488  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:37.192977  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:37.193018  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:37.206357  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:37.206393  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:37.272336  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:37.272372  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:37.272390  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:37.350655  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:37.350718  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:39.897421  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:39.911734  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:39.911806  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:39.950380  358357 cri.go:89] found id: ""
	I1205 21:44:39.950418  358357 logs.go:282] 0 containers: []
	W1205 21:44:39.950432  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:39.950441  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:39.950511  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:39.987259  358357 cri.go:89] found id: ""
	I1205 21:44:39.987292  358357 logs.go:282] 0 containers: []
	W1205 21:44:39.987302  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:39.987308  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:39.987363  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:40.021052  358357 cri.go:89] found id: ""
	I1205 21:44:40.021081  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.021090  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:40.021096  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:40.021167  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:40.057837  358357 cri.go:89] found id: ""
	I1205 21:44:40.057878  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.057919  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:40.057930  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:40.058004  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:40.094797  358357 cri.go:89] found id: ""
	I1205 21:44:40.094837  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.094853  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:40.094863  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:40.094932  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:40.130356  358357 cri.go:89] found id: ""
	I1205 21:44:40.130389  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.130398  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:40.130412  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:40.130467  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:40.164352  358357 cri.go:89] found id: ""
	I1205 21:44:40.164379  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.164389  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:40.164394  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:40.164452  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:40.197337  358357 cri.go:89] found id: ""
	I1205 21:44:40.197379  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.197397  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:40.197408  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:40.197422  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:40.210014  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:40.210051  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:40.280666  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:40.280691  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:40.280706  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:40.356849  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:40.356896  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:40.395202  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:40.395237  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:38.108650  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:40.607598  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:37.983908  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:39.986080  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:42.484571  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:40.900460  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:43.400889  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:42.950686  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:42.964078  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:42.964156  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:42.999252  358357 cri.go:89] found id: ""
	I1205 21:44:42.999286  358357 logs.go:282] 0 containers: []
	W1205 21:44:42.999299  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:42.999307  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:42.999374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:43.035393  358357 cri.go:89] found id: ""
	I1205 21:44:43.035430  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.035444  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:43.035451  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:43.035505  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:43.070649  358357 cri.go:89] found id: ""
	I1205 21:44:43.070681  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.070693  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:43.070703  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:43.070776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:43.103054  358357 cri.go:89] found id: ""
	I1205 21:44:43.103089  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.103101  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:43.103110  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:43.103175  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:43.138607  358357 cri.go:89] found id: ""
	I1205 21:44:43.138640  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.138653  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:43.138661  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:43.138733  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:43.172188  358357 cri.go:89] found id: ""
	I1205 21:44:43.172220  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.172234  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:43.172241  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:43.172313  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:43.204838  358357 cri.go:89] found id: ""
	I1205 21:44:43.204872  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.204882  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:43.204891  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:43.204960  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:43.239985  358357 cri.go:89] found id: ""
	I1205 21:44:43.240011  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.240020  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:43.240031  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:43.240052  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:43.291033  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:43.291088  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:43.305100  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:43.305152  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:43.378988  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:43.379020  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:43.379054  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:43.466548  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:43.466602  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:42.607901  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:44.608143  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:47.108131  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:44.984806  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:47.484110  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:45.899359  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:47.901854  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:46.007785  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:46.021496  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:46.021592  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:46.059259  358357 cri.go:89] found id: ""
	I1205 21:44:46.059296  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.059313  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:46.059321  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:46.059378  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:46.095304  358357 cri.go:89] found id: ""
	I1205 21:44:46.095336  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.095345  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:46.095351  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:46.095417  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:46.136792  358357 cri.go:89] found id: ""
	I1205 21:44:46.136822  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.136831  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:46.136837  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:46.136891  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:46.169696  358357 cri.go:89] found id: ""
	I1205 21:44:46.169726  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.169735  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:46.169742  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:46.169810  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:46.205481  358357 cri.go:89] found id: ""
	I1205 21:44:46.205513  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.205524  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:46.205531  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:46.205586  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:46.241112  358357 cri.go:89] found id: ""
	I1205 21:44:46.241157  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.241166  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:46.241173  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:46.241233  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:46.277129  358357 cri.go:89] found id: ""
	I1205 21:44:46.277159  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.277168  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:46.277174  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:46.277236  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:46.311196  358357 cri.go:89] found id: ""
	I1205 21:44:46.311238  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.311250  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:46.311275  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:46.311302  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:46.362581  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:46.362621  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:46.375887  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:46.375924  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:46.444563  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:46.444588  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:46.444605  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:46.525811  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:46.525857  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:49.065883  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:49.079482  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:49.079586  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:49.113676  358357 cri.go:89] found id: ""
	I1205 21:44:49.113706  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.113716  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:49.113722  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:49.113792  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:49.147653  358357 cri.go:89] found id: ""
	I1205 21:44:49.147686  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.147696  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:49.147702  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:49.147766  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:49.180934  358357 cri.go:89] found id: ""
	I1205 21:44:49.180981  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.180996  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:49.181004  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:49.181064  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:49.214837  358357 cri.go:89] found id: ""
	I1205 21:44:49.214874  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.214883  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:49.214891  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:49.214960  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:49.249332  358357 cri.go:89] found id: ""
	I1205 21:44:49.249369  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.249380  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:49.249387  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:49.249451  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:49.284072  358357 cri.go:89] found id: ""
	I1205 21:44:49.284101  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.284109  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:49.284116  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:49.284169  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:49.323559  358357 cri.go:89] found id: ""
	I1205 21:44:49.323597  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.323607  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:49.323614  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:49.323675  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:49.361219  358357 cri.go:89] found id: ""
	I1205 21:44:49.361253  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.361263  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:49.361275  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:49.361291  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:49.413099  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:49.413141  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:49.426610  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:49.426648  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:49.498740  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:49.498765  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:49.498794  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:49.578451  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:49.578495  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:49.608461  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:52.108005  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:49.484743  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:51.984842  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:50.401244  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:52.899546  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:54.899788  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:52.117874  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:52.131510  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:52.131601  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:52.169491  358357 cri.go:89] found id: ""
	I1205 21:44:52.169522  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.169535  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:52.169542  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:52.169617  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:52.202511  358357 cri.go:89] found id: ""
	I1205 21:44:52.202540  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.202556  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:52.202562  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:52.202630  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:52.239649  358357 cri.go:89] found id: ""
	I1205 21:44:52.239687  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.239699  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:52.239707  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:52.239771  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:52.274330  358357 cri.go:89] found id: ""
	I1205 21:44:52.274368  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.274380  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:52.274388  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:52.274452  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:52.310165  358357 cri.go:89] found id: ""
	I1205 21:44:52.310195  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.310207  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:52.310214  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:52.310284  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:52.344246  358357 cri.go:89] found id: ""
	I1205 21:44:52.344278  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.344293  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:52.344302  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:52.344375  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:52.379475  358357 cri.go:89] found id: ""
	I1205 21:44:52.379508  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.379521  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:52.379529  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:52.379606  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:52.419952  358357 cri.go:89] found id: ""
	I1205 21:44:52.419981  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.419990  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:52.420002  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:52.420014  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:52.471608  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:52.471659  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:52.486003  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:52.486036  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:52.560751  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:52.560786  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:52.560804  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:52.641284  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:52.641340  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:55.183102  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:55.197406  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:55.197502  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:55.231335  358357 cri.go:89] found id: ""
	I1205 21:44:55.231365  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.231373  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:55.231381  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:55.231440  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:55.267877  358357 cri.go:89] found id: ""
	I1205 21:44:55.267907  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.267916  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:55.267923  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:55.267978  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:55.302400  358357 cri.go:89] found id: ""
	I1205 21:44:55.302428  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.302437  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:55.302443  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:55.302496  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:55.337878  358357 cri.go:89] found id: ""
	I1205 21:44:55.337932  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.337946  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:55.337954  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:55.338008  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:55.371877  358357 cri.go:89] found id: ""
	I1205 21:44:55.371920  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.371931  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:55.371941  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:55.372020  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:55.406914  358357 cri.go:89] found id: ""
	I1205 21:44:55.406947  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.406961  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:55.406970  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:55.407043  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:55.439910  358357 cri.go:89] found id: ""
	I1205 21:44:55.439940  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.439949  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:55.439955  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:55.440011  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:55.476886  358357 cri.go:89] found id: ""
	I1205 21:44:55.476916  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.476925  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:55.476936  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:55.476949  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:55.531376  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:55.531422  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:55.545011  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:55.545050  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:44:54.108283  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:56.609653  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:53.985156  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:56.484908  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:57.400823  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:59.904973  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	W1205 21:44:55.620082  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:55.620122  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:55.620139  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:55.708465  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:55.708512  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:58.256289  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:58.269484  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:58.269560  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:58.303846  358357 cri.go:89] found id: ""
	I1205 21:44:58.303884  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.303897  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:58.303906  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:58.303978  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:58.343160  358357 cri.go:89] found id: ""
	I1205 21:44:58.343190  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.343199  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:58.343205  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:58.343269  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:58.379207  358357 cri.go:89] found id: ""
	I1205 21:44:58.379240  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.379252  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:58.379261  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:58.379323  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:58.415939  358357 cri.go:89] found id: ""
	I1205 21:44:58.415971  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.415981  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:58.415988  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:58.416046  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:58.450799  358357 cri.go:89] found id: ""
	I1205 21:44:58.450837  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.450848  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:58.450857  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:58.450927  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:58.487557  358357 cri.go:89] found id: ""
	I1205 21:44:58.487594  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.487602  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:58.487608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:58.487659  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:58.523932  358357 cri.go:89] found id: ""
	I1205 21:44:58.523960  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.523969  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:58.523976  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:58.524041  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:58.559140  358357 cri.go:89] found id: ""
	I1205 21:44:58.559169  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.559179  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:58.559193  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:58.559209  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:58.643471  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:58.643520  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:58.683077  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:58.683118  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:58.736396  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:58.736441  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:58.751080  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:58.751115  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:58.824208  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:59.108134  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:01.608008  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:58.984778  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:01.486140  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:02.400031  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:04.400426  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:01.324977  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:01.338088  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:01.338169  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:01.375859  358357 cri.go:89] found id: ""
	I1205 21:45:01.375913  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.375927  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:01.375936  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:01.376012  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:01.411327  358357 cri.go:89] found id: ""
	I1205 21:45:01.411367  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.411377  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:01.411384  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:01.411441  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:01.446560  358357 cri.go:89] found id: ""
	I1205 21:45:01.446599  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.446612  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:01.446620  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:01.446687  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:01.480650  358357 cri.go:89] found id: ""
	I1205 21:45:01.480688  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.480702  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:01.480711  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:01.480788  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:01.515546  358357 cri.go:89] found id: ""
	I1205 21:45:01.515596  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.515609  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:01.515615  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:01.515680  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:01.550395  358357 cri.go:89] found id: ""
	I1205 21:45:01.550435  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.550449  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:01.550457  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:01.550619  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:01.588327  358357 cri.go:89] found id: ""
	I1205 21:45:01.588362  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.588375  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:01.588385  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:01.588456  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:01.622881  358357 cri.go:89] found id: ""
	I1205 21:45:01.622922  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.622934  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:01.622948  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:01.622965  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:01.673702  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:01.673752  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:01.689462  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:01.689504  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:01.758509  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:01.758536  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:01.758550  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:01.839238  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:01.839294  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:04.380325  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:04.393102  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:04.393192  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:04.428295  358357 cri.go:89] found id: ""
	I1205 21:45:04.428327  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.428339  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:04.428348  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:04.428455  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:04.463190  358357 cri.go:89] found id: ""
	I1205 21:45:04.463226  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.463238  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:04.463246  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:04.463316  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:04.496966  358357 cri.go:89] found id: ""
	I1205 21:45:04.497010  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.497022  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:04.497030  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:04.497097  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:04.531907  358357 cri.go:89] found id: ""
	I1205 21:45:04.531938  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.531950  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:04.531958  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:04.532031  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:04.565760  358357 cri.go:89] found id: ""
	I1205 21:45:04.565793  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.565806  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:04.565815  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:04.565885  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:04.599720  358357 cri.go:89] found id: ""
	I1205 21:45:04.599756  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.599768  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:04.599774  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:04.599829  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:04.635208  358357 cri.go:89] found id: ""
	I1205 21:45:04.635241  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.635250  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:04.635257  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:04.635320  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:04.670121  358357 cri.go:89] found id: ""
	I1205 21:45:04.670153  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.670162  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:04.670171  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:04.670183  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:04.708596  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:04.708641  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:04.765866  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:04.765919  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:04.780740  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:04.780772  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:04.856357  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:04.856386  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:04.856406  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:03.608315  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:06.107838  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:03.983888  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:05.990166  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:06.900029  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:08.900926  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:07.437028  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:07.450097  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:07.450168  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:07.485877  358357 cri.go:89] found id: ""
	I1205 21:45:07.485921  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.485934  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:07.485943  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:07.486007  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:07.520629  358357 cri.go:89] found id: ""
	I1205 21:45:07.520658  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.520666  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:07.520673  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:07.520732  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:07.555445  358357 cri.go:89] found id: ""
	I1205 21:45:07.555476  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.555487  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:07.555493  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:07.555560  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:07.594479  358357 cri.go:89] found id: ""
	I1205 21:45:07.594513  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.594526  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:07.594533  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:07.594594  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:07.629467  358357 cri.go:89] found id: ""
	I1205 21:45:07.629498  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.629509  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:07.629516  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:07.629572  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:07.666166  358357 cri.go:89] found id: ""
	I1205 21:45:07.666204  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.666218  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:07.666227  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:07.666303  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:07.700440  358357 cri.go:89] found id: ""
	I1205 21:45:07.700472  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.700481  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:07.700490  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:07.700557  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:07.735094  358357 cri.go:89] found id: ""
	I1205 21:45:07.735130  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.735152  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:07.735166  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:07.735184  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:07.788339  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:07.788386  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:07.802847  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:07.802879  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:07.873731  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:07.873755  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:07.873771  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:07.953369  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:07.953411  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:10.492613  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:10.506259  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:10.506374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:10.540075  358357 cri.go:89] found id: ""
	I1205 21:45:10.540111  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.540120  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:10.540127  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:10.540216  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:08.108464  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:10.611075  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:08.483571  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:10.485086  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:11.399948  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:13.400364  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:10.577943  358357 cri.go:89] found id: ""
	I1205 21:45:10.577978  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.577991  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:10.577998  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:10.578073  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:10.614217  358357 cri.go:89] found id: ""
	I1205 21:45:10.614255  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.614268  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:10.614276  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:10.614346  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:10.649669  358357 cri.go:89] found id: ""
	I1205 21:45:10.649739  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.649751  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:10.649760  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:10.649830  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:10.687171  358357 cri.go:89] found id: ""
	I1205 21:45:10.687202  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.687211  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:10.687217  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:10.687307  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:10.722815  358357 cri.go:89] found id: ""
	I1205 21:45:10.722848  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.722858  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:10.722865  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:10.722934  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:10.759711  358357 cri.go:89] found id: ""
	I1205 21:45:10.759753  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.759767  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:10.759777  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:10.759849  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:10.797955  358357 cri.go:89] found id: ""
	I1205 21:45:10.797991  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.798004  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:10.798017  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:10.798034  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:10.851920  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:10.851971  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:10.867691  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:10.867728  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:10.953866  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:10.953891  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:10.953928  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:11.033945  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:11.033990  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:13.574051  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:13.587371  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:13.587454  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:13.623492  358357 cri.go:89] found id: ""
	I1205 21:45:13.623524  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.623540  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:13.623546  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:13.623603  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:13.659547  358357 cri.go:89] found id: ""
	I1205 21:45:13.659588  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.659602  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:13.659610  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:13.659671  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:13.694113  358357 cri.go:89] found id: ""
	I1205 21:45:13.694153  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.694166  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:13.694173  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:13.694233  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:13.729551  358357 cri.go:89] found id: ""
	I1205 21:45:13.729591  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.729604  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:13.729613  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:13.729684  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:13.763006  358357 cri.go:89] found id: ""
	I1205 21:45:13.763049  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.763062  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:13.763071  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:13.763134  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:13.802231  358357 cri.go:89] found id: ""
	I1205 21:45:13.802277  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.802292  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:13.802302  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:13.802384  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:13.840193  358357 cri.go:89] found id: ""
	I1205 21:45:13.840225  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.840240  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:13.840249  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:13.840335  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:13.872625  358357 cri.go:89] found id: ""
	I1205 21:45:13.872653  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.872663  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:13.872673  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:13.872687  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:13.922983  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:13.923028  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:13.936484  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:13.936517  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:14.008295  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:14.008319  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:14.008334  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:14.095036  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:14.095091  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:13.110174  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:15.608405  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:12.986058  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:15.483570  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:17.484738  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:15.899141  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:17.899862  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:19.900993  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:16.637164  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:16.653070  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:16.653153  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:16.687386  358357 cri.go:89] found id: ""
	I1205 21:45:16.687441  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.687456  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:16.687466  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:16.687545  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:16.722204  358357 cri.go:89] found id: ""
	I1205 21:45:16.722235  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.722244  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:16.722250  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:16.722323  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:16.757594  358357 cri.go:89] found id: ""
	I1205 21:45:16.757622  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.757631  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:16.757637  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:16.757691  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:16.790401  358357 cri.go:89] found id: ""
	I1205 21:45:16.790433  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.790442  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:16.790449  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:16.790502  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:16.827569  358357 cri.go:89] found id: ""
	I1205 21:45:16.827602  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.827615  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:16.827624  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:16.827701  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:16.860920  358357 cri.go:89] found id: ""
	I1205 21:45:16.860949  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.860965  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:16.860974  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:16.861038  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:16.895008  358357 cri.go:89] found id: ""
	I1205 21:45:16.895051  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.895063  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:16.895072  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:16.895151  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:16.931916  358357 cri.go:89] found id: ""
	I1205 21:45:16.931951  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.931963  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:16.931975  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:16.931987  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:17.016108  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:17.016156  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:17.055353  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:17.055390  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:17.105859  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:17.105921  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:17.121357  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:17.121394  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:17.192584  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:19.693409  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:19.706431  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:19.706498  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:19.741212  358357 cri.go:89] found id: ""
	I1205 21:45:19.741249  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.741258  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:19.741268  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:19.741335  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:19.775906  358357 cri.go:89] found id: ""
	I1205 21:45:19.775945  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.775954  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:19.775960  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:19.776031  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:19.810789  358357 cri.go:89] found id: ""
	I1205 21:45:19.810822  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.810831  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:19.810839  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:19.810897  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:19.847669  358357 cri.go:89] found id: ""
	I1205 21:45:19.847701  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.847710  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:19.847717  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:19.847776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:19.881700  358357 cri.go:89] found id: ""
	I1205 21:45:19.881739  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.881752  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:19.881761  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:19.881838  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:19.919085  358357 cri.go:89] found id: ""
	I1205 21:45:19.919125  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.919140  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:19.919148  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:19.919226  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:19.955024  358357 cri.go:89] found id: ""
	I1205 21:45:19.955064  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.955078  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:19.955086  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:19.955153  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:19.991482  358357 cri.go:89] found id: ""
	I1205 21:45:19.991511  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.991519  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:19.991530  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:19.991543  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:20.041980  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:20.042030  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:20.055580  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:20.055612  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:20.127194  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:20.127225  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:20.127242  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:20.207750  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:20.207797  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
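	Each retry above runs the same on-node diagnostics: probe every control-plane component through CRI, then collect kubelet, dmesg, describe-nodes, CRI-O and container-status output. A minimal sketch for reproducing that sequence by hand, assuming shell access to the node (e.g. via minikube ssh); the commands and the v1.20.0 kubectl path are taken verbatim from the log lines above:
	
		# probe a control-plane component via CRI; empty output means no such container exists
		sudo crictl ps -a --quiet --name=kube-apiserver
		sudo crictl ps -a --quiet --name=etcd
		# log sources gathered when nothing is found
		sudo journalctl -u kubelet -n 400
		sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
		sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
		sudo journalctl -u crio -n 400
		sudo `which crictl || echo crictl` ps -a || sudo docker ps -a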
	I1205 21:45:18.108143  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:20.108435  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:22.109088  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:19.985203  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:21.986674  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:22.399189  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:24.400311  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:22.749233  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:22.763720  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:22.763796  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:22.798779  358357 cri.go:89] found id: ""
	I1205 21:45:22.798810  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.798820  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:22.798826  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:22.798906  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:22.837894  358357 cri.go:89] found id: ""
	I1205 21:45:22.837949  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.837964  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:22.837972  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:22.838026  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:22.872671  358357 cri.go:89] found id: ""
	I1205 21:45:22.872701  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.872713  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:22.872720  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:22.872785  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:22.906877  358357 cri.go:89] found id: ""
	I1205 21:45:22.906919  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.906929  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:22.906936  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:22.906988  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:22.941445  358357 cri.go:89] found id: ""
	I1205 21:45:22.941475  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.941486  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:22.941494  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:22.941565  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:22.976633  358357 cri.go:89] found id: ""
	I1205 21:45:22.976671  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.976685  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:22.976694  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:22.976773  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:23.017034  358357 cri.go:89] found id: ""
	I1205 21:45:23.017077  358357 logs.go:282] 0 containers: []
	W1205 21:45:23.017090  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:23.017096  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:23.017153  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:23.065098  358357 cri.go:89] found id: ""
	I1205 21:45:23.065136  358357 logs.go:282] 0 containers: []
	W1205 21:45:23.065149  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:23.065164  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:23.065180  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:23.145053  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:23.145104  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:23.159522  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:23.159557  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:23.228841  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:23.228865  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:23.228885  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:23.313351  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:23.313397  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:24.110151  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:26.607420  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:23.992037  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:26.484076  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:26.400904  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:28.899210  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:25.852034  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:25.865843  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:25.865944  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:25.899186  358357 cri.go:89] found id: ""
	I1205 21:45:25.899212  358357 logs.go:282] 0 containers: []
	W1205 21:45:25.899222  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:25.899231  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:25.899298  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:25.938242  358357 cri.go:89] found id: ""
	I1205 21:45:25.938274  358357 logs.go:282] 0 containers: []
	W1205 21:45:25.938286  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:25.938299  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:25.938371  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:25.972322  358357 cri.go:89] found id: ""
	I1205 21:45:25.972355  358357 logs.go:282] 0 containers: []
	W1205 21:45:25.972368  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:25.972376  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:25.972446  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:26.010638  358357 cri.go:89] found id: ""
	I1205 21:45:26.010667  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.010678  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:26.010686  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:26.010754  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:26.045415  358357 cri.go:89] found id: ""
	I1205 21:45:26.045450  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.045459  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:26.045466  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:26.045548  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:26.084635  358357 cri.go:89] found id: ""
	I1205 21:45:26.084673  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.084687  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:26.084696  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:26.084767  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:26.117417  358357 cri.go:89] found id: ""
	I1205 21:45:26.117455  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.117467  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:26.117475  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:26.117539  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:26.151857  358357 cri.go:89] found id: ""
	I1205 21:45:26.151893  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.151905  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:26.151918  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:26.151936  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:26.238876  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:26.238926  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:26.280970  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:26.281006  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:26.336027  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:26.336083  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:26.350619  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:26.350654  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:26.418836  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
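	The describe-nodes call keeps failing with "connection to the server localhost:8443 was refused", i.e. nothing is serving the API on the node. A quick way to confirm the same symptom from the node (these commands are not part of the test harness, just a hedged manual check):
	
		sudo ss -ltnp | grep 8443                  # no listener expected on the apiserver port
		curl -k https://localhost:8443/healthz     # expected to fail with connection refused here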
	I1205 21:45:28.919046  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:28.933916  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:28.934002  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:28.971698  358357 cri.go:89] found id: ""
	I1205 21:45:28.971728  358357 logs.go:282] 0 containers: []
	W1205 21:45:28.971737  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:28.971744  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:28.971807  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:29.007385  358357 cri.go:89] found id: ""
	I1205 21:45:29.007423  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.007435  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:29.007443  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:29.007509  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:29.041087  358357 cri.go:89] found id: ""
	I1205 21:45:29.041130  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.041143  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:29.041151  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:29.041222  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:29.076926  358357 cri.go:89] found id: ""
	I1205 21:45:29.076965  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.076977  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:29.076986  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:29.077064  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:29.116376  358357 cri.go:89] found id: ""
	I1205 21:45:29.116419  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.116433  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:29.116443  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:29.116523  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:29.152495  358357 cri.go:89] found id: ""
	I1205 21:45:29.152530  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.152543  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:29.152552  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:29.152639  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:29.187647  358357 cri.go:89] found id: ""
	I1205 21:45:29.187681  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.187695  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:29.187704  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:29.187775  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:29.220410  358357 cri.go:89] found id: ""
	I1205 21:45:29.220452  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.220469  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:29.220484  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:29.220513  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:29.287156  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:29.287184  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:29.287200  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:29.365592  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:29.365644  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:29.407876  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:29.407917  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:29.462241  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:29.462294  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:28.607611  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:30.608683  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:28.484925  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:30.485979  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:30.899449  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:32.900189  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:34.900501  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:31.976691  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:31.991087  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:31.991172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:32.025743  358357 cri.go:89] found id: ""
	I1205 21:45:32.025781  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.025793  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:32.025801  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:32.025870  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:32.061790  358357 cri.go:89] found id: ""
	I1205 21:45:32.061828  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.061838  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:32.061844  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:32.061929  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:32.095437  358357 cri.go:89] found id: ""
	I1205 21:45:32.095474  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.095486  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:32.095493  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:32.095553  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:32.132203  358357 cri.go:89] found id: ""
	I1205 21:45:32.132242  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.132255  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:32.132264  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:32.132325  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:32.168529  358357 cri.go:89] found id: ""
	I1205 21:45:32.168566  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.168582  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:32.168590  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:32.168661  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:32.204816  358357 cri.go:89] found id: ""
	I1205 21:45:32.204851  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.204860  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:32.204885  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:32.204949  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:32.241661  358357 cri.go:89] found id: ""
	I1205 21:45:32.241696  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.241706  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:32.241712  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:32.241768  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:32.275458  358357 cri.go:89] found id: ""
	I1205 21:45:32.275491  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.275500  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:32.275511  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:32.275524  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:32.329044  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:32.329098  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:32.343399  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:32.343432  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:32.420102  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:32.420135  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:32.420152  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:32.503061  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:32.503109  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:35.042457  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:35.056486  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:35.056564  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:35.091571  358357 cri.go:89] found id: ""
	I1205 21:45:35.091603  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.091613  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:35.091619  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:35.091686  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:35.130172  358357 cri.go:89] found id: ""
	I1205 21:45:35.130213  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.130225  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:35.130233  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:35.130303  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:35.165723  358357 cri.go:89] found id: ""
	I1205 21:45:35.165754  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.165763  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:35.165770  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:35.165836  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:35.203599  358357 cri.go:89] found id: ""
	I1205 21:45:35.203632  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.203646  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:35.203658  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:35.203721  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:35.237881  358357 cri.go:89] found id: ""
	I1205 21:45:35.237926  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.237938  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:35.237946  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:35.238015  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:35.276506  358357 cri.go:89] found id: ""
	I1205 21:45:35.276543  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.276555  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:35.276563  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:35.276632  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:35.309600  358357 cri.go:89] found id: ""
	I1205 21:45:35.309632  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.309644  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:35.309652  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:35.309723  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:35.343062  358357 cri.go:89] found id: ""
	I1205 21:45:35.343097  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.343110  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:35.343124  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:35.343146  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:35.398686  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:35.398724  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:35.412910  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:35.412945  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:35.479542  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:35.479570  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:35.479587  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:35.556709  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:35.556754  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:33.107324  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:35.108931  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:32.988514  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:35.485301  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:37.399616  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:39.400552  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:38.095347  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:38.110086  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:38.110161  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:38.149114  358357 cri.go:89] found id: ""
	I1205 21:45:38.149149  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.149162  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:38.149172  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:38.149250  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:38.184110  358357 cri.go:89] found id: ""
	I1205 21:45:38.184141  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.184151  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:38.184157  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:38.184213  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:38.219569  358357 cri.go:89] found id: ""
	I1205 21:45:38.219608  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.219620  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:38.219628  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:38.219703  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:38.253096  358357 cri.go:89] found id: ""
	I1205 21:45:38.253133  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.253158  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:38.253167  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:38.253259  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:38.291558  358357 cri.go:89] found id: ""
	I1205 21:45:38.291591  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.291601  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:38.291608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:38.291689  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:38.328236  358357 cri.go:89] found id: ""
	I1205 21:45:38.328269  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.328281  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:38.328288  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:38.328353  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:38.363263  358357 cri.go:89] found id: ""
	I1205 21:45:38.363295  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.363305  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:38.363311  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:38.363371  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:38.396544  358357 cri.go:89] found id: ""
	I1205 21:45:38.396577  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.396587  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:38.396598  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:38.396611  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:38.438187  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:38.438226  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:38.492047  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:38.492086  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:38.505080  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:38.505123  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:38.574293  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:38.574320  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:38.574343  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:37.608407  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:39.609266  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:42.107313  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:37.984499  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:40.484539  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:41.898538  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:43.900097  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:41.155780  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:41.170875  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:41.170959  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:41.206755  358357 cri.go:89] found id: ""
	I1205 21:45:41.206793  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.206807  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:41.206824  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:41.206882  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:41.251021  358357 cri.go:89] found id: ""
	I1205 21:45:41.251060  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.251074  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:41.251082  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:41.251144  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:41.286805  358357 cri.go:89] found id: ""
	I1205 21:45:41.286836  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.286845  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:41.286852  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:41.286910  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:41.319489  358357 cri.go:89] found id: ""
	I1205 21:45:41.319526  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.319540  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:41.319549  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:41.319620  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:41.352769  358357 cri.go:89] found id: ""
	I1205 21:45:41.352807  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.352817  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:41.352823  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:41.352883  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:41.386830  358357 cri.go:89] found id: ""
	I1205 21:45:41.386869  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.386881  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:41.386889  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:41.386961  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:41.424824  358357 cri.go:89] found id: ""
	I1205 21:45:41.424866  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.424882  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:41.424892  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:41.424957  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:41.460273  358357 cri.go:89] found id: ""
	I1205 21:45:41.460307  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.460316  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:41.460327  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:41.460341  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:41.539890  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:41.539951  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:41.579521  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:41.579570  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:41.630867  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:41.630917  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:41.644854  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:41.644892  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:41.719202  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:44.219965  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:44.234714  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:44.234824  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:44.269879  358357 cri.go:89] found id: ""
	I1205 21:45:44.269931  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.269945  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:44.269954  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:44.270023  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:44.302994  358357 cri.go:89] found id: ""
	I1205 21:45:44.303034  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.303047  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:44.303056  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:44.303126  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:44.337575  358357 cri.go:89] found id: ""
	I1205 21:45:44.337604  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.337613  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:44.337620  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:44.337674  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:44.374554  358357 cri.go:89] found id: ""
	I1205 21:45:44.374591  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.374600  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:44.374605  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:44.374671  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:44.409965  358357 cri.go:89] found id: ""
	I1205 21:45:44.410001  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.410013  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:44.410021  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:44.410090  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:44.446583  358357 cri.go:89] found id: ""
	I1205 21:45:44.446620  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.446633  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:44.446641  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:44.446705  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:44.481187  358357 cri.go:89] found id: ""
	I1205 21:45:44.481223  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.481239  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:44.481248  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:44.481315  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:44.515729  358357 cri.go:89] found id: ""
	I1205 21:45:44.515761  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.515770  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:44.515781  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:44.515799  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:44.567266  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:44.567314  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:44.581186  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:44.581219  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:44.655377  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:44.655404  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:44.655420  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:44.741789  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:44.741835  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:44.108015  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:46.109878  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:42.987144  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:45.484635  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:45.900943  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:48.399795  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:47.283721  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:47.296771  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:47.296839  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:47.330892  358357 cri.go:89] found id: ""
	I1205 21:45:47.330927  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.330941  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:47.330949  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:47.331015  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:47.362771  358357 cri.go:89] found id: ""
	I1205 21:45:47.362805  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.362818  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:47.362826  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:47.362898  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:47.397052  358357 cri.go:89] found id: ""
	I1205 21:45:47.397082  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.397092  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:47.397100  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:47.397172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:47.430155  358357 cri.go:89] found id: ""
	I1205 21:45:47.430184  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.430193  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:47.430199  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:47.430255  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:47.465183  358357 cri.go:89] found id: ""
	I1205 21:45:47.465230  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.465244  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:47.465252  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:47.465327  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:47.505432  358357 cri.go:89] found id: ""
	I1205 21:45:47.505467  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.505479  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:47.505487  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:47.505583  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:47.538813  358357 cri.go:89] found id: ""
	I1205 21:45:47.538841  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.538851  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:47.538859  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:47.538913  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:47.577554  358357 cri.go:89] found id: ""
	I1205 21:45:47.577589  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.577598  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:47.577610  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:47.577623  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:47.633652  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:47.633700  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:47.648242  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:47.648291  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:47.723335  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:47.723369  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:47.723387  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:47.806404  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:47.806454  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:50.348134  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:50.361273  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:50.361367  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:50.393942  358357 cri.go:89] found id: ""
	I1205 21:45:50.393972  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.393980  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:50.393986  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:50.394054  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:50.430835  358357 cri.go:89] found id: ""
	I1205 21:45:50.430873  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.430884  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:50.430892  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:50.430963  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:50.465245  358357 cri.go:89] found id: ""
	I1205 21:45:50.465303  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.465316  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:50.465326  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:50.465397  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:50.498370  358357 cri.go:89] found id: ""
	I1205 21:45:50.498396  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.498406  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:50.498414  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:50.498480  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:50.530194  358357 cri.go:89] found id: ""
	I1205 21:45:50.530233  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.530247  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:50.530262  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:50.530383  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:48.607163  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:50.608353  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:47.984724  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:50.483783  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:52.484838  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:50.400860  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:52.898957  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:54.399893  357912 pod_ready.go:82] duration metric: took 4m0.00693537s for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	E1205 21:45:54.399922  357912 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 21:45:54.399931  357912 pod_ready.go:39] duration metric: took 4m6.388856223s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:45:54.399958  357912 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:45:54.399994  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:54.400045  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:54.436650  357912 cri.go:89] found id: "079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:54.436679  357912 cri.go:89] found id: ""
	I1205 21:45:54.436690  357912 logs.go:282] 1 containers: [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828]
	I1205 21:45:54.436751  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.440795  357912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:54.440866  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:54.475714  357912 cri.go:89] found id: "035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:54.475739  357912 cri.go:89] found id: ""
	I1205 21:45:54.475749  357912 logs.go:282] 1 containers: [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7]
	I1205 21:45:54.475879  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.480165  357912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:54.480255  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:54.516427  357912 cri.go:89] found id: "d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:54.516459  357912 cri.go:89] found id: ""
	I1205 21:45:54.516468  357912 logs.go:282] 1 containers: [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6]
	I1205 21:45:54.516529  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.520486  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:54.520548  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:54.555687  357912 cri.go:89] found id: "c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:54.555719  357912 cri.go:89] found id: ""
	I1205 21:45:54.555727  357912 logs.go:282] 1 containers: [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5]
	I1205 21:45:54.555789  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.559827  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:54.559916  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:54.596640  357912 cri.go:89] found id: "963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:54.596665  357912 cri.go:89] found id: ""
	I1205 21:45:54.596675  357912 logs.go:282] 1 containers: [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d]
	I1205 21:45:54.596753  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.601144  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:54.601229  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:54.639374  357912 cri.go:89] found id: "807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:54.639408  357912 cri.go:89] found id: ""
	I1205 21:45:54.639419  357912 logs.go:282] 1 containers: [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a]
	I1205 21:45:54.639495  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.643665  357912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:54.643754  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:54.678252  357912 cri.go:89] found id: ""
	I1205 21:45:54.678286  357912 logs.go:282] 0 containers: []
	W1205 21:45:54.678297  357912 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:54.678306  357912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 21:45:54.678373  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 21:45:54.711874  357912 cri.go:89] found id: "7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:54.711908  357912 cri.go:89] found id: "37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:54.711915  357912 cri.go:89] found id: ""
	I1205 21:45:54.711925  357912 logs.go:282] 2 containers: [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa]
	I1205 21:45:54.711994  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.716164  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.720244  357912 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:54.720274  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:55.258307  357912 logs.go:123] Gathering logs for container status ...
	I1205 21:45:55.258372  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:55.300132  357912 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:55.300198  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:55.315703  357912 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:55.315745  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:45:50.567181  358357 cri.go:89] found id: ""
	I1205 21:45:50.567216  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.567229  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:50.567237  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:50.567329  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:50.600345  358357 cri.go:89] found id: ""
	I1205 21:45:50.600376  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.600385  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:50.600392  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:50.600446  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:50.635072  358357 cri.go:89] found id: ""
	I1205 21:45:50.635108  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.635121  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:50.635133  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:50.635146  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:50.702977  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:50.703001  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:50.703020  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:50.785033  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:50.785077  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:50.825173  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:50.825214  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:50.876664  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:50.876723  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:53.391161  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:53.405635  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:53.405713  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:53.440319  358357 cri.go:89] found id: ""
	I1205 21:45:53.440358  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.440371  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:53.440380  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:53.440446  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:53.480169  358357 cri.go:89] found id: ""
	I1205 21:45:53.480195  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.480204  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:53.480210  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:53.480355  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:53.515202  358357 cri.go:89] found id: ""
	I1205 21:45:53.515233  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.515315  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:53.515332  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:53.515401  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:53.552351  358357 cri.go:89] found id: ""
	I1205 21:45:53.552388  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.552402  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:53.552411  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:53.552481  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:53.590669  358357 cri.go:89] found id: ""
	I1205 21:45:53.590705  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.590717  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:53.590726  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:53.590791  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:53.627977  358357 cri.go:89] found id: ""
	I1205 21:45:53.628015  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.628029  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:53.628037  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:53.628112  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:53.662711  358357 cri.go:89] found id: ""
	I1205 21:45:53.662745  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.662761  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:53.662769  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:53.662839  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:53.696925  358357 cri.go:89] found id: ""
	I1205 21:45:53.696965  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.696976  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:53.696988  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:53.697012  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:53.750924  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:53.750970  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:53.763965  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:53.763997  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:53.832335  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:53.832361  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:53.832377  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:53.915961  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:53.916011  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:53.107436  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:55.107826  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:57.108330  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:56.456367  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:56.469503  358357 kubeadm.go:597] duration metric: took 4m2.564660353s to restartPrimaryControlPlane
	W1205 21:45:56.469630  358357 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 21:45:56.469672  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:45:56.934079  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:45:56.948092  358357 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:45:56.958166  358357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:45:56.967591  358357 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:45:56.967613  358357 kubeadm.go:157] found existing configuration files:
	
	I1205 21:45:56.967660  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:45:56.977085  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:45:56.977152  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:45:56.987395  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:45:56.996675  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:45:56.996764  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:45:57.010323  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:45:57.020441  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:45:57.020514  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:45:57.032114  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:45:57.042012  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:45:57.042095  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
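(Editorial note, not part of the captured log.) The four grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes is deleted when it does not reference the expected control-plane endpoint (here the grep fails because the files are already absent, so the rm is a no-op). A condensed by-hand sketch of the same loop, run on the node, illustrative only:

    # remove any kubeconfig that no longer points at the expected control-plane endpoint
    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done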
	I1205 21:45:57.051763  358357 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:45:57.126716  358357 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 21:45:57.126840  358357 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:45:57.265491  358357 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:45:57.265694  358357 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:45:57.265856  358357 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 21:45:57.450377  358357 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:45:54.486224  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:56.984442  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:57.452240  358357 out.go:235]   - Generating certificates and keys ...
	I1205 21:45:57.452361  358357 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:45:57.452458  358357 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:45:57.452625  358357 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:45:57.452712  358357 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:45:57.452824  358357 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:45:57.452913  358357 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:45:57.453084  358357 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:45:57.453179  358357 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:45:57.453276  358357 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:45:57.453343  358357 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:45:57.453377  358357 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:45:57.453430  358357 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:45:57.872211  358357 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:45:58.085006  358357 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:45:58.165194  358357 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:45:58.323597  358357 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:45:58.338715  358357 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:45:58.340504  358357 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:45:58.340604  358357 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:45:58.479241  358357 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:45:55.429307  357912 logs.go:123] Gathering logs for etcd [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7] ...
	I1205 21:45:55.429346  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:55.476044  357912 logs.go:123] Gathering logs for kube-proxy [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d] ...
	I1205 21:45:55.476085  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:55.512956  357912 logs.go:123] Gathering logs for kube-controller-manager [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a] ...
	I1205 21:45:55.513004  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:55.570534  357912 logs.go:123] Gathering logs for storage-provisioner [37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa] ...
	I1205 21:45:55.570583  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:55.608099  357912 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:55.608141  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:55.677021  357912 logs.go:123] Gathering logs for kube-apiserver [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828] ...
	I1205 21:45:55.677069  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:55.727298  357912 logs.go:123] Gathering logs for coredns [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6] ...
	I1205 21:45:55.727347  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:55.764637  357912 logs.go:123] Gathering logs for kube-scheduler [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5] ...
	I1205 21:45:55.764675  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:55.803471  357912 logs.go:123] Gathering logs for storage-provisioner [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4] ...
	I1205 21:45:55.803513  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:58.347406  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:58.362574  357912 api_server.go:72] duration metric: took 4m18.075855986s to wait for apiserver process to appear ...
	I1205 21:45:58.362609  357912 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:45:58.362658  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:58.362724  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:58.407526  357912 cri.go:89] found id: "079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:58.407559  357912 cri.go:89] found id: ""
	I1205 21:45:58.407571  357912 logs.go:282] 1 containers: [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828]
	I1205 21:45:58.407642  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.412133  357912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:58.412221  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:58.454243  357912 cri.go:89] found id: "035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:58.454280  357912 cri.go:89] found id: ""
	I1205 21:45:58.454292  357912 logs.go:282] 1 containers: [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7]
	I1205 21:45:58.454381  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.458950  357912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:58.459038  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:58.502502  357912 cri.go:89] found id: "d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:58.502527  357912 cri.go:89] found id: ""
	I1205 21:45:58.502535  357912 logs.go:282] 1 containers: [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6]
	I1205 21:45:58.502595  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.506926  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:58.507012  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:58.548550  357912 cri.go:89] found id: "c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:58.548587  357912 cri.go:89] found id: ""
	I1205 21:45:58.548600  357912 logs.go:282] 1 containers: [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5]
	I1205 21:45:58.548670  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.553797  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:58.553886  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:58.595353  357912 cri.go:89] found id: "963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:58.595389  357912 cri.go:89] found id: ""
	I1205 21:45:58.595401  357912 logs.go:282] 1 containers: [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d]
	I1205 21:45:58.595471  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.599759  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:58.599856  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:58.645942  357912 cri.go:89] found id: "807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:58.645979  357912 cri.go:89] found id: ""
	I1205 21:45:58.645991  357912 logs.go:282] 1 containers: [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a]
	I1205 21:45:58.646059  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.650416  357912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:58.650502  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:58.688459  357912 cri.go:89] found id: ""
	I1205 21:45:58.688491  357912 logs.go:282] 0 containers: []
	W1205 21:45:58.688504  357912 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:58.688520  357912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 21:45:58.688593  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 21:45:58.723421  357912 cri.go:89] found id: "7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:58.723454  357912 cri.go:89] found id: "37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:58.723461  357912 cri.go:89] found id: ""
	I1205 21:45:58.723471  357912 logs.go:282] 2 containers: [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa]
	I1205 21:45:58.723539  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.728441  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.732583  357912 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:58.732610  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:45:58.843724  357912 logs.go:123] Gathering logs for kube-apiserver [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828] ...
	I1205 21:45:58.843765  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:58.887836  357912 logs.go:123] Gathering logs for etcd [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7] ...
	I1205 21:45:58.887879  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:58.932909  357912 logs.go:123] Gathering logs for storage-provisioner [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4] ...
	I1205 21:45:58.932951  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:58.967559  357912 logs.go:123] Gathering logs for container status ...
	I1205 21:45:58.967613  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:59.006895  357912 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:59.006939  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:59.446512  357912 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:59.446573  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:59.518754  357912 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:59.518807  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:59.533621  357912 logs.go:123] Gathering logs for coredns [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6] ...
	I1205 21:45:59.533656  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:59.569589  357912 logs.go:123] Gathering logs for kube-scheduler [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5] ...
	I1205 21:45:59.569630  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:59.606973  357912 logs.go:123] Gathering logs for kube-proxy [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d] ...
	I1205 21:45:59.607028  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:59.651826  357912 logs.go:123] Gathering logs for kube-controller-manager [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a] ...
	I1205 21:45:59.651862  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:59.712309  357912 logs.go:123] Gathering logs for storage-provisioner [37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa] ...
	I1205 21:45:59.712353  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:58.480831  358357 out.go:235]   - Booting up control plane ...
	I1205 21:45:58.480991  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:45:58.495549  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:45:58.497073  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:45:58.498469  358357 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:45:58.501265  358357 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 21:45:59.112080  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:01.608016  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:58.985164  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:01.485724  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:02.247604  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:46:02.253579  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 200:
	ok
	I1205 21:46:02.254645  357912 api_server.go:141] control plane version: v1.31.2
	I1205 21:46:02.254674  357912 api_server.go:131] duration metric: took 3.892057076s to wait for apiserver health ...
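(Editorial note, not part of the captured log.) The healthz probe above can be reproduced by hand against the endpoint recorded in the log. Illustrative only; `-k` skips TLS verification, whereas minikube's own client trusts the cluster CA instead:

    curl -k https://192.168.39.106:8444/healthz
    # expected response body: ok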
	I1205 21:46:02.254685  357912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:46:02.254718  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:46:02.254784  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:46:02.292102  357912 cri.go:89] found id: "079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:46:02.292133  357912 cri.go:89] found id: ""
	I1205 21:46:02.292143  357912 logs.go:282] 1 containers: [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828]
	I1205 21:46:02.292210  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.297421  357912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:46:02.297522  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:46:02.333140  357912 cri.go:89] found id: "035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:46:02.333172  357912 cri.go:89] found id: ""
	I1205 21:46:02.333184  357912 logs.go:282] 1 containers: [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7]
	I1205 21:46:02.333258  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.337789  357912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:46:02.337870  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:46:02.374302  357912 cri.go:89] found id: "d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:46:02.374332  357912 cri.go:89] found id: ""
	I1205 21:46:02.374344  357912 logs.go:282] 1 containers: [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6]
	I1205 21:46:02.374411  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.378635  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:46:02.378704  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:46:02.415899  357912 cri.go:89] found id: "c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:46:02.415932  357912 cri.go:89] found id: ""
	I1205 21:46:02.415944  357912 logs.go:282] 1 containers: [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5]
	I1205 21:46:02.416010  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.421097  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:46:02.421180  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:46:02.457483  357912 cri.go:89] found id: "963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:46:02.457514  357912 cri.go:89] found id: ""
	I1205 21:46:02.457534  357912 logs.go:282] 1 containers: [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d]
	I1205 21:46:02.457606  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.462215  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:46:02.462307  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:46:02.499576  357912 cri.go:89] found id: "807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:46:02.499603  357912 cri.go:89] found id: ""
	I1205 21:46:02.499612  357912 logs.go:282] 1 containers: [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a]
	I1205 21:46:02.499681  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.504262  357912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:46:02.504341  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:46:02.539612  357912 cri.go:89] found id: ""
	I1205 21:46:02.539649  357912 logs.go:282] 0 containers: []
	W1205 21:46:02.539661  357912 logs.go:284] No container was found matching "kindnet"
	I1205 21:46:02.539668  357912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 21:46:02.539740  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 21:46:02.576436  357912 cri.go:89] found id: "7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:46:02.576464  357912 cri.go:89] found id: "37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:46:02.576468  357912 cri.go:89] found id: ""
	I1205 21:46:02.576477  357912 logs.go:282] 2 containers: [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa]
	I1205 21:46:02.576546  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.580650  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.584677  357912 logs.go:123] Gathering logs for kube-controller-manager [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a] ...
	I1205 21:46:02.584717  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:46:02.638712  357912 logs.go:123] Gathering logs for storage-provisioner [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4] ...
	I1205 21:46:02.638753  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:46:02.677464  357912 logs.go:123] Gathering logs for storage-provisioner [37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa] ...
	I1205 21:46:02.677501  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:46:02.718014  357912 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:46:02.718049  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:46:02.828314  357912 logs.go:123] Gathering logs for kube-apiserver [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828] ...
	I1205 21:46:02.828360  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:46:02.881584  357912 logs.go:123] Gathering logs for etcd [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7] ...
	I1205 21:46:02.881629  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:46:02.928082  357912 logs.go:123] Gathering logs for coredns [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6] ...
	I1205 21:46:02.928120  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:46:02.963962  357912 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:46:02.963997  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:46:03.347451  357912 logs.go:123] Gathering logs for container status ...
	I1205 21:46:03.347501  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:46:03.389942  357912 logs.go:123] Gathering logs for kubelet ...
	I1205 21:46:03.389991  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:46:03.459121  357912 logs.go:123] Gathering logs for dmesg ...
	I1205 21:46:03.459168  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:46:03.480556  357912 logs.go:123] Gathering logs for kube-scheduler [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5] ...
	I1205 21:46:03.480592  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:46:03.519661  357912 logs.go:123] Gathering logs for kube-proxy [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d] ...
	I1205 21:46:03.519699  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:46:06.063263  357912 system_pods.go:59] 8 kube-system pods found
	I1205 21:46:06.063309  357912 system_pods.go:61] "coredns-7c65d6cfc9-mll8z" [fcea0826-1093-43ce-87d0-26fb19447609] Running
	I1205 21:46:06.063317  357912 system_pods.go:61] "etcd-default-k8s-diff-port-751353" [dbade41d-2f53-45d5-aeb6-9b7df2565ef6] Running
	I1205 21:46:06.063327  357912 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751353" [b94221d1-ab02-4406-9f34-156813ddfd4b] Running
	I1205 21:46:06.063334  357912 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751353" [70669ec2-cddf-4ec9-a3d6-ee7cab8cd75c] Running
	I1205 21:46:06.063338  357912 system_pods.go:61] "kube-proxy-b4ws4" [d2620959-e3e4-4575-af26-243207a83495] Running
	I1205 21:46:06.063344  357912 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751353" [4a519b64-0ddd-425c-a8d6-de52e3508f80] Running
	I1205 21:46:06.063352  357912 system_pods.go:61] "metrics-server-6867b74b74-xb867" [6ac4cc31-ed56-44b9-9a83-76296436bc34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:46:06.063358  357912 system_pods.go:61] "storage-provisioner" [aabf9cc9-c416-4db2-97b0-23533dd76c28] Running
	I1205 21:46:06.063369  357912 system_pods.go:74] duration metric: took 3.808675994s to wait for pod list to return data ...
	I1205 21:46:06.063380  357912 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:46:06.066095  357912 default_sa.go:45] found service account: "default"
	I1205 21:46:06.066120  357912 default_sa.go:55] duration metric: took 2.733262ms for default service account to be created ...
	I1205 21:46:06.066128  357912 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:46:06.070476  357912 system_pods.go:86] 8 kube-system pods found
	I1205 21:46:06.070503  357912 system_pods.go:89] "coredns-7c65d6cfc9-mll8z" [fcea0826-1093-43ce-87d0-26fb19447609] Running
	I1205 21:46:06.070509  357912 system_pods.go:89] "etcd-default-k8s-diff-port-751353" [dbade41d-2f53-45d5-aeb6-9b7df2565ef6] Running
	I1205 21:46:06.070513  357912 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-751353" [b94221d1-ab02-4406-9f34-156813ddfd4b] Running
	I1205 21:46:06.070516  357912 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-751353" [70669ec2-cddf-4ec9-a3d6-ee7cab8cd75c] Running
	I1205 21:46:06.070520  357912 system_pods.go:89] "kube-proxy-b4ws4" [d2620959-e3e4-4575-af26-243207a83495] Running
	I1205 21:46:06.070523  357912 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-751353" [4a519b64-0ddd-425c-a8d6-de52e3508f80] Running
	I1205 21:46:06.070531  357912 system_pods.go:89] "metrics-server-6867b74b74-xb867" [6ac4cc31-ed56-44b9-9a83-76296436bc34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:46:06.070536  357912 system_pods.go:89] "storage-provisioner" [aabf9cc9-c416-4db2-97b0-23533dd76c28] Running
	I1205 21:46:06.070544  357912 system_pods.go:126] duration metric: took 4.410448ms to wait for k8s-apps to be running ...
	I1205 21:46:06.070553  357912 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:46:06.070614  357912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:46:06.085740  357912 system_svc.go:56] duration metric: took 15.17952ms WaitForService to wait for kubelet
	I1205 21:46:06.085771  357912 kubeadm.go:582] duration metric: took 4m25.799061755s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:46:06.085796  357912 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:46:06.088851  357912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:46:06.088873  357912 node_conditions.go:123] node cpu capacity is 2
	I1205 21:46:06.088887  357912 node_conditions.go:105] duration metric: took 3.087287ms to run NodePressure ...
	I1205 21:46:06.088900  357912 start.go:241] waiting for startup goroutines ...
	I1205 21:46:06.088906  357912 start.go:246] waiting for cluster config update ...
	I1205 21:46:06.088919  357912 start.go:255] writing updated cluster config ...
	I1205 21:46:06.089253  357912 ssh_runner.go:195] Run: rm -f paused
	I1205 21:46:06.141619  357912 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:46:06.143538  357912 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-751353" cluster and "default" namespace by default
	I1205 21:46:04.108628  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:06.108805  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:03.987070  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:06.484360  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:08.608534  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:11.107516  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:08.485291  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:10.984391  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:13.108040  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:15.607861  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:13.484442  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:15.484501  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:17.478619  357831 pod_ready.go:82] duration metric: took 4m0.00079651s for pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace to be "Ready" ...
	E1205 21:46:17.478648  357831 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 21:46:17.478669  357831 pod_ready.go:39] duration metric: took 4m12.054745084s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:46:17.478700  357831 kubeadm.go:597] duration metric: took 4m55.174067413s to restartPrimaryControlPlane
	W1205 21:46:17.478757  357831 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 21:46:17.478794  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:46:17.608486  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:20.107816  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:22.108413  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:24.608157  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:27.109329  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:29.608127  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:30.101360  357296 pod_ready.go:82] duration metric: took 4m0.000121506s for pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace to be "Ready" ...
	E1205 21:46:30.101395  357296 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 21:46:30.101417  357296 pod_ready.go:39] duration metric: took 4m9.523665884s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:46:30.101449  357296 kubeadm.go:597] duration metric: took 4m18.570527556s to restartPrimaryControlPlane
	W1205 21:46:30.101510  357296 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 21:46:30.101539  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:46:38.501720  358357 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 21:46:38.502250  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:46:38.502440  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
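(Editorial note, not part of the captured log.) When kubeadm's kubelet-check reports the kubelet isn't running or healthy, as above, the usual next step is to inspect the service on the node directly. A minimal by-hand sketch, illustrative only:

    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 100 --no-pager
    curl -sSL http://localhost:10248/healthz   # the same probe the kubelet-check runs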
	I1205 21:46:43.619373  357831 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.140547336s)
	I1205 21:46:43.619459  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:46:43.641806  357831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:46:43.655964  357831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:46:43.669647  357831 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:46:43.669670  357831 kubeadm.go:157] found existing configuration files:
	
	I1205 21:46:43.669718  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:46:43.681685  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:46:43.681774  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:46:43.700247  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:46:43.718376  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:46:43.718464  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:46:43.736153  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:46:43.746027  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:46:43.746101  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:46:43.756294  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:46:43.765644  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:46:43.765723  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:46:43.776011  357831 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:46:43.821666  357831 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 21:46:43.821773  357831 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:46:43.915091  357831 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:46:43.915226  357831 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:46:43.915356  357831 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 21:46:43.923305  357831 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:46:43.924984  357831 out.go:235]   - Generating certificates and keys ...
	I1205 21:46:43.925071  357831 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:46:43.925133  357831 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:46:43.925211  357831 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:46:43.925298  357831 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:46:43.925410  357831 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:46:43.925490  357831 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:46:43.925585  357831 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:46:43.925687  357831 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:46:43.925806  357831 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:46:43.925915  357831 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:46:43.925978  357831 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:46:43.926051  357831 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:46:44.035421  357831 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:46:44.451260  357831 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 21:46:44.816773  357831 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:46:44.923048  357831 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:46:45.045983  357831 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:46:45.046651  357831 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:46:45.049375  357831 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:46:43.502826  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:46:43.503045  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:46:45.051123  357831 out.go:235]   - Booting up control plane ...
	I1205 21:46:45.051270  357831 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:46:45.051407  357831 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:46:45.051498  357831 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:46:45.069011  357831 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:46:45.075630  357831 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:46:45.075703  357831 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:46:45.207048  357831 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 21:46:45.207215  357831 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 21:46:46.208858  357831 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001818315s
	I1205 21:46:46.208985  357831 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 21:46:50.711424  357831 kubeadm.go:310] [api-check] The API server is healthy after 4.502481614s
	I1205 21:46:50.725080  357831 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 21:46:50.745839  357831 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 21:46:50.774902  357831 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 21:46:50.775169  357831 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-500648 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 21:46:50.795250  357831 kubeadm.go:310] [bootstrap-token] Using token: o2vi7b.yhkmrcpvplzqpha9
	I1205 21:46:50.796742  357831 out.go:235]   - Configuring RBAC rules ...
	I1205 21:46:50.796960  357831 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 21:46:50.804445  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 21:46:50.818218  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 21:46:50.823638  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 21:46:50.827946  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 21:46:50.832291  357831 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 21:46:51.119777  357831 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 21:46:51.563750  357831 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 21:46:52.124884  357831 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 21:46:52.124922  357831 kubeadm.go:310] 
	I1205 21:46:52.125000  357831 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 21:46:52.125010  357831 kubeadm.go:310] 
	I1205 21:46:52.125089  357831 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 21:46:52.125099  357831 kubeadm.go:310] 
	I1205 21:46:52.125132  357831 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 21:46:52.125208  357831 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 21:46:52.125321  357831 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 21:46:52.125343  357831 kubeadm.go:310] 
	I1205 21:46:52.125447  357831 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 21:46:52.125475  357831 kubeadm.go:310] 
	I1205 21:46:52.125547  357831 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 21:46:52.125559  357831 kubeadm.go:310] 
	I1205 21:46:52.125641  357831 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 21:46:52.125734  357831 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 21:46:52.125806  357831 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 21:46:52.125814  357831 kubeadm.go:310] 
	I1205 21:46:52.125887  357831 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 21:46:52.126025  357831 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 21:46:52.126039  357831 kubeadm.go:310] 
	I1205 21:46:52.126132  357831 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token o2vi7b.yhkmrcpvplzqpha9 \
	I1205 21:46:52.126230  357831 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 \
	I1205 21:46:52.126254  357831 kubeadm.go:310] 	--control-plane 
	I1205 21:46:52.126269  357831 kubeadm.go:310] 
	I1205 21:46:52.126406  357831 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 21:46:52.126437  357831 kubeadm.go:310] 
	I1205 21:46:52.126524  357831 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token o2vi7b.yhkmrcpvplzqpha9 \
	I1205 21:46:52.126615  357831 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 
	I1205 21:46:52.127299  357831 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
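A brief aside (not part of the captured log): the join command kubeadm prints above embeds a short-lived bootstrap token and the discovery CA certificate hash. As a hedged illustration only, on the control-plane node the same information can normally be listed or regenerated with kubeadm's standard tooling; the values shown in this run come from the log, nothing else is implied:

    # hedged example; standard kubeadm CLI, not taken from this test run
    sudo kubeadm token list
    sudo kubeadm token create --print-join-command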
	I1205 21:46:52.127360  357831 cni.go:84] Creating CNI manager for ""
	I1205 21:46:52.127380  357831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:46:52.130084  357831 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:46:52.131504  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:46:52.142489  357831 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
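For context (again, not part of the recorded log): the 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is a bridge CNI configuration. The sketch below is a generic example of the standard CNI "bridge" + "host-local" + "portmap" plugin chain, not the exact file minikube writes; the network name and the 10.244.0.0/16 subnet are placeholder assumptions:

    # hedged sketch of a minimal bridge CNI conflist (placeholder name/subnet)
    sudo mkdir -p /etc/cni/net.d
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.244.0.0/16",
            "routes": [ { "dst": "0.0.0.0/0" } ]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF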
	I1205 21:46:52.165689  357831 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:46:52.165813  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:52.165817  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-500648 minikube.k8s.io/updated_at=2024_12_05T21_46_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=no-preload-500648 minikube.k8s.io/primary=true
	I1205 21:46:52.194084  357831 ops.go:34] apiserver oom_adj: -16
	I1205 21:46:52.342692  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:52.843802  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:53.503222  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:46:53.503418  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:46:53.342932  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:53.843712  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:54.343785  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:54.843090  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:55.342889  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:55.843250  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:56.343676  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:56.452001  357831 kubeadm.go:1113] duration metric: took 4.286277257s to wait for elevateKubeSystemPrivileges
	I1205 21:46:56.452048  357831 kubeadm.go:394] duration metric: took 5m34.195010212s to StartCluster
	I1205 21:46:56.452076  357831 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:46:56.452204  357831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:46:56.454793  357831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:46:56.455206  357831 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:46:56.455333  357831 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:46:56.455476  357831 addons.go:69] Setting storage-provisioner=true in profile "no-preload-500648"
	I1205 21:46:56.455480  357831 config.go:182] Loaded profile config "no-preload-500648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:46:56.455502  357831 addons.go:234] Setting addon storage-provisioner=true in "no-preload-500648"
	W1205 21:46:56.455514  357831 addons.go:243] addon storage-provisioner should already be in state true
	I1205 21:46:56.455528  357831 addons.go:69] Setting default-storageclass=true in profile "no-preload-500648"
	I1205 21:46:56.455559  357831 host.go:66] Checking if "no-preload-500648" exists ...
	I1205 21:46:56.455544  357831 addons.go:69] Setting metrics-server=true in profile "no-preload-500648"
	I1205 21:46:56.455585  357831 addons.go:234] Setting addon metrics-server=true in "no-preload-500648"
	W1205 21:46:56.455599  357831 addons.go:243] addon metrics-server should already be in state true
	I1205 21:46:56.455646  357831 host.go:66] Checking if "no-preload-500648" exists ...
	I1205 21:46:56.455564  357831 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-500648"
	I1205 21:46:56.456041  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.456085  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.456090  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.456129  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.456139  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.456201  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.456945  357831 out.go:177] * Verifying Kubernetes components...
	I1205 21:46:56.462035  357831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:46:56.474102  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35145
	I1205 21:46:56.474771  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.475414  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.475442  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.475459  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36489
	I1205 21:46:56.475974  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.476137  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.476569  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.476612  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.476693  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.476706  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.477058  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.477252  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.477388  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36469
	I1205 21:46:56.477924  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.478472  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.478498  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.478910  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.479488  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.479537  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.481716  357831 addons.go:234] Setting addon default-storageclass=true in "no-preload-500648"
	W1205 21:46:56.481735  357831 addons.go:243] addon default-storageclass should already be in state true
	I1205 21:46:56.481768  357831 host.go:66] Checking if "no-preload-500648" exists ...
	I1205 21:46:56.482186  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.482241  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.497613  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
	I1205 21:46:56.499026  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.500026  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.500053  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.501992  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.502774  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.503014  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37339
	I1205 21:46:56.503560  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.504199  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.504220  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.504720  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.504930  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.506107  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:46:56.506961  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:46:56.508481  357831 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:46:56.509688  357831 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 21:46:56.428849  357296 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.327265456s)
	I1205 21:46:56.428959  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:46:56.445569  357296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:46:56.458431  357296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:46:56.478171  357296 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:46:56.478202  357296 kubeadm.go:157] found existing configuration files:
	
	I1205 21:46:56.478252  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:46:56.492246  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:46:56.492317  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:46:56.511252  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:46:56.529865  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:46:56.529993  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:46:56.542465  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:46:56.554125  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:46:56.554201  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:46:56.564805  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:46:56.574418  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:46:56.574509  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:46:56.587684  357296 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:46:56.643896  357296 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 21:46:56.643994  357296 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:46:56.758721  357296 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:46:56.758878  357296 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:46:56.759002  357296 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 21:46:56.770017  357296 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:46:56.771897  357296 out.go:235]   - Generating certificates and keys ...
	I1205 21:46:56.772014  357296 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:46:56.772097  357296 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:46:56.772211  357296 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:46:56.772312  357296 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:46:56.772411  357296 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:46:56.772485  357296 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:46:56.772569  357296 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:46:56.772701  357296 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:46:56.772839  357296 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:46:56.772978  357296 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:46:56.773044  357296 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:46:56.773122  357296 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:46:57.097605  357296 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:46:57.252307  357296 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 21:46:56.510816  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41335
	I1205 21:46:56.511503  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.511959  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.511975  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.512788  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.513412  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.513449  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.514695  357831 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:46:56.514710  357831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 21:46:56.514728  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:46:56.515562  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 21:46:56.515580  357831 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 21:46:56.515606  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:46:56.519790  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.520365  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.521033  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:46:56.521059  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.521366  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:46:56.521709  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:46:56.522251  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:46:56.522340  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:46:56.522357  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.522563  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:46:56.523091  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:46:56.523374  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:46:56.523546  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:46:56.523751  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:46:56.535368  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I1205 21:46:56.535890  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.536613  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.536640  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.537046  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.537264  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.539328  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:46:56.539566  357831 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 21:46:56.539582  357831 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 21:46:56.539601  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:46:56.543910  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.544687  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:46:56.544721  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.544779  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:46:56.544991  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:46:56.545101  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:46:56.545227  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:46:56.703959  357831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:46:56.727549  357831 node_ready.go:35] waiting up to 6m0s for node "no-preload-500648" to be "Ready" ...
	I1205 21:46:56.782087  357831 node_ready.go:49] node "no-preload-500648" has status "Ready":"True"
	I1205 21:46:56.782124  357831 node_ready.go:38] duration metric: took 54.531096ms for node "no-preload-500648" to be "Ready" ...
	I1205 21:46:56.782138  357831 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:46:56.826592  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 21:46:56.826630  357831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 21:46:56.828646  357831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 21:46:56.829857  357831 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace to be "Ready" ...
	I1205 21:46:56.866720  357831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:46:56.903318  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 21:46:56.903355  357831 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 21:46:57.007535  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:46:57.007573  357831 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 21:46:57.100723  357831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:46:57.134239  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.134279  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.134710  357831 main.go:141] libmachine: (no-preload-500648) DBG | Closing plugin on server side
	I1205 21:46:57.134711  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.134770  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.134785  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.134793  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.135032  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.135053  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.146695  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.146730  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.147103  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.147154  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.625311  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.625353  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.625696  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.625755  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.625793  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.625805  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.625698  357831 main.go:141] libmachine: (no-preload-500648) DBG | Closing plugin on server side
	I1205 21:46:57.626115  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.626144  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.907526  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.907557  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.907895  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.907911  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.907920  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.907927  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.908170  357831 main.go:141] libmachine: (no-preload-500648) DBG | Closing plugin on server side
	I1205 21:46:57.908202  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.908235  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.908260  357831 addons.go:475] Verifying addon metrics-server=true in "no-preload-500648"
	I1205 21:46:57.909815  357831 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 21:46:57.605825  357296 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:46:57.683035  357296 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:46:57.977494  357296 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:46:57.977852  357296 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:46:57.980442  357296 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:46:57.982293  357296 out.go:235]   - Booting up control plane ...
	I1205 21:46:57.982435  357296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:46:57.982555  357296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:46:57.982745  357296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:46:58.002995  357296 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:46:58.009140  357296 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:46:58.009256  357296 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:46:58.138869  357296 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 21:46:58.139045  357296 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 21:46:58.639981  357296 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.388842ms
	I1205 21:46:58.640142  357296 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 21:46:57.911073  357831 addons.go:510] duration metric: took 1.455746374s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 21:46:58.838170  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:00.337951  357831 pod_ready.go:93] pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:00.337987  357831 pod_ready.go:82] duration metric: took 3.508095495s for pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:00.338002  357831 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:02.345422  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:03.641918  357296 kubeadm.go:310] [api-check] The API server is healthy after 5.001977261s
	I1205 21:47:03.660781  357296 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 21:47:03.675811  357296 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 21:47:03.729810  357296 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 21:47:03.730021  357296 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-425614 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 21:47:03.746963  357296 kubeadm.go:310] [bootstrap-token] Using token: b8c9g8.26tr6ftn8ovs2kwi
	I1205 21:47:03.748213  357296 out.go:235]   - Configuring RBAC rules ...
	I1205 21:47:03.748373  357296 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 21:47:03.755934  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 21:47:03.770479  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 21:47:03.775661  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 21:47:03.783490  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 21:47:03.789562  357296 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 21:47:04.049714  357296 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 21:47:04.486306  357296 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 21:47:05.053561  357296 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 21:47:05.053590  357296 kubeadm.go:310] 
	I1205 21:47:05.053708  357296 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 21:47:05.053738  357296 kubeadm.go:310] 
	I1205 21:47:05.053846  357296 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 21:47:05.053868  357296 kubeadm.go:310] 
	I1205 21:47:05.053915  357296 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 21:47:05.053997  357296 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 21:47:05.054068  357296 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 21:47:05.054078  357296 kubeadm.go:310] 
	I1205 21:47:05.054160  357296 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 21:47:05.054170  357296 kubeadm.go:310] 
	I1205 21:47:05.054239  357296 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 21:47:05.054248  357296 kubeadm.go:310] 
	I1205 21:47:05.054338  357296 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 21:47:05.054449  357296 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 21:47:05.054543  357296 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 21:47:05.054553  357296 kubeadm.go:310] 
	I1205 21:47:05.054660  357296 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 21:47:05.054796  357296 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 21:47:05.054822  357296 kubeadm.go:310] 
	I1205 21:47:05.054933  357296 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token b8c9g8.26tr6ftn8ovs2kwi \
	I1205 21:47:05.055054  357296 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 \
	I1205 21:47:05.055090  357296 kubeadm.go:310] 	--control-plane 
	I1205 21:47:05.055098  357296 kubeadm.go:310] 
	I1205 21:47:05.055194  357296 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 21:47:05.055206  357296 kubeadm.go:310] 
	I1205 21:47:05.055314  357296 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token b8c9g8.26tr6ftn8ovs2kwi \
	I1205 21:47:05.055451  357296 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 
	I1205 21:47:05.056406  357296 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:47:05.056455  357296 cni.go:84] Creating CNI manager for ""
	I1205 21:47:05.056466  357296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:47:05.058934  357296 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:47:05.060223  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:47:05.072177  357296 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 21:47:05.094496  357296 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:47:05.094587  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:05.094625  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-425614 minikube.k8s.io/updated_at=2024_12_05T21_47_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=embed-certs-425614 minikube.k8s.io/primary=true
	I1205 21:47:05.305636  357296 ops.go:34] apiserver oom_adj: -16
	I1205 21:47:05.305777  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:05.806175  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:06.306904  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:06.806069  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:07.306356  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:04.849777  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:07.345961  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:07.847289  357831 pod_ready.go:93] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.847323  357831 pod_ready.go:82] duration metric: took 7.509312906s for pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.847334  357831 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.853980  357831 pod_ready.go:93] pod "etcd-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.854016  357831 pod_ready.go:82] duration metric: took 6.672926ms for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.854030  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.861465  357831 pod_ready.go:93] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.861502  357831 pod_ready.go:82] duration metric: took 7.461726ms for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.861517  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.867007  357831 pod_ready.go:93] pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.867035  357831 pod_ready.go:82] duration metric: took 5.509386ms for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.867048  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-98xqk" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.872882  357831 pod_ready.go:93] pod "kube-proxy-98xqk" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.872917  357831 pod_ready.go:82] duration metric: took 5.859646ms for pod "kube-proxy-98xqk" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.872932  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:08.243619  357831 pod_ready.go:93] pod "kube-scheduler-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:08.243654  357831 pod_ready.go:82] duration metric: took 370.71203ms for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:08.243666  357831 pod_ready.go:39] duration metric: took 11.461510993s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:47:08.243744  357831 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:47:08.243826  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:47:08.260473  357831 api_server.go:72] duration metric: took 11.805209892s to wait for apiserver process to appear ...
	I1205 21:47:08.260511  357831 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:47:08.260538  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:47:08.264975  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 200:
	ok
	I1205 21:47:08.266178  357831 api_server.go:141] control plane version: v1.31.2
	I1205 21:47:08.266206  357831 api_server.go:131] duration metric: took 5.687994ms to wait for apiserver health ...
	I1205 21:47:08.266214  357831 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:47:08.446775  357831 system_pods.go:59] 9 kube-system pods found
	I1205 21:47:08.446811  357831 system_pods.go:61] "coredns-7c65d6cfc9-6gw87" [5551f12d-28e2-4abc-aa12-df5e94a50df9] Running
	I1205 21:47:08.446817  357831 system_pods.go:61] "coredns-7c65d6cfc9-tmd2t" [e3e98611-66c3-4647-8870-bff5ff6ec596] Running
	I1205 21:47:08.446821  357831 system_pods.go:61] "etcd-no-preload-500648" [74521d40-5021-4ced-b38c-526c57f76ef1] Running
	I1205 21:47:08.446824  357831 system_pods.go:61] "kube-apiserver-no-preload-500648" [c145b867-1112-495e-bbe4-a95582f41190] Running
	I1205 21:47:08.446828  357831 system_pods.go:61] "kube-controller-manager-no-preload-500648" [534c1c28-2a5c-411d-8d26-1636d92ed794] Running
	I1205 21:47:08.446831  357831 system_pods.go:61] "kube-proxy-98xqk" [4b383ba3-46c2-45df-9035-270593e44817] Running
	I1205 21:47:08.446834  357831 system_pods.go:61] "kube-scheduler-no-preload-500648" [7d088cd2-8ba3-4b3b-ab99-233ff13e2710] Running
	I1205 21:47:08.446841  357831 system_pods.go:61] "metrics-server-6867b74b74-ftmzl" [c541d531-1622-4528-af4c-f6147f47e8f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:08.446881  357831 system_pods.go:61] "storage-provisioner" [62bd3876-3f92-4cc1-9e07-860628e8a746] Running
	I1205 21:47:08.446887  357831 system_pods.go:74] duration metric: took 180.667886ms to wait for pod list to return data ...
	I1205 21:47:08.446895  357831 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:47:08.643352  357831 default_sa.go:45] found service account: "default"
	I1205 21:47:08.643389  357831 default_sa.go:55] duration metric: took 196.485646ms for default service account to be created ...
	I1205 21:47:08.643405  357831 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:47:08.847094  357831 system_pods.go:86] 9 kube-system pods found
	I1205 21:47:08.847129  357831 system_pods.go:89] "coredns-7c65d6cfc9-6gw87" [5551f12d-28e2-4abc-aa12-df5e94a50df9] Running
	I1205 21:47:08.847136  357831 system_pods.go:89] "coredns-7c65d6cfc9-tmd2t" [e3e98611-66c3-4647-8870-bff5ff6ec596] Running
	I1205 21:47:08.847140  357831 system_pods.go:89] "etcd-no-preload-500648" [74521d40-5021-4ced-b38c-526c57f76ef1] Running
	I1205 21:47:08.847144  357831 system_pods.go:89] "kube-apiserver-no-preload-500648" [c145b867-1112-495e-bbe4-a95582f41190] Running
	I1205 21:47:08.847147  357831 system_pods.go:89] "kube-controller-manager-no-preload-500648" [534c1c28-2a5c-411d-8d26-1636d92ed794] Running
	I1205 21:47:08.847150  357831 system_pods.go:89] "kube-proxy-98xqk" [4b383ba3-46c2-45df-9035-270593e44817] Running
	I1205 21:47:08.847153  357831 system_pods.go:89] "kube-scheduler-no-preload-500648" [7d088cd2-8ba3-4b3b-ab99-233ff13e2710] Running
	I1205 21:47:08.847162  357831 system_pods.go:89] "metrics-server-6867b74b74-ftmzl" [c541d531-1622-4528-af4c-f6147f47e8f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:08.847168  357831 system_pods.go:89] "storage-provisioner" [62bd3876-3f92-4cc1-9e07-860628e8a746] Running
	I1205 21:47:08.847181  357831 system_pods.go:126] duration metric: took 203.767291ms to wait for k8s-apps to be running ...
	I1205 21:47:08.847195  357831 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:47:08.847250  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:47:08.862597  357831 system_svc.go:56] duration metric: took 15.382518ms WaitForService to wait for kubelet
	I1205 21:47:08.862633  357831 kubeadm.go:582] duration metric: took 12.407380073s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:47:08.862656  357831 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:47:09.043731  357831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:47:09.043757  357831 node_conditions.go:123] node cpu capacity is 2
	I1205 21:47:09.043771  357831 node_conditions.go:105] duration metric: took 181.109771ms to run NodePressure ...
	I1205 21:47:09.043784  357831 start.go:241] waiting for startup goroutines ...
	I1205 21:47:09.043791  357831 start.go:246] waiting for cluster config update ...
	I1205 21:47:09.043800  357831 start.go:255] writing updated cluster config ...
	I1205 21:47:09.044059  357831 ssh_runner.go:195] Run: rm -f paused
	I1205 21:47:09.097126  357831 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:47:09.098929  357831 out.go:177] * Done! kubectl is now configured to use "no-preload-500648" cluster and "default" namespace by default
	I1205 21:47:07.806545  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:08.306666  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:08.806027  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:09.306632  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:09.463654  357296 kubeadm.go:1113] duration metric: took 4.369155567s to wait for elevateKubeSystemPrivileges
	I1205 21:47:09.463693  357296 kubeadm.go:394] duration metric: took 4m57.985307568s to StartCluster
	I1205 21:47:09.463727  357296 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:47:09.463823  357296 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:47:09.465989  357296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:47:09.466324  357296 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:47:09.466538  357296 config.go:182] Loaded profile config "embed-certs-425614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:47:09.466462  357296 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:47:09.466593  357296 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-425614"
	I1205 21:47:09.466605  357296 addons.go:69] Setting default-storageclass=true in profile "embed-certs-425614"
	I1205 21:47:09.466623  357296 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-425614"
	I1205 21:47:09.466625  357296 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-425614"
	W1205 21:47:09.466632  357296 addons.go:243] addon storage-provisioner should already be in state true
	I1205 21:47:09.466670  357296 host.go:66] Checking if "embed-certs-425614" exists ...
	I1205 21:47:09.466598  357296 addons.go:69] Setting metrics-server=true in profile "embed-certs-425614"
	I1205 21:47:09.466700  357296 addons.go:234] Setting addon metrics-server=true in "embed-certs-425614"
	W1205 21:47:09.466713  357296 addons.go:243] addon metrics-server should already be in state true
	I1205 21:47:09.466754  357296 host.go:66] Checking if "embed-certs-425614" exists ...
	I1205 21:47:09.467117  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.467136  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.467168  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.467169  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.467193  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.467287  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.468249  357296 out.go:177] * Verifying Kubernetes components...
	I1205 21:47:09.471163  357296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:47:09.485298  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36079
	I1205 21:47:09.485497  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39781
	I1205 21:47:09.485948  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.486029  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.486534  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.486563  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.486657  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.486685  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.486742  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
	I1205 21:47:09.486978  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.487032  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.487232  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.487236  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.487624  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.487674  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.487789  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.487833  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.488214  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.488851  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.488896  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.491055  357296 addons.go:234] Setting addon default-storageclass=true in "embed-certs-425614"
	W1205 21:47:09.491080  357296 addons.go:243] addon default-storageclass should already be in state true
	I1205 21:47:09.491112  357296 host.go:66] Checking if "embed-certs-425614" exists ...
	I1205 21:47:09.491489  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.491536  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.505783  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42923
	I1205 21:47:09.506685  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.507389  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.507418  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.507849  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.508072  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.509039  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44837
	I1205 21:47:09.509662  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.510051  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:47:09.510539  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.510554  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.510945  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.511175  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.512088  357296 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 21:47:09.513011  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:47:09.513375  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 21:47:09.513394  357296 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 21:47:09.513411  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:47:09.514693  357296 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:47:09.516172  357296 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:47:09.516192  357296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 21:47:09.516216  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:47:09.516960  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.517462  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:47:09.517489  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.517621  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:47:09.517830  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45697
	I1205 21:47:09.518205  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:47:09.518478  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.519298  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.519323  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.519342  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:47:09.519547  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:47:09.520304  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.521019  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.521625  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.521698  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.522476  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:47:09.522492  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.522707  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:47:09.522891  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:47:09.523193  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:47:09.523744  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:47:09.540654  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41511
	I1205 21:47:09.541226  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.541763  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.541790  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.542269  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.542512  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.544396  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:47:09.544676  357296 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 21:47:09.544693  357296 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 21:47:09.544715  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:47:09.548238  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.548523  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:47:09.548562  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.548702  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:47:09.548931  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:47:09.549113  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:47:09.549291  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:47:09.668547  357296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:47:09.687925  357296 node_ready.go:35] waiting up to 6m0s for node "embed-certs-425614" to be "Ready" ...
	I1205 21:47:09.697641  357296 node_ready.go:49] node "embed-certs-425614" has status "Ready":"True"
	I1205 21:47:09.697666  357296 node_ready.go:38] duration metric: took 9.705064ms for node "embed-certs-425614" to be "Ready" ...
	I1205 21:47:09.697675  357296 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:47:09.704768  357296 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7sjzc" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:09.753311  357296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:47:09.793855  357296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 21:47:09.799918  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 21:47:09.799943  357296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 21:47:09.845109  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 21:47:09.845140  357296 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 21:47:09.910753  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:47:09.910784  357296 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 21:47:09.965476  357296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:47:10.269090  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269126  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.269096  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269235  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.269576  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.269640  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.269641  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.269620  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.269587  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.269745  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.269758  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269770  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.269664  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269860  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.270030  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.270047  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.270058  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.270064  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.270071  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.301524  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.301550  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.301895  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.301936  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.926349  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.926377  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.926716  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.926741  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.926752  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.926761  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.926768  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.927106  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.927155  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.927166  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.927180  357296 addons.go:475] Verifying addon metrics-server=true in "embed-certs-425614"
	I1205 21:47:10.929085  357296 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1205 21:47:10.930576  357296 addons.go:510] duration metric: took 1.464128267s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1205 21:47:11.713166  357296 pod_ready.go:93] pod "coredns-7c65d6cfc9-7sjzc" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:11.713198  357296 pod_ready.go:82] duration metric: took 2.008396953s for pod "coredns-7c65d6cfc9-7sjzc" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:11.713211  357296 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:13.503828  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:47:13.504090  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:47:13.720235  357296 pod_ready.go:103] pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:15.220057  357296 pod_ready.go:93] pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:15.220088  357296 pod_ready.go:82] duration metric: took 3.506868256s for pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.220102  357296 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.225450  357296 pod_ready.go:93] pod "etcd-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:15.225477  357296 pod_ready.go:82] duration metric: took 5.36753ms for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.225487  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.231162  357296 pod_ready.go:93] pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:15.231191  357296 pod_ready.go:82] duration metric: took 5.697176ms for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.231203  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.739452  357296 pod_ready.go:93] pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:16.739480  357296 pod_ready.go:82] duration metric: took 1.508268597s for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.739490  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k2zgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.745046  357296 pod_ready.go:93] pod "kube-proxy-k2zgx" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:16.745069  357296 pod_ready.go:82] duration metric: took 5.572779ms for pod "kube-proxy-k2zgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.745077  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:18.752726  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:19.252349  357296 pod_ready.go:93] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:19.252381  357296 pod_ready.go:82] duration metric: took 2.507297045s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:19.252391  357296 pod_ready.go:39] duration metric: took 9.554704391s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:47:19.252414  357296 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:47:19.252484  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:47:19.271589  357296 api_server.go:72] duration metric: took 9.805214037s to wait for apiserver process to appear ...
	I1205 21:47:19.271628  357296 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:47:19.271659  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:47:19.276411  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 200:
	ok
	I1205 21:47:19.277872  357296 api_server.go:141] control plane version: v1.31.2
	I1205 21:47:19.277926  357296 api_server.go:131] duration metric: took 6.2875ms to wait for apiserver health ...
	I1205 21:47:19.277941  357296 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:47:19.283899  357296 system_pods.go:59] 9 kube-system pods found
	I1205 21:47:19.283931  357296 system_pods.go:61] "coredns-7c65d6cfc9-7sjzc" [9688302a-e62f-46e6-8182-4639deb5ac5a] Running
	I1205 21:47:19.283937  357296 system_pods.go:61] "coredns-7c65d6cfc9-qfwx8" [d6411440-5d63-4ea4-b1ba-58337dd6bb10] Running
	I1205 21:47:19.283940  357296 system_pods.go:61] "etcd-embed-certs-425614" [2f0ed9d7-d48b-4d68-96bb-5e3f6de80967] Running
	I1205 21:47:19.283944  357296 system_pods.go:61] "kube-apiserver-embed-certs-425614" [86a3b6ce-6b70-4af9-bf4a-2615e7a45c3f] Running
	I1205 21:47:19.283947  357296 system_pods.go:61] "kube-controller-manager-embed-certs-425614" [589710e5-a8e3-48ed-a57a-1fbf0219359a] Running
	I1205 21:47:19.283952  357296 system_pods.go:61] "kube-proxy-k2zgx" [8e5c4695-0631-486d-9f2b-3529f6e808e9] Running
	I1205 21:47:19.283955  357296 system_pods.go:61] "kube-scheduler-embed-certs-425614" [dec1c4cb-9e21-42f0-9e03-0651fdfa35e9] Running
	I1205 21:47:19.283962  357296 system_pods.go:61] "metrics-server-6867b74b74-hghhs" [bc00b855-1cc8-45a1-92cb-b459ef0b40eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:19.283968  357296 system_pods.go:61] "storage-provisioner" [76565dbe-57b0-4d39-abb0-ca6787cd3740] Running
	I1205 21:47:19.283979  357296 system_pods.go:74] duration metric: took 6.030697ms to wait for pod list to return data ...
	I1205 21:47:19.283989  357296 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:47:19.287433  357296 default_sa.go:45] found service account: "default"
	I1205 21:47:19.287469  357296 default_sa.go:55] duration metric: took 3.461011ms for default service account to be created ...
	I1205 21:47:19.287482  357296 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:47:19.420448  357296 system_pods.go:86] 9 kube-system pods found
	I1205 21:47:19.420493  357296 system_pods.go:89] "coredns-7c65d6cfc9-7sjzc" [9688302a-e62f-46e6-8182-4639deb5ac5a] Running
	I1205 21:47:19.420503  357296 system_pods.go:89] "coredns-7c65d6cfc9-qfwx8" [d6411440-5d63-4ea4-b1ba-58337dd6bb10] Running
	I1205 21:47:19.420510  357296 system_pods.go:89] "etcd-embed-certs-425614" [2f0ed9d7-d48b-4d68-96bb-5e3f6de80967] Running
	I1205 21:47:19.420516  357296 system_pods.go:89] "kube-apiserver-embed-certs-425614" [86a3b6ce-6b70-4af9-bf4a-2615e7a45c3f] Running
	I1205 21:47:19.420531  357296 system_pods.go:89] "kube-controller-manager-embed-certs-425614" [589710e5-a8e3-48ed-a57a-1fbf0219359a] Running
	I1205 21:47:19.420536  357296 system_pods.go:89] "kube-proxy-k2zgx" [8e5c4695-0631-486d-9f2b-3529f6e808e9] Running
	I1205 21:47:19.420542  357296 system_pods.go:89] "kube-scheduler-embed-certs-425614" [dec1c4cb-9e21-42f0-9e03-0651fdfa35e9] Running
	I1205 21:47:19.420551  357296 system_pods.go:89] "metrics-server-6867b74b74-hghhs" [bc00b855-1cc8-45a1-92cb-b459ef0b40eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:19.420560  357296 system_pods.go:89] "storage-provisioner" [76565dbe-57b0-4d39-abb0-ca6787cd3740] Running
	I1205 21:47:19.420570  357296 system_pods.go:126] duration metric: took 133.080361ms to wait for k8s-apps to be running ...
	I1205 21:47:19.420581  357296 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:47:19.420640  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:47:19.436855  357296 system_svc.go:56] duration metric: took 16.264247ms WaitForService to wait for kubelet
	I1205 21:47:19.436889  357296 kubeadm.go:582] duration metric: took 9.970523712s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:47:19.436913  357296 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:47:19.617690  357296 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:47:19.617724  357296 node_conditions.go:123] node cpu capacity is 2
	I1205 21:47:19.617737  357296 node_conditions.go:105] duration metric: took 180.817811ms to run NodePressure ...
	I1205 21:47:19.617753  357296 start.go:241] waiting for startup goroutines ...
	I1205 21:47:19.617763  357296 start.go:246] waiting for cluster config update ...
	I1205 21:47:19.617782  357296 start.go:255] writing updated cluster config ...
	I1205 21:47:19.618105  357296 ssh_runner.go:195] Run: rm -f paused
	I1205 21:47:19.670657  357296 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:47:19.672596  357296 out.go:177] * Done! kubectl is now configured to use "embed-certs-425614" cluster and "default" namespace by default
	I1205 21:47:53.504952  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:47:53.505292  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:47:53.505331  358357 kubeadm.go:310] 
	I1205 21:47:53.505381  358357 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 21:47:53.505424  358357 kubeadm.go:310] 		timed out waiting for the condition
	I1205 21:47:53.505431  358357 kubeadm.go:310] 
	I1205 21:47:53.505493  358357 kubeadm.go:310] 	This error is likely caused by:
	I1205 21:47:53.505540  358357 kubeadm.go:310] 		- The kubelet is not running
	I1205 21:47:53.505687  358357 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 21:47:53.505696  358357 kubeadm.go:310] 
	I1205 21:47:53.505840  358357 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 21:47:53.505918  358357 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 21:47:53.505969  358357 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 21:47:53.505978  358357 kubeadm.go:310] 
	I1205 21:47:53.506113  358357 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 21:47:53.506224  358357 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 21:47:53.506234  358357 kubeadm.go:310] 
	I1205 21:47:53.506378  358357 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 21:47:53.506488  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 21:47:53.506579  358357 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 21:47:53.506669  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 21:47:53.506680  358357 kubeadm.go:310] 
	I1205 21:47:53.507133  358357 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:47:53.507293  358357 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 21:47:53.507399  358357 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1205 21:47:53.507583  358357 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1205 21:47:53.507635  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:47:58.918917  358357 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.411249531s)
	I1205 21:47:58.919047  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:47:58.933824  358357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:47:58.943937  358357 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:47:58.943961  358357 kubeadm.go:157] found existing configuration files:
	
	I1205 21:47:58.944019  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:47:58.953302  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:47:58.953376  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:47:58.963401  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:47:58.973241  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:47:58.973342  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:47:58.982980  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:47:58.992301  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:47:58.992376  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:47:59.002794  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:47:59.012679  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:47:59.012749  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:47:59.023775  358357 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:47:59.094520  358357 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 21:47:59.094668  358357 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:47:59.233248  358357 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:47:59.233420  358357 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:47:59.233569  358357 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 21:47:59.418344  358357 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:47:59.420333  358357 out.go:235]   - Generating certificates and keys ...
	I1205 21:47:59.420467  358357 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:47:59.420553  358357 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:47:59.422458  358357 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:47:59.422606  358357 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:47:59.422717  358357 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:47:59.422802  358357 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:47:59.422889  358357 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:47:59.422998  358357 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:47:59.423099  358357 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:47:59.423222  358357 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:47:59.423283  358357 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:47:59.423376  358357 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:47:59.599862  358357 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:47:59.763783  358357 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:47:59.854070  358357 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:48:00.213384  358357 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:48:00.228512  358357 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:48:00.229454  358357 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:48:00.229505  358357 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:48:00.369826  358357 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:48:00.371919  358357 out.go:235]   - Booting up control plane ...
	I1205 21:48:00.372059  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:48:00.382814  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:48:00.384284  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:48:00.385894  358357 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:48:00.388267  358357 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 21:48:40.389474  358357 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 21:48:40.389611  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:48:40.389883  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:48:45.390223  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:48:45.390529  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:48:55.390550  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:48:55.390784  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:49:15.391410  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:49:15.391608  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:49:55.392061  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:49:55.392321  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:49:55.392332  358357 kubeadm.go:310] 
	I1205 21:49:55.392403  358357 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 21:49:55.392464  358357 kubeadm.go:310] 		timed out waiting for the condition
	I1205 21:49:55.392485  358357 kubeadm.go:310] 
	I1205 21:49:55.392538  358357 kubeadm.go:310] 	This error is likely caused by:
	I1205 21:49:55.392587  358357 kubeadm.go:310] 		- The kubelet is not running
	I1205 21:49:55.392729  358357 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 21:49:55.392761  358357 kubeadm.go:310] 
	I1205 21:49:55.392882  358357 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 21:49:55.392933  358357 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 21:49:55.393025  358357 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 21:49:55.393057  358357 kubeadm.go:310] 
	I1205 21:49:55.393186  358357 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 21:49:55.393293  358357 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 21:49:55.393303  358357 kubeadm.go:310] 
	I1205 21:49:55.393453  358357 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 21:49:55.393602  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 21:49:55.393728  358357 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 21:49:55.393827  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 21:49:55.393841  358357 kubeadm.go:310] 
	I1205 21:49:55.394194  358357 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:49:55.394317  358357 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 21:49:55.394473  358357 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1205 21:49:55.394527  358357 kubeadm.go:394] duration metric: took 8m1.54013905s to StartCluster
	I1205 21:49:55.394598  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:49:55.394662  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:49:55.433172  358357 cri.go:89] found id: ""
	I1205 21:49:55.433203  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.433212  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:49:55.433219  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:49:55.433279  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:49:55.468595  358357 cri.go:89] found id: ""
	I1205 21:49:55.468631  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.468644  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:49:55.468652  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:49:55.468747  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:49:55.505657  358357 cri.go:89] found id: ""
	I1205 21:49:55.505692  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.505701  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:49:55.505709  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:49:55.505776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:49:55.542189  358357 cri.go:89] found id: ""
	I1205 21:49:55.542221  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.542230  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:49:55.542236  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:49:55.542303  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:49:55.575752  358357 cri.go:89] found id: ""
	I1205 21:49:55.575796  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.575810  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:49:55.575818  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:49:55.575878  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:49:55.611845  358357 cri.go:89] found id: ""
	I1205 21:49:55.611884  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.611899  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:49:55.611912  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:49:55.611999  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:49:55.650475  358357 cri.go:89] found id: ""
	I1205 21:49:55.650511  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.650524  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:49:55.650533  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:49:55.650605  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:49:55.684770  358357 cri.go:89] found id: ""
	I1205 21:49:55.684801  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.684811  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:49:55.684823  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:49:55.684839  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:49:55.752292  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:49:55.752331  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:49:55.752351  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:49:55.869601  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:49:55.869647  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:49:55.909724  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:49:55.909761  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:49:55.959825  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:49:55.959865  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 21:49:55.973692  358357 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1205 21:49:55.973759  358357 out.go:270] * 
	W1205 21:49:55.973866  358357 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 21:49:55.973884  358357 out.go:270] * 
	W1205 21:49:55.974814  358357 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 21:49:55.977939  358357 out.go:201] 
	W1205 21:49:55.979226  358357 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 21:49:55.979261  358357 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 21:49:55.979285  358357 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 21:49:55.980590  358357 out.go:201] 
	
	
	==> CRI-O <==
	Dec 05 21:49:57 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:57.983403394Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435397983381967,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=70b94637-9da7-4f74-82e4-b04f7cabbf82 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:49:57 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:57.983937052Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a708ac70-9387-4274-99fb-9d8a0a614650 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:49:57 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:57.983984157Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a708ac70-9387-4274-99fb-9d8a0a614650 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:49:57 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:57.984022122Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a708ac70-9387-4274-99fb-9d8a0a614650 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:49:58 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:58.015075810Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=9fdc4f70-2c23-4683-ac7b-bbd560c0d068 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:49:58 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:58.015181161Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=9fdc4f70-2c23-4683-ac7b-bbd560c0d068 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:49:58 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:58.016394347Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8aa536cb-a4af-40c0-a048-43f643dbb906 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:49:58 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:58.016951820Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435398016920446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8aa536cb-a4af-40c0-a048-43f643dbb906 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:49:58 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:58.017618256Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a2277b91-a3e4-4d41-9027-4f30f4d738d7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:49:58 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:58.017685535Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a2277b91-a3e4-4d41-9027-4f30f4d738d7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:49:58 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:58.017720426Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=a2277b91-a3e4-4d41-9027-4f30f4d738d7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:49:58 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:58.049368381Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ed768d74-c483-4819-b9be-dcb139191a2b name=/runtime.v1.RuntimeService/Version
	Dec 05 21:49:58 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:58.049455247Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ed768d74-c483-4819-b9be-dcb139191a2b name=/runtime.v1.RuntimeService/Version
	Dec 05 21:49:58 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:58.051406720Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=de8c0026-5b4e-4e9e-886e-bcb70a75d3b3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:49:58 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:58.051765053Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435398051741582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=de8c0026-5b4e-4e9e-886e-bcb70a75d3b3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:49:58 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:58.052548208Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c9e54ccc-3b08-4528-bf36-896a4f9c498d name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:49:58 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:58.052609112Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c9e54ccc-3b08-4528-bf36-896a4f9c498d name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:49:58 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:58.052644537Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c9e54ccc-3b08-4528-bf36-896a4f9c498d name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:49:58 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:58.083218748Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=37f87813-1cd3-44f2-af78-ec80a7b5e3b1 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:49:58 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:58.083315895Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=37f87813-1cd3-44f2-af78-ec80a7b5e3b1 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:49:58 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:58.084683849Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=64d9c063-fb02-4336-a905-8f91fbb90d97 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:49:58 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:58.085081407Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435398085053635,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=64d9c063-fb02-4336-a905-8f91fbb90d97 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:49:58 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:58.085741349Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=734e9a41-6ece-40c4-aa9b-11b435606258 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:49:58 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:58.085811405Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=734e9a41-6ece-40c4-aa9b-11b435606258 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:49:58 old-k8s-version-601806 crio[631]: time="2024-12-05 21:49:58.085850078Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=734e9a41-6ece-40c4-aa9b-11b435606258 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 21:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049612] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037328] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.041940] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.017419] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.591176] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000028] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.089329] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.075166] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.084879] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.248458] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.177247] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.251172] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +6.361303] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.072375] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.856883] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[Dec 5 21:42] kauditd_printk_skb: 46 callbacks suppressed
	[Dec 5 21:45] systemd-fstab-generator[5030]: Ignoring "noauto" option for root device
	[Dec 5 21:48] systemd-fstab-generator[5323]: Ignoring "noauto" option for root device
	[  +0.068423] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:49:58 up 8 min,  0 users,  load average: 0.09, 0.10, 0.05
	Linux old-k8s-version-601806 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 05 21:49:55 old-k8s-version-601806 kubelet[5506]: k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc000254fc0, 0xc0008a2ba0, 0x1, 0x0, 0x0)
	Dec 05 21:49:55 old-k8s-version-601806 kubelet[5506]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:492 +0xa5
	Dec 05 21:49:55 old-k8s-version-601806 kubelet[5506]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*http2Client).reader(0xc0008596c0)
	Dec 05 21:49:55 old-k8s-version-601806 kubelet[5506]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:1265 +0x179
	Dec 05 21:49:55 old-k8s-version-601806 kubelet[5506]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Dec 05 21:49:55 old-k8s-version-601806 kubelet[5506]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:300 +0xd31
	Dec 05 21:49:55 old-k8s-version-601806 kubelet[5506]: goroutine 129 [select]:
	Dec 05 21:49:55 old-k8s-version-601806 kubelet[5506]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc0000518b0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Dec 05 21:49:55 old-k8s-version-601806 kubelet[5506]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Dec 05 21:49:55 old-k8s-version-601806 kubelet[5506]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc0002946c0, 0x0, 0x0)
	Dec 05 21:49:55 old-k8s-version-601806 kubelet[5506]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Dec 05 21:49:55 old-k8s-version-601806 kubelet[5506]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0008596c0)
	Dec 05 21:49:55 old-k8s-version-601806 kubelet[5506]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Dec 05 21:49:55 old-k8s-version-601806 kubelet[5506]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Dec 05 21:49:55 old-k8s-version-601806 kubelet[5506]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Dec 05 21:49:55 old-k8s-version-601806 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 05 21:49:55 old-k8s-version-601806 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 21:49:55 old-k8s-version-601806 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Dec 05 21:49:55 old-k8s-version-601806 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 05 21:49:55 old-k8s-version-601806 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 05 21:49:55 old-k8s-version-601806 kubelet[5559]: I1205 21:49:55.846316    5559 server.go:416] Version: v1.20.0
	Dec 05 21:49:55 old-k8s-version-601806 kubelet[5559]: I1205 21:49:55.846607    5559 server.go:837] Client rotation is on, will bootstrap in background
	Dec 05 21:49:55 old-k8s-version-601806 kubelet[5559]: I1205 21:49:55.848605    5559 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 05 21:49:55 old-k8s-version-601806 kubelet[5559]: W1205 21:49:55.849713    5559 manager.go:159] Cannot detect current cgroup on cgroup v2
	Dec 05 21:49:55 old-k8s-version-601806 kubelet[5559]: I1205 21:49:55.849847    5559 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-601806 -n old-k8s-version-601806
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-601806 -n old-k8s-version-601806: exit status 2 (271.376748ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-601806" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (704.37s)
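The failure above ends with minikube's own hint (see the "Suggestion" line in the log): check 'journalctl -xeu kubelet' and retry the start with --extra-config=kubelet.cgroup-driver=systemd. A minimal manual follow-up sketch, assuming the profile name old-k8s-version-601806 taken from the log above and that the node is still running; the driver value would have to match whatever cgroup manager CRI-O actually reports on the node:

    # inspect kubelet failures on the node (the commands quoted in the kubeadm output above)
    minikube ssh -p old-k8s-version-601806 -- 'sudo systemctl status kubelet --no-pager; sudo journalctl -xeu kubelet | tail -n 50'
    # check which cgroup manager CRI-O is configured with (illustrative; its value drives the flag below)
    minikube ssh -p old-k8s-version-601806 -- 'sudo crio config 2>/dev/null | grep -i cgroup_manager'
    # retry the start with the kubelet cgroup driver pinned, per the Suggestion line in the log
    minikube start -p old-k8s-version-601806 --extra-config=kubelet.cgroup-driver=systemd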

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1205 21:46:10.573536  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:46:29.761728  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:46:49.077081  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-751353 -n default-k8s-diff-port-751353
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-05 21:55:06.707380016 +0000 UTC m=+5748.681972704
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
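Before the post-mortem logs below, a quick manual look at the pods the test was polling can narrow this down. A rough sketch only, assuming the kubectl context name matches the minikube profile default-k8s-diff-port-751353; the namespace and label are the ones from the wait above:

    # list the dashboard pods the test waits for (assumes the context name equals the profile name)
    kubectl --context default-k8s-diff-port-751353 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard -o wide
    # if pods exist but never become Ready, their events usually say why
    kubectl --context default-k8s-diff-port-751353 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard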
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-751353 -n default-k8s-diff-port-751353
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-751353 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-751353 logs -n 25: (2.102975044s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-279893 sudo cat                              | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:32 UTC | 05 Dec 24 21:33 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo cat                              | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo                                  | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo                                  | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo                                  | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo find                             | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo crio                             | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-279893                                       | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	| start   | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:34 UTC |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-425614            | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-425614                                  | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-500648             | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC | 05 Dec 24 21:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-500648                                   | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-751353  | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC | 05 Dec 24 21:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC |                     |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-425614                 | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-425614                                  | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC | 05 Dec 24 21:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-601806        | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-500648                  | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-751353       | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-500648                                   | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC | 05 Dec 24 21:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:37 UTC | 05 Dec 24 21:46 UTC |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-601806                              | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC | 05 Dec 24 21:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-601806             | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC | 05 Dec 24 21:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-601806                              | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 21:38:15
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 21:38:15.563725  358357 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:38:15.563882  358357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:38:15.563898  358357 out.go:358] Setting ErrFile to fd 2...
	I1205 21:38:15.563903  358357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:38:15.564128  358357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 21:38:15.564728  358357 out.go:352] Setting JSON to false
	I1205 21:38:15.565806  358357 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":15644,"bootTime":1733419052,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 21:38:15.565873  358357 start.go:139] virtualization: kvm guest
	I1205 21:38:15.568026  358357 out.go:177] * [old-k8s-version-601806] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 21:38:15.569552  358357 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 21:38:15.569581  358357 notify.go:220] Checking for updates...
	I1205 21:38:15.572033  358357 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 21:38:15.573317  358357 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:38:15.574664  358357 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:38:15.576173  358357 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 21:38:15.577543  358357 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 21:38:15.579554  358357 config.go:182] Loaded profile config "old-k8s-version-601806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 21:38:15.580169  358357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:38:15.580230  358357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:38:15.596741  358357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I1205 21:38:15.597295  358357 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:38:15.598015  358357 main.go:141] libmachine: Using API Version  1
	I1205 21:38:15.598046  358357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:38:15.598475  358357 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:38:15.598711  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:38:15.600576  358357 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1205 21:38:15.602043  358357 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 21:38:15.602381  358357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:38:15.602484  358357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:38:15.618162  358357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36049
	I1205 21:38:15.618929  358357 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:38:15.620894  358357 main.go:141] libmachine: Using API Version  1
	I1205 21:38:15.620922  358357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:38:15.621462  358357 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:38:15.621705  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:38:15.660038  358357 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 21:38:15.661273  358357 start.go:297] selected driver: kvm2
	I1205 21:38:15.661287  358357 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:38:15.661413  358357 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 21:38:15.662304  358357 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:38:15.662396  358357 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 21:38:15.678948  358357 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 21:38:15.679372  358357 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:38:15.679406  358357 cni.go:84] Creating CNI manager for ""
	I1205 21:38:15.679443  358357 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:38:15.679479  358357 start.go:340] cluster config:
	{Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:38:15.679592  358357 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:38:15.681409  358357 out.go:177] * Starting "old-k8s-version-601806" primary control-plane node in "old-k8s-version-601806" cluster
	I1205 21:38:12.362239  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:15.434192  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:15.682585  358357 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 21:38:15.682646  358357 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1205 21:38:15.682657  358357 cache.go:56] Caching tarball of preloaded images
	I1205 21:38:15.682742  358357 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 21:38:15.682752  358357 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
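The preload.go steps above amount to checking whether a versioned image tarball is already on disk before falling back to a download. A minimal Go sketch of that decision, assuming only the cache layout visible in the log; the preloadPath and haveLocalPreload helper names are hypothetical, not minikube's API:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    // preloadPath mirrors the cache layout seen in the log:
    // <MINIKUBE_HOME>/cache/preloaded-tarball/preloaded-images-k8s-v18-<k8s>-cri-o-overlay-amd64.tar.lz4
    func preloadPath(minikubeHome, k8sVersion string) string {
        name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
        return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
    }

    // haveLocalPreload reports whether the tarball is already cached, in which
    // case the download step can be skipped.
    func haveLocalPreload(minikubeHome, k8sVersion string) bool {
        _, err := os.Stat(preloadPath(minikubeHome, k8sVersion))
        return err == nil
    }

    func main() {
        if haveLocalPreload(os.Getenv("MINIKUBE_HOME"), "v1.20.0") {
            fmt.Println("found local preload, skipping download")
        } else {
            fmt.Println("no local preload, would download")
        }
    }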
	I1205 21:38:15.682873  358357 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/config.json ...
	I1205 21:38:15.683066  358357 start.go:360] acquireMachinesLock for old-k8s-version-601806: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:38:21.514200  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:24.586255  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:30.666205  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:33.738246  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:39.818259  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:42.890268  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:48.970246  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:52.042258  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:58.122192  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:01.194261  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:07.274293  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:10.346237  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:16.426260  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:19.498251  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:25.578215  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:28.650182  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:34.730233  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:37.802242  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:43.882204  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:46.954259  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:53.034221  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:56.106303  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:02.186236  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:05.258270  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:11.338291  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:14.410261  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:20.490214  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:23.562239  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:29.642246  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:32.714183  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:38.794265  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:41.866189  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
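The long run of "Error dialing TCP 192.168.72.8:22: connect: no route to host" lines is the driver polling port 22 of the embed-certs-425614 VM until it answers. A minimal sketch of that kind of dial-and-retry loop; the 3-second sleep and 10-second dial timeout are assumptions chosen for the example, not minikube's configured values:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH polls a host's TCP port 22 until it accepts connections or the
    // deadline passes; while the guest is down, each attempt fails the way the
    // "no route to host" lines above do.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 10*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            fmt.Printf("Error dialing TCP: %v\n", err)
            time.Sleep(3 * time.Second)
        }
        return fmt.Errorf("timed out waiting for %s", addr)
    }

    func main() {
        if err := waitForSSH("192.168.72.8:22", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }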
	I1205 21:40:44.870871  357831 start.go:364] duration metric: took 3m51.861097835s to acquireMachinesLock for "no-preload-500648"
	I1205 21:40:44.870962  357831 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:40:44.870974  357831 fix.go:54] fixHost starting: 
	I1205 21:40:44.871374  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:40:44.871425  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:40:44.889484  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34557
	I1205 21:40:44.890105  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:40:44.890780  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:40:44.890815  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:40:44.891254  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:40:44.891517  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:40:44.891744  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:40:44.893857  357831 fix.go:112] recreateIfNeeded on no-preload-500648: state=Stopped err=<nil>
	I1205 21:40:44.893927  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	W1205 21:40:44.894116  357831 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:40:44.897039  357831 out.go:177] * Restarting existing kvm2 VM for "no-preload-500648" ...
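fix.go:112 found the existing no-preload-500648 machine in state=Stopped, so the flow restarts the VM rather than recreating it; only an unhealthy or unknown state would force a rebuild. A toy sketch of that branch, with the state names and function signature purely illustrative:

    package main

    import "fmt"

    type machineState int

    const (
        running machineState = iota
        stopped
        errored
    )

    // recreateIfNeeded mirrors the branch visible in the log: a Stopped machine
    // is simply restarted, while an errored or unknown state would trigger a
    // delete-and-recreate.
    func recreateIfNeeded(state machineState) string {
        switch state {
        case running:
            return "machine already running, nothing to do"
        case stopped:
            return "unexpected machine state, will restart"
        default:
            return "machine unusable, would recreate"
        }
    }

    func main() {
        fmt.Println(recreateIfNeeded(stopped))
    }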
	I1205 21:40:44.868152  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:40:44.868210  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:40:44.868588  357296 buildroot.go:166] provisioning hostname "embed-certs-425614"
	I1205 21:40:44.868618  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:40:44.868823  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:40:44.870659  357296 machine.go:96] duration metric: took 4m37.397267419s to provisionDockerMachine
	I1205 21:40:44.870718  357296 fix.go:56] duration metric: took 4m37.422503321s for fixHost
	I1205 21:40:44.870724  357296 start.go:83] releasing machines lock for "embed-certs-425614", held for 4m37.422523792s
	W1205 21:40:44.870750  357296 start.go:714] error starting host: provision: host is not running
	W1205 21:40:44.870880  357296 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1205 21:40:44.870891  357296 start.go:729] Will try again in 5 seconds ...
	I1205 21:40:44.898504  357831 main.go:141] libmachine: (no-preload-500648) Calling .Start
	I1205 21:40:44.898749  357831 main.go:141] libmachine: (no-preload-500648) Ensuring networks are active...
	I1205 21:40:44.899604  357831 main.go:141] libmachine: (no-preload-500648) Ensuring network default is active
	I1205 21:40:44.899998  357831 main.go:141] libmachine: (no-preload-500648) Ensuring network mk-no-preload-500648 is active
	I1205 21:40:44.900472  357831 main.go:141] libmachine: (no-preload-500648) Getting domain xml...
	I1205 21:40:44.901210  357831 main.go:141] libmachine: (no-preload-500648) Creating domain...
	I1205 21:40:46.138820  357831 main.go:141] libmachine: (no-preload-500648) Waiting to get IP...
	I1205 21:40:46.139714  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:46.140107  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:46.140214  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:46.140113  358875 retry.go:31] will retry after 297.599003ms: waiting for machine to come up
	I1205 21:40:46.439848  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:46.440360  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:46.440421  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:46.440242  358875 retry.go:31] will retry after 243.531701ms: waiting for machine to come up
	I1205 21:40:46.685793  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:46.686251  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:46.686282  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:46.686199  358875 retry.go:31] will retry after 395.19149ms: waiting for machine to come up
	I1205 21:40:47.082735  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:47.083192  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:47.083216  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:47.083150  358875 retry.go:31] will retry after 591.156988ms: waiting for machine to come up
	I1205 21:40:47.675935  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:47.676381  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:47.676414  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:47.676308  358875 retry.go:31] will retry after 706.616299ms: waiting for machine to come up
	I1205 21:40:49.872843  357296 start.go:360] acquireMachinesLock for embed-certs-425614: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:40:48.384278  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:48.384666  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:48.384696  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:48.384611  358875 retry.go:31] will retry after 859.724415ms: waiting for machine to come up
	I1205 21:40:49.245895  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:49.246294  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:49.246323  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:49.246239  358875 retry.go:31] will retry after 915.790977ms: waiting for machine to come up
	I1205 21:40:50.164042  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:50.164570  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:50.164600  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:50.164514  358875 retry.go:31] will retry after 1.283530276s: waiting for machine to come up
	I1205 21:40:51.450256  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:51.450664  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:51.450692  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:51.450595  358875 retry.go:31] will retry after 1.347371269s: waiting for machine to come up
	I1205 21:40:52.800263  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:52.800702  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:52.800732  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:52.800637  358875 retry.go:31] will retry after 1.982593955s: waiting for machine to come up
	I1205 21:40:54.785977  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:54.786644  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:54.786705  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:54.786525  358875 retry.go:31] will retry after 2.41669899s: waiting for machine to come up
	I1205 21:40:57.205989  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:57.206403  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:57.206428  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:57.206335  358875 retry.go:31] will retry after 2.992148692s: waiting for machine to come up
	I1205 21:41:00.200589  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:00.201093  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:41:00.201139  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:41:00.201028  358875 retry.go:31] will retry after 3.716252757s: waiting for machine to come up
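The retry.go lines while waiting for the no-preload-500648 address show waits that roughly double with some jitter (297ms, 243ms, 395ms, ... 3.7s). A minimal sketch of such a jittered backoff poll; getIP is a hypothetical stand-in for reading the libvirt DHCP leases and exists only to keep the loop runnable:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // getIP is a hypothetical stand-in for querying the libvirt DHCP leases for
    // the domain's address; it succeeds after a few attempts so the loop is
    // observable.
    func getIP(attempt int) (string, error) {
        if attempt < 5 {
            return "", errors.New("unable to find current IP address of domain")
        }
        return "192.168.50.141", nil
    }

    func main() {
        delay := 250 * time.Millisecond
        for attempt := 0; attempt < 15; attempt++ {
            if ip, err := getIP(attempt); err == nil {
                fmt.Println("Found IP for machine:", ip)
                return
            }
            // Jittered, roughly doubling wait, in the spirit of the
            // "will retry after ..." intervals in the log.
            wait := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
            time.Sleep(wait)
            delay *= 2
        }
        fmt.Println("gave up waiting for a machine IP")
    }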
	I1205 21:41:05.171227  357912 start.go:364] duration metric: took 4m4.735770407s to acquireMachinesLock for "default-k8s-diff-port-751353"
	I1205 21:41:05.171353  357912 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:41:05.171382  357912 fix.go:54] fixHost starting: 
	I1205 21:41:05.172206  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:05.172294  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:05.190413  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35791
	I1205 21:41:05.190911  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:05.191473  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:05.191497  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:05.191841  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:05.192052  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:05.192199  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:05.193839  357912 fix.go:112] recreateIfNeeded on default-k8s-diff-port-751353: state=Stopped err=<nil>
	I1205 21:41:05.193867  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	W1205 21:41:05.194042  357912 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:41:05.196358  357912 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-751353" ...
	I1205 21:41:05.197683  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Start
	I1205 21:41:05.197958  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Ensuring networks are active...
	I1205 21:41:05.198819  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Ensuring network default is active
	I1205 21:41:05.199225  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Ensuring network mk-default-k8s-diff-port-751353 is active
	I1205 21:41:05.199740  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Getting domain xml...
	I1205 21:41:05.200544  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Creating domain...
	I1205 21:41:03.922338  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.922889  357831 main.go:141] libmachine: (no-preload-500648) Found IP for machine: 192.168.50.141
	I1205 21:41:03.922911  357831 main.go:141] libmachine: (no-preload-500648) Reserving static IP address...
	I1205 21:41:03.922924  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has current primary IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.923476  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "no-preload-500648", mac: "52:54:00:98:f0:c5", ip: "192.168.50.141"} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:03.923500  357831 main.go:141] libmachine: (no-preload-500648) DBG | skip adding static IP to network mk-no-preload-500648 - found existing host DHCP lease matching {name: "no-preload-500648", mac: "52:54:00:98:f0:c5", ip: "192.168.50.141"}
	I1205 21:41:03.923514  357831 main.go:141] libmachine: (no-preload-500648) DBG | Getting to WaitForSSH function...
	I1205 21:41:03.923583  357831 main.go:141] libmachine: (no-preload-500648) Reserved static IP address: 192.168.50.141
	I1205 21:41:03.923617  357831 main.go:141] libmachine: (no-preload-500648) Waiting for SSH to be available...
	I1205 21:41:03.926008  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.926299  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:03.926327  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.926443  357831 main.go:141] libmachine: (no-preload-500648) DBG | Using SSH client type: external
	I1205 21:41:03.926467  357831 main.go:141] libmachine: (no-preload-500648) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa (-rw-------)
	I1205 21:41:03.926542  357831 main.go:141] libmachine: (no-preload-500648) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:41:03.926559  357831 main.go:141] libmachine: (no-preload-500648) DBG | About to run SSH command:
	I1205 21:41:03.926582  357831 main.go:141] libmachine: (no-preload-500648) DBG | exit 0
	I1205 21:41:04.054310  357831 main.go:141] libmachine: (no-preload-500648) DBG | SSH cmd err, output: <nil>: 
	I1205 21:41:04.054735  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetConfigRaw
	I1205 21:41:04.055421  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:04.058393  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.058823  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.058857  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.059115  357831 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/config.json ...
	I1205 21:41:04.059357  357831 machine.go:93] provisionDockerMachine start ...
	I1205 21:41:04.059381  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:04.059624  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.061812  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.062139  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.062169  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.062325  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.062530  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.062698  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.062811  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.062947  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.063206  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.063219  357831 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:41:04.174592  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:41:04.174635  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetMachineName
	I1205 21:41:04.174947  357831 buildroot.go:166] provisioning hostname "no-preload-500648"
	I1205 21:41:04.174982  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetMachineName
	I1205 21:41:04.175220  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.178267  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.178732  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.178766  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.178975  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.179191  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.179356  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.179518  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.179683  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.179864  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.179878  357831 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-500648 && echo "no-preload-500648" | sudo tee /etc/hostname
	I1205 21:41:04.304650  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-500648
	
	I1205 21:41:04.304688  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.307897  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.308212  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.308255  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.308441  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.308703  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.308864  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.308994  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.309273  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.309538  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.309570  357831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-500648' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-500648/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-500648' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:41:04.432111  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:41:04.432158  357831 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:41:04.432186  357831 buildroot.go:174] setting up certificates
	I1205 21:41:04.432198  357831 provision.go:84] configureAuth start
	I1205 21:41:04.432214  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetMachineName
	I1205 21:41:04.432569  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:04.435826  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.436298  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.436348  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.436535  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.439004  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.439384  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.439412  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.439632  357831 provision.go:143] copyHostCerts
	I1205 21:41:04.439708  357831 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:41:04.439736  357831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:41:04.439826  357831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:41:04.439951  357831 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:41:04.439963  357831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:41:04.440006  357831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:41:04.440090  357831 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:41:04.440100  357831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:41:04.440133  357831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:41:04.440206  357831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.no-preload-500648 san=[127.0.0.1 192.168.50.141 localhost minikube no-preload-500648]
	I1205 21:41:04.514253  357831 provision.go:177] copyRemoteCerts
	I1205 21:41:04.514330  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:41:04.514372  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.517413  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.517811  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.517845  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.518067  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.518361  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.518597  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.518773  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:04.611530  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:41:04.637201  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 21:41:04.661934  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:41:04.686618  357831 provision.go:87] duration metric: took 254.404192ms to configureAuth
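configureAuth above generates a server certificate whose SANs cover 127.0.0.1, the machine IP, localhost, minikube and the profile name, then copies ca.pem, server.pem and server-key.pem into /etc/docker over SSH. A self-contained sketch of issuing a certificate with that SAN list via crypto/x509; unlike the real flow it self-signs instead of signing with the cached CA key, purely so the example stands alone:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-500648"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "no-preload-500648"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.141")},
        }
        // Self-signed: template doubles as parent. The real provisioner signs
        // with the CA key pair kept under .minikube/certs.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }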
	I1205 21:41:04.686654  357831 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:41:04.686834  357831 config.go:182] Loaded profile config "no-preload-500648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:41:04.686921  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.690232  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.690677  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.690709  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.690907  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.691145  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.691456  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.691605  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.691811  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.692003  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.692020  357831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:41:04.922195  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:41:04.922228  357831 machine.go:96] duration metric: took 862.853823ms to provisionDockerMachine
	I1205 21:41:04.922245  357831 start.go:293] postStartSetup for "no-preload-500648" (driver="kvm2")
	I1205 21:41:04.922275  357831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:41:04.922296  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:04.922662  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:41:04.922698  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.925928  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.926441  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.926474  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.926628  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.926810  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.926928  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.927024  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:05.013131  357831 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:41:05.017518  357831 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:41:05.017552  357831 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:41:05.017635  357831 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:41:05.017713  357831 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:41:05.017814  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:41:05.027935  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:05.052403  357831 start.go:296] duration metric: took 130.117347ms for postStartSetup
	I1205 21:41:05.052469  357831 fix.go:56] duration metric: took 20.181495969s for fixHost
	I1205 21:41:05.052493  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:05.055902  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.056329  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.056381  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.056574  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:05.056832  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.056993  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.057144  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:05.057327  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:05.057534  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:05.057548  357831 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:41:05.171012  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434865.146406477
	
	I1205 21:41:05.171041  357831 fix.go:216] guest clock: 1733434865.146406477
	I1205 21:41:05.171051  357831 fix.go:229] Guest: 2024-12-05 21:41:05.146406477 +0000 UTC Remote: 2024-12-05 21:41:05.052473548 +0000 UTC m=+252.199777630 (delta=93.932929ms)
	I1205 21:41:05.171075  357831 fix.go:200] guest clock delta is within tolerance: 93.932929ms
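The guest-clock check runs date +%s.%N inside the VM and compares the result with the host clock; here the ~94ms delta is accepted. A sketch of parsing that output and applying a tolerance; the 1-second threshold is an assumption for the example, the log only shows that 93.9ms passed:

    package main

    import (
        "fmt"
        "strconv"
        "strings"
        "time"
    )

    // parseGuestClock converts the `date +%s.%N` output captured in the log
    // (e.g. "1733434865.146406477") into a time.Time.
    func parseGuestClock(out string) (time.Time, error) {
        parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
        sec, err := strconv.ParseInt(parts[0], 10, 64)
        if err != nil {
            return time.Time{}, err
        }
        var nsec int64
        if len(parts) == 2 {
            frac := (parts[1] + "000000000")[:9] // pad/truncate to nanoseconds
            if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
                return time.Time{}, err
            }
        }
        return time.Unix(sec, nsec), nil
    }

    func main() {
        guest, err := parseGuestClock("1733434865.146406477")
        if err != nil {
            panic(err)
        }
        delta := guest.Sub(time.Now())
        if delta < 0 {
            delta = -delta
        }
        // The 1s tolerance is an assumption for this sketch; the log only
        // shows that a ~94ms delta was accepted without a resync.
        if delta <= time.Second {
            fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
        } else {
            fmt.Printf("guest clock skew %v too large; would resync\n", delta)
        }
    }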
	I1205 21:41:05.171087  357831 start.go:83] releasing machines lock for "no-preload-500648", held for 20.300173371s
	I1205 21:41:05.171115  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.171462  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:05.174267  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.174716  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.174747  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.174893  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.175500  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.175738  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.175856  357831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:41:05.175910  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:05.176016  357831 ssh_runner.go:195] Run: cat /version.json
	I1205 21:41:05.176053  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:05.179122  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179281  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179567  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.179595  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179620  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.179637  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179785  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:05.179924  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:05.180016  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.180163  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.180167  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:05.180365  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:05.180376  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:05.180564  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:05.286502  357831 ssh_runner.go:195] Run: systemctl --version
	I1205 21:41:05.292793  357831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:41:05.436742  357831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:41:05.442389  357831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:41:05.442473  357831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:41:05.460161  357831 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:41:05.460198  357831 start.go:495] detecting cgroup driver to use...
	I1205 21:41:05.460287  357831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:41:05.476989  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:41:05.490676  357831 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:41:05.490747  357831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:41:05.504437  357831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:41:05.518314  357831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:41:05.649582  357831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:41:05.831575  357831 docker.go:233] disabling docker service ...
	I1205 21:41:05.831650  357831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:41:05.851482  357831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:41:05.865266  357831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:41:05.981194  357831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:41:06.107386  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:41:06.125290  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:41:06.143817  357831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:41:06.143919  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.154167  357831 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:41:06.154259  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.165640  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.177412  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.190668  357831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:41:06.201712  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.213455  357831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.232565  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
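The sed one-liners above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pinning the pause image to registry.k8s.io/pause:3.10, switching cgroup_manager to cgroupfs, forcing conmon_cgroup to "pod", and opening unprivileged ports via default_sysctls. A sketch that applies the first two substitutions to an in-memory copy of the file; the starting contents are assumed, not the VM's real drop-in:

    package main

    import (
        "fmt"
        "regexp"
    )

    func main() {
        // Assumed starting contents of the CRI-O drop-in, for illustration only.
        conf := `[crio.image]
    pause_image = "registry.k8s.io/pause:3.9"
    [crio.runtime]
    cgroup_manager = "systemd"
    `
        // Same effect as the sed lines in the log: replace whatever value is
        // present with the pinned pause image and the cgroupfs manager.
        conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10"`)
        conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
        fmt.Print(conf)
    }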
	I1205 21:41:06.243746  357831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:41:06.253809  357831 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:41:06.253878  357831 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:41:06.267573  357831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:41:06.278706  357831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:06.408370  357831 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:41:06.511878  357831 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:41:06.511959  357831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:41:06.519295  357831 start.go:563] Will wait 60s for crictl version
	I1205 21:41:06.519366  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.523477  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:41:06.562056  357831 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:41:06.562151  357831 ssh_runner.go:195] Run: crio --version
	I1205 21:41:06.595493  357831 ssh_runner.go:195] Run: crio --version
	I1205 21:41:06.630320  357831 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 21:41:06.631796  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:06.634988  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:06.635416  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:06.635453  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:06.635693  357831 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1205 21:41:06.639948  357831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:06.653650  357831 kubeadm.go:883] updating cluster {Name:no-preload-500648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-500648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:41:06.653798  357831 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:41:06.653869  357831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:06.695865  357831 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 21:41:06.695900  357831 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 21:41:06.695957  357831 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:06.695970  357831 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:06.696005  357831 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.696049  357831 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1205 21:41:06.696060  357831 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:06.696087  357831 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.696061  357831 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.696462  357831 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.697982  357831 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.698019  357831 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:06.698016  357831 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.697992  357831 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.698111  357831 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.698133  357831 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:06.698286  357831 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1205 21:41:06.698501  357831 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:06.856605  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.856650  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.869847  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.872242  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.874561  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:06.907303  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:06.920063  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1205 21:41:06.925542  357831 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1205 21:41:06.925592  357831 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.925656  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.959677  357831 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1205 21:41:06.959738  357831 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.959799  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.984175  357831 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1205 21:41:06.984219  357831 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.984267  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.995251  357831 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1205 21:41:06.995393  357831 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.995547  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:07.017878  357831 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1205 21:41:07.017952  357831 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.018014  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:07.027087  357831 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1205 21:41:07.027151  357831 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.027206  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:07.138510  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:07.138629  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.138509  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:07.138696  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.138577  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:07.138579  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:07.260832  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:07.269638  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:07.269766  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.269837  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:07.276535  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.276611  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:07.344944  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:07.369612  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:07.410660  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:07.410709  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.410815  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:07.410817  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.463332  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1205 21:41:07.463470  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1205 21:41:07.491657  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1205 21:41:07.491795  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 21:41:07.531121  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1205 21:41:07.531150  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1205 21:41:07.531256  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1205 21:41:07.531270  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 21:41:07.531292  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1205 21:41:07.531341  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1205 21:41:07.531342  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 21:41:07.531258  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 21:41:07.531400  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1205 21:41:07.531416  357831 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1205 21:41:07.531452  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1205 21:41:07.531419  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1205 21:41:07.543194  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1205 21:41:07.543221  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1205 21:41:07.543329  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1205 21:41:07.545197  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1205 21:41:07.599581  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:06.512338  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting to get IP...
	I1205 21:41:06.513323  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.513795  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.513870  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:06.513764  359021 retry.go:31] will retry after 193.323182ms: waiting for machine to come up
	I1205 21:41:06.709218  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.709633  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.709667  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:06.709597  359021 retry.go:31] will retry after 359.664637ms: waiting for machine to come up
	I1205 21:41:07.071234  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.071649  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.071677  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:07.071621  359021 retry.go:31] will retry after 315.296814ms: waiting for machine to come up
	I1205 21:41:07.388219  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.388755  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.388788  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:07.388697  359021 retry.go:31] will retry after 607.823337ms: waiting for machine to come up
	I1205 21:41:07.998529  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.998987  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.999021  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:07.998924  359021 retry.go:31] will retry after 603.533135ms: waiting for machine to come up
	I1205 21:41:08.603895  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:08.604547  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:08.604592  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:08.604458  359021 retry.go:31] will retry after 584.642321ms: waiting for machine to come up
	I1205 21:41:09.190331  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:09.190835  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:09.190866  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:09.190778  359021 retry.go:31] will retry after 848.646132ms: waiting for machine to come up
	I1205 21:41:10.041037  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:10.041702  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:10.041734  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:10.041632  359021 retry.go:31] will retry after 1.229215485s: waiting for machine to come up
	I1205 21:41:11.124436  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.592950613s)
	I1205 21:41:11.124474  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1205 21:41:11.124504  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 21:41:11.124501  357831 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.524878217s)
	I1205 21:41:11.124562  357831 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 21:41:11.124586  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 21:41:11.124617  357831 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:11.124667  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:11.272549  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:11.273204  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:11.273239  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:11.273141  359021 retry.go:31] will retry after 1.721028781s: waiting for machine to come up
	I1205 21:41:12.996546  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:12.996988  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:12.997015  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:12.996932  359021 retry.go:31] will retry after 1.620428313s: waiting for machine to come up
	I1205 21:41:14.619426  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:14.619986  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:14.620021  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:14.619928  359021 retry.go:31] will retry after 1.936504566s: waiting for machine to come up
	I1205 21:41:13.485236  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.36061811s)
	I1205 21:41:13.485285  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1205 21:41:13.485298  357831 ssh_runner.go:235] Completed: which crictl: (2.360608199s)
	I1205 21:41:13.485314  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 21:41:13.485383  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:13.485450  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 21:41:15.556836  357831 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.071414459s)
	I1205 21:41:15.556906  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.071416348s)
	I1205 21:41:15.556935  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:15.556939  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1205 21:41:15.557031  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 21:41:15.557069  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 21:41:15.595094  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:17.533984  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.97688139s)
	I1205 21:41:17.534026  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1205 21:41:17.534061  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 21:41:17.534168  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 21:41:17.534059  357831 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.938925021s)
	I1205 21:41:17.534239  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 21:41:17.534355  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 21:41:16.559037  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:16.559676  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:16.559711  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:16.559616  359021 retry.go:31] will retry after 2.748634113s: waiting for machine to come up
	I1205 21:41:19.309762  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:19.310292  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:19.310325  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:19.310235  359021 retry.go:31] will retry after 4.490589015s: waiting for machine to come up
	I1205 21:41:18.991714  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.45750646s)
	I1205 21:41:18.991760  357831 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.457382547s)
	I1205 21:41:18.991769  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1205 21:41:18.991788  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1205 21:41:18.991796  357831 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 21:41:18.991871  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1205 21:41:19.652114  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 21:41:19.652153  357831 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1205 21:41:19.652207  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1205 21:41:21.430659  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.778424474s)
	I1205 21:41:21.430699  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1205 21:41:21.430728  357831 cache_images.go:123] Successfully loaded all cached images
	I1205 21:41:21.430737  357831 cache_images.go:92] duration metric: took 14.734820486s to LoadCachedImages
	I1205 21:41:21.430748  357831 kubeadm.go:934] updating node { 192.168.50.141 8443 v1.31.2 crio true true} ...
	I1205 21:41:21.430896  357831 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-500648 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-500648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:41:21.430974  357831 ssh_runner.go:195] Run: crio config
	I1205 21:41:21.485189  357831 cni.go:84] Creating CNI manager for ""
	I1205 21:41:21.485211  357831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:21.485222  357831 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:41:21.485252  357831 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.141 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-500648 NodeName:no-preload-500648 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:41:21.485440  357831 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-500648"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.141"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.141"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:41:21.485525  357831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:41:21.497109  357831 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:41:21.497191  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:41:21.506887  357831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1205 21:41:21.524456  357831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:41:21.541166  357831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1205 21:41:21.560513  357831 ssh_runner.go:195] Run: grep 192.168.50.141	control-plane.minikube.internal$ /etc/hosts
	I1205 21:41:21.564597  357831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.141	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:21.576227  357831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:21.695424  357831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:21.712683  357831 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648 for IP: 192.168.50.141
	I1205 21:41:21.712711  357831 certs.go:194] generating shared ca certs ...
	I1205 21:41:21.712735  357831 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:21.712951  357831 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:41:21.713005  357831 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:41:21.713019  357831 certs.go:256] generating profile certs ...
	I1205 21:41:21.713143  357831 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/client.key
	I1205 21:41:21.713264  357831 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/apiserver.key.832a65b0
	I1205 21:41:21.713335  357831 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/proxy-client.key
	I1205 21:41:21.713643  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:41:21.713708  357831 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:41:21.713729  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:41:21.713774  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:41:21.713820  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:41:21.713856  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:41:21.713961  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:21.714852  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:41:21.770708  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:41:21.813676  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:41:21.869550  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:41:21.898056  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 21:41:21.924076  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 21:41:21.950399  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:41:21.976765  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 21:41:22.003346  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:41:22.032363  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:41:22.071805  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:41:22.096470  357831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:41:22.113380  357831 ssh_runner.go:195] Run: openssl version
	I1205 21:41:22.119084  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:41:22.129657  357831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:41:22.134070  357831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:41:22.134139  357831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:41:22.139838  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:41:22.150575  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:41:22.161366  357831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:41:22.165685  357831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:41:22.165753  357831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:41:22.171788  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:41:22.182582  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:41:22.193460  357831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:22.197852  357831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:22.197934  357831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:22.203616  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:41:22.215612  357831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:41:22.220715  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:41:22.226952  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:41:22.233017  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:41:22.239118  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:41:22.245106  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:41:22.251085  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 21:41:22.257047  357831 kubeadm.go:392] StartCluster: {Name:no-preload-500648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-500648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:41:22.257152  357831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:41:22.257201  357831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:22.294003  357831 cri.go:89] found id: ""
	I1205 21:41:22.294119  357831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:41:22.304604  357831 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:41:22.304627  357831 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:41:22.304690  357831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:41:22.314398  357831 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:41:22.315469  357831 kubeconfig.go:125] found "no-preload-500648" server: "https://192.168.50.141:8443"
	I1205 21:41:22.317845  357831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:41:22.327468  357831 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.141
	I1205 21:41:22.327516  357831 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:41:22.327546  357831 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:41:22.327623  357831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:22.360852  357831 cri.go:89] found id: ""
	I1205 21:41:22.360955  357831 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:41:22.378555  357831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:41:22.388502  357831 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:41:22.388526  357831 kubeadm.go:157] found existing configuration files:
	
	I1205 21:41:22.388614  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:41:22.397598  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:41:22.397664  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:41:22.407664  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:41:22.417114  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:41:22.417192  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:41:22.427221  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:41:22.436656  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:41:22.436731  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:41:22.446571  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:41:22.456048  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:41:22.456120  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:41:22.466146  357831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:41:22.476563  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:22.582506  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:25.151918  358357 start.go:364] duration metric: took 3m9.46879842s to acquireMachinesLock for "old-k8s-version-601806"
	I1205 21:41:25.151996  358357 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:41:25.152009  358357 fix.go:54] fixHost starting: 
	I1205 21:41:25.152489  358357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:25.152557  358357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:25.172080  358357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36071
	I1205 21:41:25.172722  358357 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:25.173396  358357 main.go:141] libmachine: Using API Version  1
	I1205 21:41:25.173426  358357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:25.173791  358357 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:25.174049  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:25.174226  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetState
	I1205 21:41:25.176109  358357 fix.go:112] recreateIfNeeded on old-k8s-version-601806: state=Stopped err=<nil>
	I1205 21:41:25.176156  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	W1205 21:41:25.176374  358357 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:41:25.178317  358357 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-601806" ...
	I1205 21:41:23.803088  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.803582  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has current primary IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.803605  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Found IP for machine: 192.168.39.106
	I1205 21:41:23.803619  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Reserving static IP address...
	I1205 21:41:23.804049  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-751353", mac: "52:54:00:9a:bc:70", ip: "192.168.39.106"} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.804083  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Reserved static IP address: 192.168.39.106
	I1205 21:41:23.804103  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | skip adding static IP to network mk-default-k8s-diff-port-751353 - found existing host DHCP lease matching {name: "default-k8s-diff-port-751353", mac: "52:54:00:9a:bc:70", ip: "192.168.39.106"}
	I1205 21:41:23.804129  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Getting to WaitForSSH function...
	I1205 21:41:23.804158  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for SSH to be available...
	I1205 21:41:23.806941  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.807341  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.807372  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.807500  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Using SSH client type: external
	I1205 21:41:23.807527  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa (-rw-------)
	I1205 21:41:23.807597  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:41:23.807626  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | About to run SSH command:
	I1205 21:41:23.807645  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | exit 0
	I1205 21:41:23.938988  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | SSH cmd err, output: <nil>: 
	I1205 21:41:23.939382  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetConfigRaw
	I1205 21:41:23.940370  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:23.943944  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.944399  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.944433  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.944788  357912 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/config.json ...
	I1205 21:41:23.945040  357912 machine.go:93] provisionDockerMachine start ...
	I1205 21:41:23.945065  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:23.945331  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:23.948166  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.948598  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.948633  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.948777  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:23.948980  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:23.949138  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:23.949265  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:23.949425  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:23.949655  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:23.949669  357912 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:41:24.062400  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:41:24.062440  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetMachineName
	I1205 21:41:24.062712  357912 buildroot.go:166] provisioning hostname "default-k8s-diff-port-751353"
	I1205 21:41:24.062742  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetMachineName
	I1205 21:41:24.062947  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.065557  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.066077  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.066109  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.066235  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.066415  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.066571  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.066751  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.066932  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:24.067122  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:24.067134  357912 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-751353 && echo "default-k8s-diff-port-751353" | sudo tee /etc/hostname
	I1205 21:41:24.190609  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-751353
	
	I1205 21:41:24.190662  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.193538  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.193946  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.193985  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.194231  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.194443  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.194660  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.194909  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.195186  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:24.195396  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:24.195417  357912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-751353' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-751353/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-751353' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:41:24.310725  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:41:24.310770  357912 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:41:24.310812  357912 buildroot.go:174] setting up certificates
	I1205 21:41:24.310829  357912 provision.go:84] configureAuth start
	I1205 21:41:24.310839  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetMachineName
	I1205 21:41:24.311138  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:24.314161  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.314528  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.314552  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.314722  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.316953  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.317283  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.317324  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.317483  357912 provision.go:143] copyHostCerts
	I1205 21:41:24.317548  357912 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:41:24.317571  357912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:41:24.317629  357912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:41:24.317723  357912 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:41:24.317732  357912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:41:24.317753  357912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:41:24.317872  357912 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:41:24.317883  357912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:41:24.317933  357912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:41:24.318001  357912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-751353 san=[127.0.0.1 192.168.39.106 default-k8s-diff-port-751353 localhost minikube]
	I1205 21:41:24.483065  357912 provision.go:177] copyRemoteCerts
	I1205 21:41:24.483137  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:41:24.483175  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.486663  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.487074  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.487112  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.487277  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.487508  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.487726  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.487899  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:24.572469  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:41:24.597375  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1205 21:41:24.622122  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:41:24.649143  357912 provision.go:87] duration metric: took 338.295707ms to configureAuth
	I1205 21:41:24.649188  357912 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:41:24.649464  357912 config.go:182] Loaded profile config "default-k8s-diff-port-751353": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:41:24.649609  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.652646  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.653051  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.653101  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.653259  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.653492  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.653689  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.653841  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.654054  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:24.654213  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:24.654235  357912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:41:24.893672  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:41:24.893703  357912 machine.go:96] duration metric: took 948.646561ms to provisionDockerMachine
	I1205 21:41:24.893719  357912 start.go:293] postStartSetup for "default-k8s-diff-port-751353" (driver="kvm2")
	I1205 21:41:24.893733  357912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:41:24.893755  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:24.894145  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:41:24.894185  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.897565  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.897988  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.898022  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.898262  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.898579  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.898840  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.899066  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:24.986299  357912 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:41:24.991211  357912 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:41:24.991251  357912 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:41:24.991341  357912 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:41:24.991456  357912 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:41:24.991601  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:41:25.002264  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:25.031129  357912 start.go:296] duration metric: took 137.388294ms for postStartSetup
	I1205 21:41:25.031184  357912 fix.go:56] duration metric: took 19.859807882s for fixHost
	I1205 21:41:25.031214  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:25.034339  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.034678  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.034715  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.035027  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:25.035309  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.035501  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.035655  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:25.035858  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:25.036066  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:25.036081  357912 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:41:25.151697  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434885.125327326
	
	I1205 21:41:25.151729  357912 fix.go:216] guest clock: 1733434885.125327326
	I1205 21:41:25.151741  357912 fix.go:229] Guest: 2024-12-05 21:41:25.125327326 +0000 UTC Remote: 2024-12-05 21:41:25.03119011 +0000 UTC m=+264.754619927 (delta=94.137216ms)
	I1205 21:41:25.151796  357912 fix.go:200] guest clock delta is within tolerance: 94.137216ms
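For reference, the delta logged above is just the guest timestamp minus the controller's view of the same instant: 21:41:25.125327326 − 21:41:25.031190110 ≈ 0.094137216 s, i.e. the reported 94.137216ms, which is inside the skew tolerance, so no clock correction is forced on the guest here.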
	I1205 21:41:25.151807  357912 start.go:83] releasing machines lock for "default-k8s-diff-port-751353", held for 19.980496597s
	I1205 21:41:25.151845  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.152105  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:25.155285  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.155698  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.155735  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.155871  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.156424  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.156613  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.156747  357912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:41:25.156796  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:25.156844  357912 ssh_runner.go:195] Run: cat /version.json
	I1205 21:41:25.156876  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:25.159945  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160382  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160439  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.160464  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160692  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.160722  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160728  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:25.160943  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:25.160957  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.161100  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.161218  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:25.161341  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:25.161370  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:25.161473  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:25.244449  357912 ssh_runner.go:195] Run: systemctl --version
	I1205 21:41:25.271151  357912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:41:25.179884  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .Start
	I1205 21:41:25.180144  358357 main.go:141] libmachine: (old-k8s-version-601806) Ensuring networks are active...
	I1205 21:41:25.181095  358357 main.go:141] libmachine: (old-k8s-version-601806) Ensuring network default is active
	I1205 21:41:25.181522  358357 main.go:141] libmachine: (old-k8s-version-601806) Ensuring network mk-old-k8s-version-601806 is active
	I1205 21:41:25.181972  358357 main.go:141] libmachine: (old-k8s-version-601806) Getting domain xml...
	I1205 21:41:25.182848  358357 main.go:141] libmachine: (old-k8s-version-601806) Creating domain...
	I1205 21:41:25.428417  357912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:41:25.436849  357912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:41:25.436929  357912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:41:25.457952  357912 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:41:25.457989  357912 start.go:495] detecting cgroup driver to use...
	I1205 21:41:25.458073  357912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:41:25.478406  357912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:41:25.497547  357912 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:41:25.497636  357912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:41:25.516564  357912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:41:25.535753  357912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:41:25.692182  357912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:41:25.880739  357912 docker.go:233] disabling docker service ...
	I1205 21:41:25.880812  357912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:41:25.896490  357912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:41:25.911107  357912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:41:26.048384  357912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:41:26.186026  357912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:41:26.200922  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:41:26.221768  357912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:41:26.221848  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.232550  357912 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:41:26.232665  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.243173  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.254233  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.264888  357912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:41:26.275876  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.286642  357912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.311188  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
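The sed edits above only patch individual keys in /etc/crio/crio.conf.d/02-crio.conf. A minimal spot-check of the result, assuming the commands applied cleanly (the expected values in the comments are inferred from the commands in this log, not read back from the VM):

    # Hypothetical verification on the guest of the keys touched by the sed edits above.
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10"
    # cgroup_manager = "cgroupfs"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",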
	I1205 21:41:26.322696  357912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:41:26.332006  357912 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:41:26.332075  357912 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:41:26.345881  357912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:41:26.362014  357912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:26.487972  357912 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:41:26.584162  357912 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:41:26.584275  357912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:41:26.589290  357912 start.go:563] Will wait 60s for crictl version
	I1205 21:41:26.589379  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:41:26.593337  357912 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:41:26.629326  357912 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:41:26.629455  357912 ssh_runner.go:195] Run: crio --version
	I1205 21:41:26.656684  357912 ssh_runner.go:195] Run: crio --version
	I1205 21:41:26.685571  357912 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 21:41:23.536422  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:23.749946  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:23.804210  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:23.887538  357831 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:41:23.887671  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:24.387809  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:24.887821  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:24.905947  357831 api_server.go:72] duration metric: took 1.018402152s to wait for apiserver process to appear ...
	I1205 21:41:24.905979  357831 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:41:24.906008  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:24.906658  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": dial tcp 192.168.50.141:8443: connect: connection refused
	I1205 21:41:25.406416  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:26.687438  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:26.690614  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:26.691032  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:26.691070  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:26.691314  357912 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 21:41:26.695524  357912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:26.708289  357912 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-751353 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.31.2 ClusterName:default-k8s-diff-port-751353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:41:26.708409  357912 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:41:26.708474  357912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:26.757258  357912 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 21:41:26.757363  357912 ssh_runner.go:195] Run: which lz4
	I1205 21:41:26.762809  357912 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:41:26.767369  357912 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:41:26.767411  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 21:41:28.161289  357912 crio.go:462] duration metric: took 1.398584393s to copy over tarball
	I1205 21:41:28.161397  357912 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
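The preload tarball is lz4-compressed, so the same -I lz4 filter used for extraction also works for listing. An illustrative way to peek at what is being unpacked into /var on the guest (not part of the test itself):

    # List the first few entries of the preloaded image tarball without extracting it.
    sudo tar -I lz4 -tf /preloaded.tar.lz4 | head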
	I1205 21:41:26.542343  358357 main.go:141] libmachine: (old-k8s-version-601806) Waiting to get IP...
	I1205 21:41:26.543246  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:26.543692  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:26.543765  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:26.543663  359172 retry.go:31] will retry after 193.087452ms: waiting for machine to come up
	I1205 21:41:26.738243  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:26.738682  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:26.738713  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:26.738634  359172 retry.go:31] will retry after 347.304831ms: waiting for machine to come up
	I1205 21:41:27.088372  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:27.088982  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:27.089018  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:27.088880  359172 retry.go:31] will retry after 416.785806ms: waiting for machine to come up
	I1205 21:41:27.507765  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:27.508291  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:27.508320  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:27.508250  359172 retry.go:31] will retry after 407.585006ms: waiting for machine to come up
	I1205 21:41:27.918225  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:27.918900  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:27.918930  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:27.918844  359172 retry.go:31] will retry after 612.014901ms: waiting for machine to come up
	I1205 21:41:28.532179  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:28.532625  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:28.532658  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:28.532561  359172 retry.go:31] will retry after 784.813224ms: waiting for machine to come up
	I1205 21:41:29.318697  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:29.319199  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:29.319234  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:29.319136  359172 retry.go:31] will retry after 827.384433ms: waiting for machine to come up
	I1205 21:41:30.148284  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:30.148684  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:30.148711  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:30.148642  359172 retry.go:31] will retry after 1.314535235s: waiting for machine to come up
	I1205 21:41:30.406823  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:30.406896  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:30.321824  357912 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16037347s)
	I1205 21:41:30.321868  357912 crio.go:469] duration metric: took 2.160535841s to extract the tarball
	I1205 21:41:30.321879  357912 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 21:41:30.358990  357912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:30.401957  357912 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 21:41:30.401988  357912 cache_images.go:84] Images are preloaded, skipping loading
	I1205 21:41:30.402000  357912 kubeadm.go:934] updating node { 192.168.39.106 8444 v1.31.2 crio true true} ...
	I1205 21:41:30.402143  357912 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-751353 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-751353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:41:30.402242  357912 ssh_runner.go:195] Run: crio config
	I1205 21:41:30.452788  357912 cni.go:84] Creating CNI manager for ""
	I1205 21:41:30.452819  357912 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:30.452832  357912 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:41:30.452864  357912 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.106 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-751353 NodeName:default-k8s-diff-port-751353 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:41:30.453016  357912 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.106
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-751353"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.106"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.106"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:41:30.453081  357912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:41:30.463027  357912 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:41:30.463098  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:41:30.472345  357912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 21:41:30.489050  357912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:41:30.505872  357912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1205 21:41:30.523157  357912 ssh_runner.go:195] Run: grep 192.168.39.106	control-plane.minikube.internal$ /etc/hosts
	I1205 21:41:30.527012  357912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:30.538965  357912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:30.668866  357912 ssh_runner.go:195] Run: sudo systemctl start kubelet
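At this point the kubelet unit (352 bytes) and its 10-kubeadm.conf drop-in (328 bytes) copied by the scp steps above are in place and the service has been started. A hedged way to confirm on the guest that the drop-in's ExecStart override took effect (plain systemd commands, not something the test runs):

    # Show the unit together with its drop-ins; the ExecStart should match the
    # kubelet command line rendered earlier in this log, then check the service is up.
    systemctl cat kubelet
    systemctl is-active kubelet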
	I1205 21:41:30.686150  357912 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353 for IP: 192.168.39.106
	I1205 21:41:30.686187  357912 certs.go:194] generating shared ca certs ...
	I1205 21:41:30.686218  357912 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:30.686416  357912 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:41:30.686483  357912 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:41:30.686499  357912 certs.go:256] generating profile certs ...
	I1205 21:41:30.686629  357912 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/client.key
	I1205 21:41:30.686701  357912 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/apiserver.key.ec661d8c
	I1205 21:41:30.686738  357912 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/proxy-client.key
	I1205 21:41:30.686861  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:41:30.686890  357912 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:41:30.686898  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:41:30.686921  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:41:30.686942  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:41:30.686979  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:41:30.687017  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:30.687858  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:41:30.732722  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:41:30.762557  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:41:30.797976  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:41:30.825854  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1205 21:41:30.863220  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 21:41:30.887018  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:41:30.913503  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 21:41:30.940557  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:41:30.965468  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:41:30.991147  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:41:31.016782  357912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:41:31.036286  357912 ssh_runner.go:195] Run: openssl version
	I1205 21:41:31.042388  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:41:31.053011  357912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:31.057796  357912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:31.057880  357912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:31.064075  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:41:31.076633  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:41:31.089138  357912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:41:31.093653  357912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:41:31.093733  357912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:41:31.099403  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:41:31.111902  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:41:31.122743  357912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:41:31.127551  357912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:41:31.127666  357912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:41:31.133373  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
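
The cert steps above copy the minikube CA and the profile certificates onto the guest under /usr/share/ca-certificates and then link each one into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0), which is how OpenSSL-based clients locate trusted CAs. The Go sketch below reproduces that hash-and-link step for a single cert; it is not minikube's implementation, the cert path is copied from the log purely as an example, and it assumes openssl is on PATH and the process can write to /etc/ssl/certs.

// Hypothetical sketch: compute the OpenSSL subject hash for one CA file and
// create the /etc/ssl/certs/<hash>.0 symlink, mirroring the ln -fs commands above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	caPath := "/usr/share/ca-certificates/minikubeCA.pem" // example path from the log

	// "openssl x509 -hash -noout" prints the subject-name hash that OpenSSL
	// uses when looking up CA files named <hash>.0 in its certs directory.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", caPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // equivalent of the -f in "ln -fs"
	if err := os.Symlink(caPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", caPath)
}
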
	I1205 21:41:31.143934  357912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:41:31.148739  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:41:31.154995  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:41:31.161288  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:41:31.167555  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:41:31.173476  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:41:31.179371  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
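
Each control-plane certificate above is probed with "openssl x509 -checkend 86400", which exits non-zero when the certificate expires within the next 24 hours, so an expiring cert would trigger regeneration before the cluster restart continues. The sketch below does the same check natively with crypto/x509; it is illustrative only, and the two paths are just examples taken from the log.

// Sketch: report whether a PEM certificate expires within the given window,
// the native equivalent of "openssl x509 -checkend 86400".
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	// True when the cert's NotAfter falls inside the next `window` of time.
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{ // example paths from the log
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, "expires within 24h:", soon, "err:", err)
	}
}
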
	I1205 21:41:31.185238  357912 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-751353 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-751353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:41:31.185381  357912 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:41:31.185440  357912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:31.221359  357912 cri.go:89] found id: ""
	I1205 21:41:31.221448  357912 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:41:31.231975  357912 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:41:31.231997  357912 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:41:31.232043  357912 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:41:31.241662  357912 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:41:31.242685  357912 kubeconfig.go:125] found "default-k8s-diff-port-751353" server: "https://192.168.39.106:8444"
	I1205 21:41:31.244889  357912 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:41:31.254747  357912 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.106
	I1205 21:41:31.254798  357912 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:41:31.254815  357912 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:41:31.254884  357912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:31.291980  357912 cri.go:89] found id: ""
	I1205 21:41:31.292075  357912 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:41:31.312332  357912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:41:31.322240  357912 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:41:31.322267  357912 kubeadm.go:157] found existing configuration files:
	
	I1205 21:41:31.322323  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1205 21:41:31.331374  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:41:31.331462  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:41:31.340916  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1205 21:41:31.350121  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:41:31.350209  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:41:31.361302  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1205 21:41:31.372251  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:41:31.372316  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:41:31.383250  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1205 21:41:31.393771  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:41:31.393830  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:41:31.404949  357912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:41:31.416349  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:31.518522  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:32.687862  357912 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.169290848s)
	I1205 21:41:32.687902  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:32.918041  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:33.001916  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:33.088916  357912 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:41:33.089029  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:33.589452  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:34.089830  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:34.589399  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:34.606029  357912 api_server.go:72] duration metric: took 1.517086306s to wait for apiserver process to appear ...
	I1205 21:41:34.606071  357912 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:41:34.606100  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:31.465575  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:31.466129  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:31.466149  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:31.466051  359172 retry.go:31] will retry after 1.375463745s: waiting for machine to come up
	I1205 21:41:32.843149  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:32.843640  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:32.843672  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:32.843577  359172 retry.go:31] will retry after 1.414652744s: waiting for machine to come up
	I1205 21:41:34.259549  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:34.260076  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:34.260106  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:34.260026  359172 retry.go:31] will retry after 2.845213342s: waiting for machine to come up
	I1205 21:41:35.408016  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:35.408069  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:37.262251  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:41:37.262290  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:41:37.262311  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:37.319344  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:41:37.319389  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:41:37.606930  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:37.611927  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:41:37.611962  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:41:38.106614  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:38.111641  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:41:38.111677  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:41:38.606218  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:38.613131  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 200:
	ok
	I1205 21:41:38.628002  357912 api_server.go:141] control plane version: v1.31.2
	I1205 21:41:38.628040  357912 api_server.go:131] duration metric: took 4.021961685s to wait for apiserver health ...
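
After the kubeadm init phases bring the control plane back up, the restart path polls the apiserver's /healthz endpoint, treating the interim 403 (anonymous request before the RBAC bootstrap hook finishes) and 500 (post-start hooks still failing) responses above as retryable until a 200 arrives. Below is a rough Go sketch of such a poll loop; the URL is taken from the log, while the timeout, retry interval, and the shortcut of skipping TLS verification are assumptions for illustration (the real client authenticates with the cluster certificates).

// Sketch: poll an apiserver /healthz endpoint until it reports 200 OK.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.39.106:8444/healthz" // address and port from the log

	client := &http.Client{
		Timeout: 5 * time.Second,
		// Illustrative shortcut: do not verify the cluster's internal CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}

	deadline := time.Now().Add(2 * time.Minute) // assumed overall budget
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
			fmt.Println("healthz returned", resp.StatusCode, "- retrying")
		} else {
			fmt.Println("healthz probe failed:", err, "- retrying")
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
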
	I1205 21:41:38.628050  357912 cni.go:84] Creating CNI manager for ""
	I1205 21:41:38.628057  357912 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:38.630126  357912 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:41:38.631655  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:41:38.645320  357912 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 21:41:38.668869  357912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:41:38.680453  357912 system_pods.go:59] 8 kube-system pods found
	I1205 21:41:38.680493  357912 system_pods.go:61] "coredns-7c65d6cfc9-mll8z" [fcea0826-1093-43ce-87d0-26fb19447609] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 21:41:38.680501  357912 system_pods.go:61] "etcd-default-k8s-diff-port-751353" [dbade41d-2f53-45d5-aeb6-9b7df2565ef6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 21:41:38.680509  357912 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751353" [b94221d1-ab02-4406-9f34-156813ddfd4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 21:41:38.680516  357912 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751353" [70669ec2-cddf-4ec9-a3d6-ee7cab8cd75c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 21:41:38.680521  357912 system_pods.go:61] "kube-proxy-b4ws4" [d2620959-e3e4-4575-af26-243207a83495] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 21:41:38.680526  357912 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751353" [4a519b64-0ddd-425c-a8d6-de52e3508f80] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 21:41:38.680537  357912 system_pods.go:61] "metrics-server-6867b74b74-xb867" [6ac4cc31-ed56-44b9-9a83-76296436bc34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:41:38.680541  357912 system_pods.go:61] "storage-provisioner" [aabf9cc9-c416-4db2-97b0-23533dd76c28] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 21:41:38.680549  357912 system_pods.go:74] duration metric: took 11.655012ms to wait for pod list to return data ...
	I1205 21:41:38.680557  357912 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:41:38.685260  357912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:41:38.685290  357912 node_conditions.go:123] node cpu capacity is 2
	I1205 21:41:38.685302  357912 node_conditions.go:105] duration metric: took 4.740612ms to run NodePressure ...
	I1205 21:41:38.685335  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:38.997715  357912 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 21:41:39.003388  357912 kubeadm.go:739] kubelet initialised
	I1205 21:41:39.003422  357912 kubeadm.go:740] duration metric: took 5.675839ms waiting for restarted kubelet to initialise ...
	I1205 21:41:39.003435  357912 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:41:39.008779  357912 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.015438  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.015469  357912 pod_ready.go:82] duration metric: took 6.659336ms for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.015480  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.015487  357912 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.022944  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.022979  357912 pod_ready.go:82] duration metric: took 7.480121ms for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.022992  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.023000  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.030021  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.030060  357912 pod_ready.go:82] duration metric: took 7.051363ms for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.030077  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.030087  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.074051  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.074103  357912 pod_ready.go:82] duration metric: took 44.006019ms for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.074130  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.074142  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.472623  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-proxy-b4ws4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.472654  357912 pod_ready.go:82] duration metric: took 398.499259ms for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.472665  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-proxy-b4ws4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.472673  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.873821  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.873863  357912 pod_ready.go:82] duration metric: took 401.179066ms for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.873887  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.873914  357912 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:40.272289  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:40.272322  357912 pod_ready.go:82] duration metric: took 398.392874ms for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:40.272338  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:40.272349  357912 pod_ready.go:39] duration metric: took 1.268896186s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
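
The pod_ready wait above checks each system-critical pod's Ready condition but skips the check (the pod_ready.go:98/67 lines) while the hosting node itself is not yet Ready. A hedged client-go sketch of the per-pod readiness check follows; the kubeconfig path and pod name are copied from this run purely as examples, and this is not the test harness's own code.

// Sketch: read one pod and report whether its PodReady condition is True.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Example kubeconfig path from the log; adjust for a real environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20053-293485/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-7c65d6cfc9-mll8z", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}

	ready := false
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			ready = true
		}
	}
	fmt.Println(pod.Name, "ready:", ready)
}
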
	I1205 21:41:40.272381  357912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:41:40.284524  357912 ops.go:34] apiserver oom_adj: -16
	I1205 21:41:40.284549  357912 kubeadm.go:597] duration metric: took 9.052545962s to restartPrimaryControlPlane
	I1205 21:41:40.284576  357912 kubeadm.go:394] duration metric: took 9.09933298s to StartCluster
	I1205 21:41:40.284597  357912 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:40.284680  357912 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:41:40.286372  357912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:40.286676  357912 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.106 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:41:40.286766  357912 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:41:40.286905  357912 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-751353"
	I1205 21:41:40.286928  357912 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-751353"
	I1205 21:41:40.286933  357912 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-751353"
	I1205 21:41:40.286985  357912 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-751353"
	I1205 21:41:40.286986  357912 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-751353"
	I1205 21:41:40.287022  357912 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-751353"
	W1205 21:41:40.286939  357912 addons.go:243] addon storage-provisioner should already be in state true
	W1205 21:41:40.287039  357912 addons.go:243] addon metrics-server should already be in state true
	I1205 21:41:40.287110  357912 host.go:66] Checking if "default-k8s-diff-port-751353" exists ...
	I1205 21:41:40.286937  357912 config.go:182] Loaded profile config "default-k8s-diff-port-751353": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:41:40.287215  357912 host.go:66] Checking if "default-k8s-diff-port-751353" exists ...
	I1205 21:41:40.287507  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.287571  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.287640  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.287577  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.287688  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.287824  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.288418  357912 out.go:177] * Verifying Kubernetes components...
	I1205 21:41:40.289707  357912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:40.304423  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45233
	I1205 21:41:40.304453  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I1205 21:41:40.304433  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38023
	I1205 21:41:40.304933  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.305518  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.305712  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.305741  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.306151  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.306169  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.306548  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.306829  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.307143  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.307153  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.307800  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.307824  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.308518  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.308565  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.308987  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.309564  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.309596  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.311352  357912 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-751353"
	W1205 21:41:40.311374  357912 addons.go:243] addon default-storageclass should already be in state true
	I1205 21:41:40.311408  357912 host.go:66] Checking if "default-k8s-diff-port-751353" exists ...
	I1205 21:41:40.311880  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.311929  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.325059  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36109
	I1205 21:41:40.325663  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.326356  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.326400  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.326752  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.326942  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.327767  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I1205 21:41:40.328173  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.328657  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.328678  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.328768  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:40.328984  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.329370  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.329409  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.329811  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I1205 21:41:40.330230  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.330631  357912 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 21:41:40.330708  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.330726  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.331052  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.331216  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.332202  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 21:41:40.332226  357912 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 21:41:40.332260  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:40.333642  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:40.335436  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.335614  357912 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:37.107579  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:37.108121  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:37.108153  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:37.108064  359172 retry.go:31] will retry after 2.969209087s: waiting for machine to come up
	I1205 21:41:40.079008  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:40.079546  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:40.079631  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:40.079495  359172 retry.go:31] will retry after 4.062877726s: waiting for machine to come up
	I1205 21:41:40.335902  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:40.335936  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.336055  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:40.336244  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:40.336387  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:40.336516  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:40.337155  357912 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:41:40.337173  357912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 21:41:40.337195  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:40.339861  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.340258  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:40.340291  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.340556  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:40.340737  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:40.340888  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:40.341009  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:40.353260  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42177
	I1205 21:41:40.353780  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.354465  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.354495  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.354914  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.355181  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.357128  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:40.357445  357912 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 21:41:40.357466  357912 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 21:41:40.357487  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:40.360926  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.361410  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:40.361436  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.361753  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:40.361968  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:40.362143  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:40.362304  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:40.489718  357912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:40.506486  357912 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-751353" to be "Ready" ...
	I1205 21:41:40.575280  357912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:41:40.594938  357912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 21:41:40.709917  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 21:41:40.709953  357912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 21:41:40.766042  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 21:41:40.766076  357912 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 21:41:40.841338  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:41:40.841371  357912 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 21:41:40.890122  357912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:41:41.864084  357912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.269106426s)
	I1205 21:41:41.864153  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864168  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864080  357912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.288748728s)
	I1205 21:41:41.864273  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864294  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864544  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.864563  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.864592  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.864614  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.864614  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Closing plugin on server side
	I1205 21:41:41.864623  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864641  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864682  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864714  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864909  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.864929  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.865021  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Closing plugin on server side
	I1205 21:41:41.865050  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.865073  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.873134  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.873158  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.873488  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.873517  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.896304  357912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.006129117s)
	I1205 21:41:41.896383  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.896401  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.896726  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.896749  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.896760  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.896770  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.897064  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.897084  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.897097  357912 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-751353"
	I1205 21:41:41.899809  357912 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1205 21:41:40.409151  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:40.409197  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:41.901166  357912 addons.go:510] duration metric: took 1.61441521s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1205 21:41:42.512064  357912 node_ready.go:53] node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:45.011050  357912 node_ready.go:53] node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:44.147162  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.147843  358357 main.go:141] libmachine: (old-k8s-version-601806) Found IP for machine: 192.168.61.123
	I1205 21:41:44.147874  358357 main.go:141] libmachine: (old-k8s-version-601806) Reserving static IP address...
	I1205 21:41:44.147892  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has current primary IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.148399  358357 main.go:141] libmachine: (old-k8s-version-601806) Reserved static IP address: 192.168.61.123
	I1205 21:41:44.148443  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "old-k8s-version-601806", mac: "52:54:00:11:1e:c8", ip: "192.168.61.123"} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.148458  358357 main.go:141] libmachine: (old-k8s-version-601806) Waiting for SSH to be available...
	I1205 21:41:44.148487  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | skip adding static IP to network mk-old-k8s-version-601806 - found existing host DHCP lease matching {name: "old-k8s-version-601806", mac: "52:54:00:11:1e:c8", ip: "192.168.61.123"}
	I1205 21:41:44.148519  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | Getting to WaitForSSH function...
	I1205 21:41:44.151017  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.151372  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.151406  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.151544  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | Using SSH client type: external
	I1205 21:41:44.151575  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa (-rw-------)
	I1205 21:41:44.151611  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:41:44.151629  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | About to run SSH command:
	I1205 21:41:44.151656  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | exit 0
	I1205 21:41:44.282019  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | SSH cmd err, output: <nil>: 
	I1205 21:41:44.282419  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetConfigRaw
	I1205 21:41:44.283146  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:44.285924  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.286335  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.286365  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.286633  358357 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/config.json ...
	I1205 21:41:44.286844  358357 machine.go:93] provisionDockerMachine start ...
	I1205 21:41:44.286865  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:44.287119  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.289692  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.290060  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.290090  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.290192  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.290392  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.290567  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.290726  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.290904  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:44.291168  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:44.291183  358357 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:41:44.410444  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:41:44.410483  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:41:44.410769  358357 buildroot.go:166] provisioning hostname "old-k8s-version-601806"
	I1205 21:41:44.410800  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:41:44.410975  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.414019  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.414402  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.414437  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.414618  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.414822  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.415001  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.415139  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.415384  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:44.415620  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:44.415639  358357 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-601806 && echo "old-k8s-version-601806" | sudo tee /etc/hostname
	I1205 21:41:44.544783  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-601806
	
	I1205 21:41:44.544820  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.547980  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.548253  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.548284  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.548548  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.548806  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.549015  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.549199  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.549363  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:44.549596  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:44.549625  358357 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-601806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-601806/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-601806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:41:44.675051  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:41:44.675089  358357 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:41:44.675133  358357 buildroot.go:174] setting up certificates
	I1205 21:41:44.675147  358357 provision.go:84] configureAuth start
	I1205 21:41:44.675161  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:41:44.675484  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:44.678325  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.678651  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.678670  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.678845  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.681024  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.681380  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.681419  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.681555  358357 provision.go:143] copyHostCerts
	I1205 21:41:44.681614  358357 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:41:44.681635  358357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:41:44.681692  358357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:41:44.681807  358357 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:41:44.681818  358357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:41:44.681840  358357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:41:44.681895  358357 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:41:44.681923  358357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:41:44.681950  358357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:41:44.682008  358357 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-601806 san=[127.0.0.1 192.168.61.123 localhost minikube old-k8s-version-601806]
	I1205 21:41:44.920345  358357 provision.go:177] copyRemoteCerts
	I1205 21:41:44.920412  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:41:44.920445  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.923237  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.923573  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.923617  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.923858  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.924082  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.924266  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.924408  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.013123  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:41:45.037220  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 21:41:45.061460  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:41:45.086412  358357 provision.go:87] duration metric: took 411.247612ms to configureAuth
	I1205 21:41:45.086449  358357 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:41:45.086670  358357 config.go:182] Loaded profile config "old-k8s-version-601806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 21:41:45.086772  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.089593  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.090011  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.090044  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.090279  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.090515  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.090695  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.090845  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.091119  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:45.091338  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:45.091355  358357 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:41:45.320779  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:41:45.320809  358357 machine.go:96] duration metric: took 1.033951427s to provisionDockerMachine
	I1205 21:41:45.320822  358357 start.go:293] postStartSetup for "old-k8s-version-601806" (driver="kvm2")
	I1205 21:41:45.320833  358357 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:41:45.320864  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.321259  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:41:45.321295  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.324521  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.324898  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.324926  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.325061  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.325278  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.325449  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.325608  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.413576  358357 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:41:45.418099  358357 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:41:45.418129  358357 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:41:45.418192  358357 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:41:45.418313  358357 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:41:45.418436  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:41:45.428537  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:45.453505  358357 start.go:296] duration metric: took 132.665138ms for postStartSetup
	I1205 21:41:45.453578  358357 fix.go:56] duration metric: took 20.301569608s for fixHost
	I1205 21:41:45.453610  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.456671  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.457095  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.457119  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.457317  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.457534  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.457723  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.457851  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.458100  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:45.458291  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:45.458303  358357 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:41:45.574874  357296 start.go:364] duration metric: took 55.701965725s to acquireMachinesLock for "embed-certs-425614"
	I1205 21:41:45.574934  357296 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:41:45.574944  357296 fix.go:54] fixHost starting: 
	I1205 21:41:45.575470  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:45.575532  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:45.593184  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39281
	I1205 21:41:45.593628  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:45.594222  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:41:45.594249  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:45.594599  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:45.594797  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:41:45.594945  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:41:45.596532  357296 fix.go:112] recreateIfNeeded on embed-certs-425614: state=Stopped err=<nil>
	I1205 21:41:45.596560  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	W1205 21:41:45.596698  357296 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:41:45.598630  357296 out.go:177] * Restarting existing kvm2 VM for "embed-certs-425614" ...
	I1205 21:41:45.574677  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434905.556875765
	
	I1205 21:41:45.574707  358357 fix.go:216] guest clock: 1733434905.556875765
	I1205 21:41:45.574720  358357 fix.go:229] Guest: 2024-12-05 21:41:45.556875765 +0000 UTC Remote: 2024-12-05 21:41:45.453584649 +0000 UTC m=+209.931227837 (delta=103.291116ms)
	I1205 21:41:45.574744  358357 fix.go:200] guest clock delta is within tolerance: 103.291116ms
	I1205 21:41:45.574749  358357 start.go:83] releasing machines lock for "old-k8s-version-601806", held for 20.422787607s
	I1205 21:41:45.574777  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.575102  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:45.578097  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.578534  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.578565  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.578786  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.579457  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.579662  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.579786  358357 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:41:45.579845  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.579919  358357 ssh_runner.go:195] Run: cat /version.json
	I1205 21:41:45.579944  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.582811  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.582951  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.583117  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.583153  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.583388  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.583409  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.583436  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.583601  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.583609  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.583801  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.583868  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.583990  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.584026  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.584185  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.667101  358357 ssh_runner.go:195] Run: systemctl --version
	I1205 21:41:45.694059  358357 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:41:45.843409  358357 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:41:45.849628  358357 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:41:45.849714  358357 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:41:45.867490  358357 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:41:45.867526  358357 start.go:495] detecting cgroup driver to use...
	I1205 21:41:45.867613  358357 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:41:45.887817  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:41:45.902760  358357 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:41:45.902837  358357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:41:45.921492  358357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:41:45.938236  358357 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:41:46.094034  358357 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:41:46.313078  358357 docker.go:233] disabling docker service ...
	I1205 21:41:46.313159  358357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:41:46.330094  358357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:41:46.348887  358357 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:41:46.539033  358357 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:41:46.664752  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:41:46.681892  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:41:46.703802  358357 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 21:41:46.703907  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.716808  358357 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:41:46.716869  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.728088  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.739606  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.750998  358357 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:41:46.763097  358357 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:41:46.773657  358357 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:41:46.773720  358357 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:41:46.787789  358357 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:41:46.799018  358357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:46.920247  358357 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:41:47.024151  358357 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:41:47.024236  358357 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:41:47.029240  358357 start.go:563] Will wait 60s for crictl version
	I1205 21:41:47.029326  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:47.033665  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:41:47.072480  358357 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:41:47.072588  358357 ssh_runner.go:195] Run: crio --version
	I1205 21:41:47.110829  358357 ssh_runner.go:195] Run: crio --version
	I1205 21:41:47.141698  358357 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1205 21:41:45.600135  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Start
	I1205 21:41:45.600390  357296 main.go:141] libmachine: (embed-certs-425614) Ensuring networks are active...
	I1205 21:41:45.601186  357296 main.go:141] libmachine: (embed-certs-425614) Ensuring network default is active
	I1205 21:41:45.601636  357296 main.go:141] libmachine: (embed-certs-425614) Ensuring network mk-embed-certs-425614 is active
	I1205 21:41:45.602188  357296 main.go:141] libmachine: (embed-certs-425614) Getting domain xml...
	I1205 21:41:45.603057  357296 main.go:141] libmachine: (embed-certs-425614) Creating domain...
	I1205 21:41:47.045240  357296 main.go:141] libmachine: (embed-certs-425614) Waiting to get IP...
	I1205 21:41:47.046477  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.047047  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.047150  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.047040  359359 retry.go:31] will retry after 219.743522ms: waiting for machine to come up
	I1205 21:41:47.268762  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.269407  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.269442  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.269336  359359 retry.go:31] will retry after 242.318322ms: waiting for machine to come up
	I1205 21:41:45.410351  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:45.410420  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:45.616395  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": read tcp 192.168.50.1:48034->192.168.50.141:8443: read: connection reset by peer
	I1205 21:41:45.906800  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:45.907594  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": dial tcp 192.168.50.141:8443: connect: connection refused
	I1205 21:41:46.407096  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:47.011671  357912 node_ready.go:53] node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:48.011005  357912 node_ready.go:49] node "default-k8s-diff-port-751353" has status "Ready":"True"
	I1205 21:41:48.011040  357912 node_ready.go:38] duration metric: took 7.504506203s for node "default-k8s-diff-port-751353" to be "Ready" ...
	I1205 21:41:48.011060  357912 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:41:48.021950  357912 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:48.038141  357912 pod_ready.go:93] pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:48.038176  357912 pod_ready.go:82] duration metric: took 16.187757ms for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:48.038191  357912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:50.046001  357912 pod_ready.go:103] pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"False"
	I1205 21:41:47.143015  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:47.146059  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:47.146503  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:47.146536  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:47.146811  358357 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1205 21:41:47.151654  358357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:47.164839  358357 kubeadm.go:883] updating cluster {Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:41:47.165019  358357 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 21:41:47.165090  358357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:47.213546  358357 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 21:41:47.213640  358357 ssh_runner.go:195] Run: which lz4
	I1205 21:41:47.219695  358357 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:41:47.224752  358357 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:41:47.224801  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1205 21:41:48.787144  358357 crio.go:462] duration metric: took 1.567500675s to copy over tarball
	I1205 21:41:48.787253  358357 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 21:41:47.514192  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.514819  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.514860  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.514767  359359 retry.go:31] will retry after 467.274164ms: waiting for machine to come up
	I1205 21:41:47.983367  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.983985  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.984015  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.983919  359359 retry.go:31] will retry after 577.298405ms: waiting for machine to come up
	I1205 21:41:48.562668  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:48.563230  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:48.563278  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:48.563175  359359 retry.go:31] will retry after 707.838313ms: waiting for machine to come up
	I1205 21:41:49.273409  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:49.273943  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:49.273977  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:49.273863  359359 retry.go:31] will retry after 908.711328ms: waiting for machine to come up
	I1205 21:41:50.183875  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:50.184278  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:50.184310  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:50.184225  359359 retry.go:31] will retry after 941.803441ms: waiting for machine to come up
	I1205 21:41:51.127915  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:51.128486  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:51.128549  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:51.128467  359359 retry.go:31] will retry after 1.289932898s: waiting for machine to come up
	I1205 21:41:51.407970  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:51.408037  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:52.046717  357912 pod_ready.go:103] pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"False"
	I1205 21:41:54.367409  357912 pod_ready.go:93] pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.367441  357912 pod_ready.go:82] duration metric: took 6.32924141s for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.367457  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.373495  357912 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.373546  357912 pod_ready.go:82] duration metric: took 6.066723ms for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.373565  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.380982  357912 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.381010  357912 pod_ready.go:82] duration metric: took 7.434049ms for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.381024  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.387297  357912 pod_ready.go:93] pod "kube-proxy-b4ws4" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.387321  357912 pod_ready.go:82] duration metric: took 6.290388ms for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.387331  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.392902  357912 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.392931  357912 pod_ready.go:82] duration metric: took 5.593155ms for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.392942  357912 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:51.832182  358357 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.044870872s)
	I1205 21:41:51.832229  358357 crio.go:469] duration metric: took 3.045045829s to extract the tarball
	I1205 21:41:51.832241  358357 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 21:41:51.876863  358357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:51.916280  358357 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 21:41:51.916312  358357 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 21:41:51.916448  358357 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:51.916490  358357 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1205 21:41:51.916520  358357 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:51.916416  358357 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:51.916539  358357 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1205 21:41:51.916422  358357 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:51.916534  358357 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:51.916415  358357 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:51.918641  358357 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:51.918657  358357 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:51.918673  358357 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:51.918675  358357 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:51.918648  358357 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:51.918699  358357 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 21:41:51.918648  358357 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1205 21:41:51.918649  358357 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.084598  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.085487  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.085575  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.089387  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.097316  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.097466  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.143119  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1205 21:41:52.188847  358357 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1205 21:41:52.188903  358357 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.188964  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.249950  358357 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1205 21:41:52.249988  358357 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1205 21:41:52.250006  358357 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.250026  358357 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.250065  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.250070  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.250110  358357 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1205 21:41:52.250142  358357 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.250181  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.264329  358357 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1205 21:41:52.264458  358357 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.264384  358357 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1205 21:41:52.264539  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.264575  358357 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.264634  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.276286  358357 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1205 21:41:52.276339  358357 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 21:41:52.276369  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.276378  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.276383  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.276499  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.276544  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.277043  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.277127  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.383827  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.385512  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.385513  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:41:52.404747  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.413164  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.413203  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.413257  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.502227  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.551456  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:41:52.551634  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.551659  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.596670  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.596746  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.596677  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.649281  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1205 21:41:52.726027  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:41:52.726093  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1205 21:41:52.726149  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1205 21:41:52.726173  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1205 21:41:52.726266  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1205 21:41:52.726300  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1205 21:41:52.759125  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 21:41:52.856925  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:53.004246  358357 cache_images.go:92] duration metric: took 1.087915709s to LoadCachedImages
	W1205 21:41:53.004349  358357 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1205 21:41:53.004364  358357 kubeadm.go:934] updating node { 192.168.61.123 8443 v1.20.0 crio true true} ...
	I1205 21:41:53.004516  358357 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-601806 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:41:53.004596  358357 ssh_runner.go:195] Run: crio config
	I1205 21:41:53.053135  358357 cni.go:84] Creating CNI manager for ""
	I1205 21:41:53.053159  358357 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:53.053174  358357 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:41:53.053208  358357 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-601806 NodeName:old-k8s-version-601806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 21:41:53.053385  358357 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-601806"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:41:53.053465  358357 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1205 21:41:53.064225  358357 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:41:53.064320  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:41:53.074565  358357 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 21:41:53.091812  358357 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:41:53.111455  358357 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1205 21:41:53.131057  358357 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I1205 21:41:53.135026  358357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:53.148476  358357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:53.289114  358357 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:53.309855  358357 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806 for IP: 192.168.61.123
	I1205 21:41:53.309886  358357 certs.go:194] generating shared ca certs ...
	I1205 21:41:53.309923  358357 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:53.310122  358357 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:41:53.310176  358357 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:41:53.310202  358357 certs.go:256] generating profile certs ...
	I1205 21:41:53.310390  358357 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/client.key
	I1205 21:41:53.310485  358357 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.key.a6e43dea
	I1205 21:41:53.310568  358357 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.key
	I1205 21:41:53.310814  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:41:53.310866  358357 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:41:53.310880  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:41:53.310912  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:41:53.310960  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:41:53.311000  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:41:53.311072  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:53.312161  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:41:53.353059  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:41:53.386512  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:41:53.423583  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:41:53.463250  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 21:41:53.494884  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 21:41:53.529876  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:41:53.579695  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 21:41:53.606144  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:41:53.631256  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:41:53.656184  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:41:53.680842  358357 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:41:53.700705  358357 ssh_runner.go:195] Run: openssl version
	I1205 21:41:53.707800  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:41:53.719776  358357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:41:53.724558  358357 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:41:53.724630  358357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:41:53.731088  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:41:53.742620  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:41:53.754961  358357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:53.759594  358357 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:53.759669  358357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:53.765536  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:41:53.776756  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:41:53.789117  358357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:41:53.793629  358357 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:41:53.793707  358357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:41:53.799394  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:41:53.810660  358357 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:41:53.815344  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:41:53.821418  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:41:53.827800  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:41:53.834376  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:41:53.840645  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:41:53.847470  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 21:41:53.854401  358357 kubeadm.go:392] StartCluster: {Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:41:53.854504  358357 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:41:53.854569  358357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:53.893993  358357 cri.go:89] found id: ""
	I1205 21:41:53.894081  358357 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:41:53.904808  358357 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:41:53.904829  358357 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:41:53.904876  358357 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:41:53.915573  358357 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:41:53.916624  358357 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-601806" does not appear in /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:41:53.917310  358357 kubeconfig.go:62] /home/jenkins/minikube-integration/20053-293485/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-601806" cluster setting kubeconfig missing "old-k8s-version-601806" context setting]
	I1205 21:41:53.918211  358357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:53.978448  358357 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:41:53.989629  358357 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.123
	I1205 21:41:53.989674  358357 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:41:53.989707  358357 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:41:53.989791  358357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:54.027722  358357 cri.go:89] found id: ""
	I1205 21:41:54.027816  358357 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:41:54.045095  358357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:41:54.058119  358357 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:41:54.058145  358357 kubeadm.go:157] found existing configuration files:
	
	I1205 21:41:54.058211  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:41:54.070466  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:41:54.070563  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:41:54.081555  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:41:54.093332  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:41:54.093415  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:41:54.103877  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:41:54.114047  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:41:54.114117  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:41:54.126566  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:41:54.138673  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:41:54.138767  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:41:54.149449  358357 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:41:54.162818  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:54.294483  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:54.983905  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:55.218496  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:55.340478  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:55.440382  358357 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:41:55.440495  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:52.419705  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:52.420193  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:52.420230  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:52.420115  359359 retry.go:31] will retry after 1.684643705s: waiting for machine to come up
	I1205 21:41:54.106187  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:54.106714  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:54.106754  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:54.106660  359359 retry.go:31] will retry after 1.531754159s: waiting for machine to come up
	I1205 21:41:55.639991  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:55.640467  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:55.640503  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:55.640401  359359 retry.go:31] will retry after 2.722460669s: waiting for machine to come up
	I1205 21:41:56.409347  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:56.409397  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:56.399969  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:41:58.900439  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:41:55.941513  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:56.440634  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:56.941451  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:57.440602  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:57.940778  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:58.441396  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:58.941148  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:59.441320  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:59.941573  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:00.441005  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:58.366356  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:58.366849  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:58.366874  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:58.366805  359359 retry.go:31] will retry after 2.312099452s: waiting for machine to come up
	I1205 21:42:00.680417  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:00.680953  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:42:00.680977  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:42:00.680904  359359 retry.go:31] will retry after 3.145457312s: waiting for machine to come up
	I1205 21:42:01.410313  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:42:01.410382  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.204308  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:03.204353  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:03.204374  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.246513  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:03.246569  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:03.406787  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.411529  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:03.411571  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:03.907108  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.911621  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:03.911669  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:04.407111  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:04.416185  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:04.416225  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:04.906151  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:04.913432  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 200:
	ok
	I1205 21:42:04.923422  357831 api_server.go:141] control plane version: v1.31.2
	I1205 21:42:04.923466  357831 api_server.go:131] duration metric: took 40.017479306s to wait for apiserver health ...
	I1205 21:42:04.923479  357831 cni.go:84] Creating CNI manager for ""
	I1205 21:42:04.923488  357831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:42:04.925861  357831 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:42:01.399834  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:03.399888  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:00.941505  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:01.441014  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:01.940938  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:02.440702  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:02.940749  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:03.441519  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:03.941098  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:04.440754  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:04.941260  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:05.441179  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:03.830452  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.830997  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has current primary IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.831031  357296 main.go:141] libmachine: (embed-certs-425614) Found IP for machine: 192.168.72.8
	I1205 21:42:03.831046  357296 main.go:141] libmachine: (embed-certs-425614) Reserving static IP address...
	I1205 21:42:03.831505  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "embed-certs-425614", mac: "52:54:00:d8:bb:db", ip: "192.168.72.8"} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.831534  357296 main.go:141] libmachine: (embed-certs-425614) Reserved static IP address: 192.168.72.8
	I1205 21:42:03.831552  357296 main.go:141] libmachine: (embed-certs-425614) DBG | skip adding static IP to network mk-embed-certs-425614 - found existing host DHCP lease matching {name: "embed-certs-425614", mac: "52:54:00:d8:bb:db", ip: "192.168.72.8"}
	I1205 21:42:03.831566  357296 main.go:141] libmachine: (embed-certs-425614) Waiting for SSH to be available...
	I1205 21:42:03.831574  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Getting to WaitForSSH function...
	I1205 21:42:03.833969  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.834352  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.834388  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.834532  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Using SSH client type: external
	I1205 21:42:03.834550  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa (-rw-------)
	I1205 21:42:03.834569  357296 main.go:141] libmachine: (embed-certs-425614) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.8 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:42:03.834587  357296 main.go:141] libmachine: (embed-certs-425614) DBG | About to run SSH command:
	I1205 21:42:03.834598  357296 main.go:141] libmachine: (embed-certs-425614) DBG | exit 0
	I1205 21:42:03.962943  357296 main.go:141] libmachine: (embed-certs-425614) DBG | SSH cmd err, output: <nil>: 
	I1205 21:42:03.963457  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetConfigRaw
	I1205 21:42:03.964327  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:03.967583  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.968035  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.968069  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.968471  357296 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/config.json ...
	I1205 21:42:03.968788  357296 machine.go:93] provisionDockerMachine start ...
	I1205 21:42:03.968820  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:03.969139  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:03.972165  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.972515  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.972545  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.972636  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:03.972845  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:03.973079  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:03.973321  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:03.973541  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:03.973743  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:03.973756  357296 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:42:04.086658  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:42:04.086701  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:42:04.087004  357296 buildroot.go:166] provisioning hostname "embed-certs-425614"
	I1205 21:42:04.087040  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:42:04.087297  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.090622  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.091119  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.091157  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.091374  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.091647  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.091854  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.092065  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.092302  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:04.092559  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:04.092590  357296 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-425614 && echo "embed-certs-425614" | sudo tee /etc/hostname
	I1205 21:42:04.222630  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-425614
	
	I1205 21:42:04.222668  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.225969  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.226469  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.226507  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.226742  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.226966  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.227230  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.227436  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.227672  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:04.227862  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:04.227878  357296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-425614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-425614/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-425614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:42:04.351706  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:42:04.351775  357296 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:42:04.351853  357296 buildroot.go:174] setting up certificates
	I1205 21:42:04.351869  357296 provision.go:84] configureAuth start
	I1205 21:42:04.351894  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:42:04.352249  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:04.355753  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.356188  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.356232  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.356460  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.359365  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.359864  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.359911  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.360105  357296 provision.go:143] copyHostCerts
	I1205 21:42:04.360181  357296 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:42:04.360209  357296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:42:04.360287  357296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:42:04.360424  357296 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:42:04.360437  357296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:42:04.360470  357296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:42:04.360554  357296 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:42:04.360564  357296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:42:04.360592  357296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:42:04.360668  357296 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.embed-certs-425614 san=[127.0.0.1 192.168.72.8 embed-certs-425614 localhost minikube]
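
The line above generates a fresh server certificate whose subject alternative names cover the VM IP, the profile name, localhost and minikube. A hedged sketch of the same idea with Go's crypto/x509, assuming an RSA CA key in PKCS#1 PEM form; the file names and the three-year lifetime are illustrative, not minikube's exact parameters.

// Issue a server certificate with the SANs shown in the log, signed by an existing CA.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	caPEM, err := os.ReadFile("ca.pem")
	if err != nil {
		log.Fatal(err)
	}
	caKeyPEM, err := os.ReadFile("ca-key.pem")
	if err != nil {
		log.Fatal(err)
	}
	caBlock, _ := pem.Decode(caPEM)
	keyBlock, _ := pem.Decode(caKeyPEM)
	if caBlock == nil || keyBlock == nil {
		log.Fatal("could not decode CA PEM input")
	}
	ca, err := x509.ParseCertificate(caBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Assumption: the CA key is RSA in PKCS#1 form; adjust parsing if it is not.
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-425614"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SAN entries matching the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.8")},
		DNSNames:    []string{"embed-certs-425614", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	if err != nil {
		log.Fatal(err)
	}
	os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
}
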
	I1205 21:42:04.632816  357296 provision.go:177] copyRemoteCerts
	I1205 21:42:04.632901  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:42:04.632942  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.636150  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.636618  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.636654  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.636828  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.637044  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.637271  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.637464  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:04.724883  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:42:04.754994  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 21:42:04.783996  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 21:42:04.810963  357296 provision.go:87] duration metric: took 459.073427ms to configureAuth
	I1205 21:42:04.811003  357296 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:42:04.811279  357296 config.go:182] Loaded profile config "embed-certs-425614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:42:04.811384  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.814420  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.814863  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.814996  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.815102  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.815346  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.815586  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.815767  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.815972  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:04.816238  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:04.816287  357296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:42:05.064456  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:42:05.064490  357296 machine.go:96] duration metric: took 1.095680989s to provisionDockerMachine
	I1205 21:42:05.064509  357296 start.go:293] postStartSetup for "embed-certs-425614" (driver="kvm2")
	I1205 21:42:05.064521  357296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:42:05.064560  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.064956  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:42:05.064997  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.068175  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.068618  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.068657  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.068994  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.069241  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.069449  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.069602  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:05.157732  357296 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:42:05.162706  357296 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:42:05.162752  357296 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:42:05.162845  357296 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:42:05.162920  357296 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:42:05.163016  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:42:05.179784  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:42:05.207166  357296 start.go:296] duration metric: took 142.636794ms for postStartSetup
	I1205 21:42:05.207223  357296 fix.go:56] duration metric: took 19.632279138s for fixHost
	I1205 21:42:05.207253  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.210923  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.211426  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.211463  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.211657  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.211896  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.212114  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.212282  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.212467  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:05.212723  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:05.212739  357296 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:42:05.327710  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434925.280377877
	
	I1205 21:42:05.327737  357296 fix.go:216] guest clock: 1733434925.280377877
	I1205 21:42:05.327749  357296 fix.go:229] Guest: 2024-12-05 21:42:05.280377877 +0000 UTC Remote: 2024-12-05 21:42:05.207229035 +0000 UTC m=+357.921750384 (delta=73.148842ms)
	I1205 21:42:05.327795  357296 fix.go:200] guest clock delta is within tolerance: 73.148842ms
	I1205 21:42:05.327803  357296 start.go:83] releasing machines lock for "embed-certs-425614", held for 19.752893913s
	I1205 21:42:05.327826  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.328184  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:05.331359  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.331686  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.331722  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.331953  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.332650  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.332870  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.332999  357296 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:42:05.333104  357296 ssh_runner.go:195] Run: cat /version.json
	I1205 21:42:05.333112  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.333137  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.336283  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.336576  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.336749  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.336784  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.336987  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.337074  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.337123  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.337206  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.337228  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.337457  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.337475  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.337669  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.337668  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:05.337806  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:05.443865  357296 ssh_runner.go:195] Run: systemctl --version
	I1205 21:42:05.450866  357296 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:42:05.596799  357296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:42:05.603700  357296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:42:05.603781  357296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:42:05.619488  357296 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:42:05.619521  357296 start.go:495] detecting cgroup driver to use...
	I1205 21:42:05.619622  357296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:42:05.639018  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:42:05.655878  357296 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:42:05.655942  357296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:42:05.671883  357296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:42:05.691645  357296 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:42:05.804200  357296 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:42:05.997573  357296 docker.go:233] disabling docker service ...
	I1205 21:42:05.997702  357296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:42:06.014153  357296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:42:06.031828  357296 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:42:06.179266  357296 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:42:06.318806  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:42:06.332681  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:42:06.353528  357296 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:42:06.353615  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.365381  357296 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:42:06.365472  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.377020  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.389325  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.402399  357296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:42:06.414106  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.425792  357296 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.445787  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.457203  357296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:42:06.467275  357296 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:42:06.467356  357296 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:42:06.481056  357296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
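
The three commands above implement a small fallback: if the bridge netfilter sysctl is missing, load br_netfilter, then make sure IPv4 forwarding is on. A rough standalone equivalent in Go (must run as root on the guest; error handling is deliberately minimal):

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%s %v: %v: %s", name, args, err, out)
	}
	return nil
}

func main() {
	// A missing bridge-nf-call-iptables key means the br_netfilter module is not loaded.
	if err := run("sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		fmt.Println("netfilter sysctl not present, loading br_netfilter:", err)
		if err := run("modprobe", "br_netfilter"); err != nil {
			fmt.Println("modprobe failed (the module may be built into the kernel):", err)
		}
	}
	// IPv4 forwarding must be enabled for pod traffic to leave the node.
	if err := run("sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"); err != nil {
		fmt.Println(err)
	}
}
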
	I1205 21:42:06.492188  357296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:42:06.634433  357296 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:42:06.727916  357296 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:42:06.728007  357296 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:42:06.732581  357296 start.go:563] Will wait 60s for crictl version
	I1205 21:42:06.732645  357296 ssh_runner.go:195] Run: which crictl
	I1205 21:42:06.736545  357296 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:42:06.775945  357296 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:42:06.776069  357296 ssh_runner.go:195] Run: crio --version
	I1205 21:42:06.808556  357296 ssh_runner.go:195] Run: crio --version
	I1205 21:42:06.844968  357296 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 21:42:06.846380  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:06.849873  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:06.850366  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:06.850410  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:06.850664  357296 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 21:42:06.855593  357296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:42:06.869323  357296 kubeadm.go:883] updating cluster {Name:embed-certs-425614 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-425614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:42:06.869513  357296 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:42:06.869598  357296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:42:06.906593  357296 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 21:42:06.906667  357296 ssh_runner.go:195] Run: which lz4
	I1205 21:42:06.910838  357296 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:42:06.915077  357296 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:42:06.915129  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
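
Because the CRI image store came up empty, the preloaded image tarball is copied into the guest and, a few lines further down, unpacked into /var. A sketch of that check-and-restore step, using the same paths and tar flags that appear in this log (runs on the guest, needs sudo):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println("crictl images failed:", err)
		return
	}
	if strings.Contains(string(out), "registry.k8s.io/kube-apiserver:v1.31.2") {
		fmt.Println("images already preloaded, nothing to do")
		return
	}
	// Extract the tarball that was copied to /preloaded.tar.lz4, preserving xattrs.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if err := cmd.Run(); err != nil {
		fmt.Println("extract failed:", err)
	}
}
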
	I1205 21:42:04.927426  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:42:04.941208  357831 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 21:42:04.968170  357831 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:42:04.998847  357831 system_pods.go:59] 8 kube-system pods found
	I1205 21:42:04.998907  357831 system_pods.go:61] "coredns-7c65d6cfc9-k89d7" [8a72b3cc-863a-4a51-8592-f090d7de58cb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 21:42:04.998920  357831 system_pods.go:61] "etcd-no-preload-500648" [cafdfe7b-d749-4f0b-9ce1-4045e0dba5e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 21:42:04.998933  357831 system_pods.go:61] "kube-apiserver-no-preload-500648" [882b20c9-56f1-41e7-80a2-7781b05f021f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 21:42:04.998942  357831 system_pods.go:61] "kube-controller-manager-no-preload-500648" [d8746bd6-a884-4497-be4a-f88b4776cc19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 21:42:04.998952  357831 system_pods.go:61] "kube-proxy-tbcmd" [ef507fa3-fe13-47b2-909e-15a4d0544716] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 21:42:04.998958  357831 system_pods.go:61] "kube-scheduler-no-preload-500648" [6713250e-00ac-48db-ad2f-39b1867c00f3] Running
	I1205 21:42:04.998968  357831 system_pods.go:61] "metrics-server-6867b74b74-7xm6l" [0d8a7353-2449-4143-962e-fc837e598f56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:42:04.998979  357831 system_pods.go:61] "storage-provisioner" [a0d29dee-08f6-43f8-9d02-6bda96fe0c85] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 21:42:04.998988  357831 system_pods.go:74] duration metric: took 30.786075ms to wait for pod list to return data ...
	I1205 21:42:04.999002  357831 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:42:05.005560  357831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:42:05.005611  357831 node_conditions.go:123] node cpu capacity is 2
	I1205 21:42:05.005630  357831 node_conditions.go:105] duration metric: took 6.621222ms to run NodePressure ...
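
The NodePressure check above only needs the node's reported capacity (2 CPUs and roughly 17 GiB of ephemeral storage here). A rough equivalent using kubectl instead of minikube's internal client, assuming the kubeconfig already points at the cluster under test:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("kubectl", "get", "nodes",
		"-o", "jsonpath={.items[*].status.capacity}").CombinedOutput()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	// Prints each node's capacity map: cpu, memory, ephemeral-storage, pods.
	fmt.Println(string(out))
}
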
	I1205 21:42:05.005659  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:05.417060  357831 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 21:42:05.423873  357831 kubeadm.go:739] kubelet initialised
	I1205 21:42:05.423903  357831 kubeadm.go:740] duration metric: took 6.807257ms waiting for restarted kubelet to initialise ...
	I1205 21:42:05.423914  357831 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:42:05.429965  357831 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:07.440042  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:05.400253  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:07.401405  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:09.901336  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:05.941258  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:06.440780  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:06.940790  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:07.441097  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:07.941334  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:08.440670  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:08.941230  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:09.441317  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:09.941664  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:10.440620  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
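
The interleaved lines from process 358357 are a liveness poll: pgrep is run roughly every half second until a kube-apiserver started by minikube shows up. A minimal sketch of that loop (the four-minute budget is an illustrative assumption):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// -x: exact match, -n: newest process, -f: match against the full command line.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("kube-apiserver never appeared within the budget")
}
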
	I1205 21:42:08.325757  357296 crio.go:462] duration metric: took 1.41497545s to copy over tarball
	I1205 21:42:08.325937  357296 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 21:42:10.566636  357296 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.240649211s)
	I1205 21:42:10.566679  357296 crio.go:469] duration metric: took 2.240881092s to extract the tarball
	I1205 21:42:10.566690  357296 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 21:42:10.604048  357296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:42:10.648218  357296 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 21:42:10.648245  357296 cache_images.go:84] Images are preloaded, skipping loading
	I1205 21:42:10.648254  357296 kubeadm.go:934] updating node { 192.168.72.8 8443 v1.31.2 crio true true} ...
	I1205 21:42:10.648380  357296 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-425614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-425614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:42:10.648472  357296 ssh_runner.go:195] Run: crio config
	I1205 21:42:10.694426  357296 cni.go:84] Creating CNI manager for ""
	I1205 21:42:10.694457  357296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:42:10.694470  357296 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:42:10.694494  357296 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.8 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-425614 NodeName:embed-certs-425614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.8 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:42:10.694626  357296 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.8
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-425614"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.8"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.8"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:42:10.694700  357296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:42:10.707043  357296 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:42:10.707116  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:42:10.717088  357296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1205 21:42:10.735095  357296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:42:10.753994  357296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I1205 21:42:10.771832  357296 ssh_runner.go:195] Run: grep 192.168.72.8	control-plane.minikube.internal$ /etc/hosts
	I1205 21:42:10.776949  357296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.8	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
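
The shell one-liner above makes the control-plane.minikube.internal mapping idempotent: strip any existing line for that name, then append the current one. The same idea in plain Go, using the address from this log (must run as root to rewrite /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.72.8\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		fmt.Println(err)
		return
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue // drop any previous mapping for this name
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
	if err := os.WriteFile("/etc/hosts", []byte(out), 0644); err != nil {
		fmt.Println(err)
	}
}
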
	I1205 21:42:10.789761  357296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:42:10.937235  357296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:42:10.959030  357296 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614 for IP: 192.168.72.8
	I1205 21:42:10.959073  357296 certs.go:194] generating shared ca certs ...
	I1205 21:42:10.959107  357296 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:42:10.959307  357296 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:42:10.959366  357296 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:42:10.959378  357296 certs.go:256] generating profile certs ...
	I1205 21:42:10.959508  357296 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/client.key
	I1205 21:42:10.959581  357296 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/apiserver.key.a8dcad40
	I1205 21:42:10.959631  357296 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/proxy-client.key
	I1205 21:42:10.959747  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:42:10.959807  357296 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:42:10.959822  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:42:10.959855  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:42:10.959889  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:42:10.959924  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:42:10.959977  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:42:10.960886  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:42:10.999249  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:42:11.035379  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:42:11.069796  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:42:11.103144  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1205 21:42:11.144531  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 21:42:11.183637  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:42:11.208780  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 21:42:11.237378  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:42:11.262182  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:42:11.287003  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:42:11.311375  357296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:42:11.330529  357296 ssh_runner.go:195] Run: openssl version
	I1205 21:42:11.336346  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:42:11.347306  357296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:42:11.352107  357296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:42:11.352179  357296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:42:11.357939  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:42:11.369013  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:42:11.380244  357296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:42:11.384671  357296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:42:11.384747  357296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:42:11.390330  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:42:11.402029  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:42:11.413047  357296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:42:11.417617  357296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:42:11.417707  357296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:42:11.423562  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
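
The openssl/ln pair above installs the extra CA into the hashed certificate directory: compute the OpenSSL subject hash and symlink the PEM as <hash>.0 under /etc/ssl/certs so TLS libraries that scan that directory pick it up. A small sketch of the same step (paths taken from this log; requires root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	certPath := "/usr/share/ca-certificates/300765.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. "51391683"
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link, mirroring "ln -fs"
	if err := os.Symlink(certPath, link); err != nil {
		fmt.Println(err)
	}
}
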
	I1205 21:42:11.434978  357296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:42:11.439887  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:42:11.446653  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:42:11.453390  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:42:11.460104  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:42:11.466281  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:42:11.472205  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
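
Each openssl invocation above uses "-checkend 86400", i.e. fail if the certificate expires within the next 24 hours. The equivalent check in pure Go with crypto/x509; the two paths listed are just examples from the set checked above.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		fmt.Println(p, "expires within 24h:", soon, "err:", err)
	}
}
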
	I1205 21:42:11.478395  357296 kubeadm.go:392] StartCluster: {Name:embed-certs-425614 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-425614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:42:11.478534  357296 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:42:11.478604  357296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:42:11.519447  357296 cri.go:89] found id: ""
	I1205 21:42:11.519540  357296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:42:11.530882  357296 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:42:11.530915  357296 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:42:11.530967  357296 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:42:11.541349  357296 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:42:11.542457  357296 kubeconfig.go:125] found "embed-certs-425614" server: "https://192.168.72.8:8443"
	I1205 21:42:11.544588  357296 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:42:11.555107  357296 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.8
	I1205 21:42:11.555149  357296 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:42:11.555164  357296 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:42:11.555214  357296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:42:11.592787  357296 cri.go:89] found id: ""
	I1205 21:42:11.592880  357296 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:42:11.609965  357296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:42:11.623705  357296 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:42:11.623730  357296 kubeadm.go:157] found existing configuration files:
	
	I1205 21:42:11.623784  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:42:11.634267  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:42:11.634344  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:42:11.645579  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:42:11.655845  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:42:11.655932  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:42:11.667367  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:42:11.677450  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:42:11.677541  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:42:11.688484  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:42:11.698581  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:42:11.698665  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:42:11.709332  357296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:42:11.724079  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:11.850526  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:09.436733  357831 pod_ready.go:93] pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:09.436771  357831 pod_ready.go:82] duration metric: took 4.006772842s for pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:09.436787  357831 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:09.442948  357831 pod_ready.go:93] pod "etcd-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:09.442975  357831 pod_ready.go:82] duration metric: took 6.180027ms for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:09.442985  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:11.454117  357831 pod_ready.go:103] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:12.400229  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:14.401251  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:10.940676  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:11.441446  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:11.941429  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:12.441431  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:12.940947  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:13.441378  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:13.940664  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.441436  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.941528  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:15.441617  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
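The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines from process 358357 are a roughly 500ms poll waiting for a kube-apiserver process to appear after its control plane is restarted. A rough Go equivalent of that wait loop; the interval and timeout are assumptions, not minikube's actual values:

// procwait.go: polls pgrep until a process matching the pattern exists or the
// timeout elapses. The pattern matches the one in the log above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 when at least one process matches the full command line.
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("no process matching %q after %s", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}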
	I1205 21:42:12.676884  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:13.049350  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:13.104083  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:13.151758  357296 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:42:13.151871  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:13.653003  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.152424  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.241811  357296 api_server.go:72] duration metric: took 1.09005484s to wait for apiserver process to appear ...
	I1205 21:42:14.241841  357296 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:42:14.241865  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:14.242492  357296 api_server.go:269] stopped: https://192.168.72.8:8443/healthz: Get "https://192.168.72.8:8443/healthz": dial tcp 192.168.72.8:8443: connect: connection refused
	I1205 21:42:14.742031  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:16.675226  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:16.675262  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:16.675277  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:16.689093  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:16.689130  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:16.742350  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:16.780046  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:16.780094  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:17.242752  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:17.248221  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:17.248293  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:13.807623  357831 pod_ready.go:103] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:13.955657  357831 pod_ready.go:93] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:13.955696  357831 pod_ready.go:82] duration metric: took 4.512701293s for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:13.955710  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:15.964035  357831 pod_ready.go:103] pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:17.464364  357831 pod_ready.go:93] pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:17.464400  357831 pod_ready.go:82] duration metric: took 3.508681036s for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.464416  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tbcmd" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.471083  357831 pod_ready.go:93] pod "kube-proxy-tbcmd" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:17.471112  357831 pod_ready.go:82] duration metric: took 6.68764ms for pod "kube-proxy-tbcmd" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.471127  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.477759  357831 pod_ready.go:93] pod "kube-scheduler-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:17.477792  357831 pod_ready.go:82] duration metric: took 6.655537ms for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.477805  357831 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.742750  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:17.750907  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:17.750945  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:18.242675  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:18.247883  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:18.247913  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:18.742494  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:18.748060  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:18.748095  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:19.242753  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:19.247456  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:19.247493  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:19.742029  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:19.747799  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:19.747830  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:20.242351  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:20.248627  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 200:
	ok
	I1205 21:42:20.257222  357296 api_server.go:141] control plane version: v1.31.2
	I1205 21:42:20.257260  357296 api_server.go:131] duration metric: took 6.015411765s to wait for apiserver health ...
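The healthz probes above progress from "connection refused" (apiserver not yet listening), to 403 for the anonymous user, to 500 while post-start hooks such as rbac/bootstrap-roles finish, and finally to 200. A hedged Go sketch of that polling pattern; the timeout, interval, and skipped TLS verification are assumptions for illustration only:

// healthwait.go: polls an apiserver /healthz endpoint until it returns 200 OK.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.8:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}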
	I1205 21:42:20.257273  357296 cni.go:84] Creating CNI manager for ""
	I1205 21:42:20.257281  357296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:42:20.259099  357296 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:42:16.899464  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:19.400536  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:15.940894  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:16.441373  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:16.940607  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:17.441640  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:17.941424  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:18.441485  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:18.941548  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:19.441297  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:19.940718  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:20.441175  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:20.260397  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:42:20.271889  357296 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
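The two lines above create /etc/cni/net.d and write a 496-byte bridge conflist for the crio runtime. The exact file minikube generates is not shown in the log; the Go sketch below writes an illustrative minimal bridge configuration whose field values (subnet, plugin names) are assumptions:

// writecni.go: writes an illustrative bridge CNI conflist. Not the exact file
// minikube generates; values here are placeholders for the sketch.
package main

import (
	"log"
	"os"
)

const conflist = `{
  "cniVersion": "1.0.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
}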
	I1205 21:42:20.291125  357296 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:42:20.300276  357296 system_pods.go:59] 8 kube-system pods found
	I1205 21:42:20.300328  357296 system_pods.go:61] "coredns-7c65d6cfc9-kjcf8" [7a73d409-50b8-4e9c-a84d-bb497c6f068c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 21:42:20.300337  357296 system_pods.go:61] "etcd-embed-certs-425614" [39067a54-9f4e-4ce5-b48f-0d442a332902] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 21:42:20.300346  357296 system_pods.go:61] "kube-apiserver-embed-certs-425614" [cc3f918c-a257-4135-a5dd-af78e60bbf90] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 21:42:20.300352  357296 system_pods.go:61] "kube-controller-manager-embed-certs-425614" [bbcf99e6-54f9-44f5-a484-26997a4e5941] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 21:42:20.300359  357296 system_pods.go:61] "kube-proxy-jflgx" [77b6325b-0db8-41de-8c7e-6111d155704d] Running
	I1205 21:42:20.300366  357296 system_pods.go:61] "kube-scheduler-embed-certs-425614" [0615aea3-8e2c-4329-b89f-02c7fe9f6f7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 21:42:20.300377  357296 system_pods.go:61] "metrics-server-6867b74b74-dggmv" [c53aecb9-98a5-481a-84f3-96fd18815e14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:42:20.300380  357296 system_pods.go:61] "storage-provisioner" [d43b05e9-7ab8-4326-93b4-177aeb5ba02e] Running
	I1205 21:42:20.300388  357296 system_pods.go:74] duration metric: took 9.233104ms to wait for pod list to return data ...
	I1205 21:42:20.300396  357296 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:42:20.304455  357296 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:42:20.304484  357296 node_conditions.go:123] node cpu capacity is 2
	I1205 21:42:20.304498  357296 node_conditions.go:105] duration metric: took 4.096074ms to run NodePressure ...
	I1205 21:42:20.304519  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:20.571968  357296 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 21:42:20.577704  357296 kubeadm.go:739] kubelet initialised
	I1205 21:42:20.577730  357296 kubeadm.go:740] duration metric: took 5.727858ms waiting for restarted kubelet to initialise ...
	I1205 21:42:20.577741  357296 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:42:20.583872  357296 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.589835  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.589866  357296 pod_ready.go:82] duration metric: took 5.957984ms for pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.589878  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.589886  357296 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.596004  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "etcd-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.596038  357296 pod_ready.go:82] duration metric: took 6.144722ms for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.596049  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "etcd-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.596056  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.601686  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.601720  357296 pod_ready.go:82] duration metric: took 5.653369ms for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.601734  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.601742  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.694482  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.694515  357296 pod_ready.go:82] duration metric: took 92.763219ms for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.694524  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.694531  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jflgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:21.094672  357296 pod_ready.go:93] pod "kube-proxy-jflgx" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:21.094703  357296 pod_ready.go:82] duration metric: took 400.158324ms for pod "kube-proxy-jflgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:21.094714  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
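Each pod_ready.go wait above re-fetches the pod and checks its Ready condition, giving up after 4m0s; pods on a node whose own Ready condition is still False are skipped, as the embed-certs-425614 entries show. A compressed client-go sketch of the per-pod check; the kubeconfig path, namespace/pod names, and 2s poll interval are assumptions:

// podready.go: waits for a pod's Ready condition to become True.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-embed-certs-425614", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}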
	I1205 21:42:19.485441  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:21.984845  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:21.900464  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:24.399362  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:20.941042  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:21.440840  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:21.941291  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:22.441298  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:22.941140  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:23.441157  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:23.940711  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:24.441126  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:24.941194  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:25.441239  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:23.101967  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:25.103066  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:27.103106  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:23.985150  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:25.985406  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:26.399494  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:28.399742  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:25.940650  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:26.440892  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:26.940734  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:27.441439  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:27.941025  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:28.441662  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:28.941200  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:29.440850  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:29.941090  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:30.441496  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:29.106277  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.101137  357296 pod_ready.go:93] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:30.101170  357296 pod_ready.go:82] duration metric: took 9.00644797s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:30.101199  357296 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:32.107886  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:27.985689  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.484153  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:32.484800  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.399854  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:32.400508  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:34.901319  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.941631  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:31.441522  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:31.940961  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:32.441547  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:32.940644  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:33.440711  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:33.941591  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:34.441457  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:34.941255  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:35.441478  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:34.108645  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:36.608124  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:34.984686  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:36.984823  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:37.400319  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:39.900110  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:35.941404  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:36.441453  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:36.941276  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:37.440624  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:37.941248  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:38.440773  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:38.940852  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:39.440975  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:39.940613  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:40.441409  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:38.608300  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:40.608878  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:39.483667  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:41.483884  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:41.900531  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:43.900867  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:40.941065  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:41.440940  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:41.941340  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:42.441333  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:42.941444  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:43.440657  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:43.941351  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:44.441039  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:44.941628  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:45.440942  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:43.107571  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:45.107803  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:47.108118  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:43.484581  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:45.485934  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:46.400053  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:48.902975  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:45.941474  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:46.441502  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:46.941071  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:47.441501  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:47.941353  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:48.441574  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:48.940650  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:49.441259  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:49.941249  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:50.441304  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:49.608563  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:52.108228  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:47.992612  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:50.484515  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:52.484930  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:51.399905  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:53.400794  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:50.941158  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:51.440651  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:51.941062  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:52.441434  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:52.940665  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:53.441387  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:53.940784  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:54.441549  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:54.941564  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:55.441202  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:42:55.441294  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:42:55.475973  358357 cri.go:89] found id: ""
	I1205 21:42:55.476011  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.476023  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:42:55.476032  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:42:55.476106  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:42:55.511119  358357 cri.go:89] found id: ""
	I1205 21:42:55.511149  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.511158  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:42:55.511164  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:42:55.511238  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:42:55.544659  358357 cri.go:89] found id: ""
	I1205 21:42:55.544700  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.544716  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:42:55.544726  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:42:55.544803  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:42:54.608219  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:57.107753  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:54.986439  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:57.484521  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:55.900101  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:58.399595  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:55.579789  358357 cri.go:89] found id: ""
	I1205 21:42:55.579826  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.579836  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:42:55.579843  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:42:55.579912  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:42:55.615309  358357 cri.go:89] found id: ""
	I1205 21:42:55.615348  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.615363  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:42:55.615371  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:42:55.615444  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:42:55.649520  358357 cri.go:89] found id: ""
	I1205 21:42:55.649551  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.649562  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:42:55.649569  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:42:55.649647  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:42:55.688086  358357 cri.go:89] found id: ""
	I1205 21:42:55.688120  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.688132  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:42:55.688139  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:42:55.688207  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:42:55.722901  358357 cri.go:89] found id: ""
	I1205 21:42:55.722932  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.722943  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:42:55.722955  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:42:55.722968  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:42:55.775746  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:42:55.775792  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:42:55.790317  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:42:55.790370  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:42:55.916541  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:42:55.916593  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:42:55.916608  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:42:55.991284  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:42:55.991350  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:42:58.534040  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:58.551747  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:42:58.551856  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:42:58.602423  358357 cri.go:89] found id: ""
	I1205 21:42:58.602465  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.602478  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:42:58.602493  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:42:58.602570  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:42:58.658410  358357 cri.go:89] found id: ""
	I1205 21:42:58.658442  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.658454  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:42:58.658462  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:42:58.658544  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:42:58.696967  358357 cri.go:89] found id: ""
	I1205 21:42:58.697005  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.697024  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:42:58.697032  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:42:58.697092  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:42:58.740924  358357 cri.go:89] found id: ""
	I1205 21:42:58.740958  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.740969  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:42:58.740977  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:42:58.741049  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:42:58.775613  358357 cri.go:89] found id: ""
	I1205 21:42:58.775656  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.775669  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:42:58.775677  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:42:58.775753  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:42:58.810565  358357 cri.go:89] found id: ""
	I1205 21:42:58.810606  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.810621  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:42:58.810630  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:42:58.810704  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:42:58.844616  358357 cri.go:89] found id: ""
	I1205 21:42:58.844649  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.844658  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:42:58.844664  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:42:58.844720  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:42:58.889234  358357 cri.go:89] found id: ""
	I1205 21:42:58.889270  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.889282  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:42:58.889297  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:42:58.889313  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:42:58.964712  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:42:58.964756  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:42:59.005004  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:42:59.005036  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:42:59.057585  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:42:59.057635  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:42:59.072115  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:42:59.072151  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:42:59.145425  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:42:59.108534  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:01.607610  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:59.485366  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:01.986049  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:00.400127  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:02.400257  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:04.899587  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:01.646046  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:01.659425  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:01.659517  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:01.695527  358357 cri.go:89] found id: ""
	I1205 21:43:01.695559  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.695568  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:01.695574  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:01.695636  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:01.731808  358357 cri.go:89] found id: ""
	I1205 21:43:01.731842  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.731854  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:01.731861  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:01.731937  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:01.765738  358357 cri.go:89] found id: ""
	I1205 21:43:01.765771  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.765789  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:01.765796  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:01.765859  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:01.801611  358357 cri.go:89] found id: ""
	I1205 21:43:01.801647  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.801657  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:01.801665  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:01.801732  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:01.839276  358357 cri.go:89] found id: ""
	I1205 21:43:01.839308  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.839317  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:01.839323  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:01.839385  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:01.875227  358357 cri.go:89] found id: ""
	I1205 21:43:01.875266  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.875279  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:01.875288  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:01.875350  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:01.913182  358357 cri.go:89] found id: ""
	I1205 21:43:01.913225  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.913238  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:01.913247  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:01.913312  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:01.952638  358357 cri.go:89] found id: ""
	I1205 21:43:01.952677  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.952701  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:01.952716  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:01.952734  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:01.998360  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:01.998401  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:02.049534  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:02.049588  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:02.064358  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:02.064389  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:02.136029  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:02.136060  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:02.136077  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:04.719271  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:04.735387  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:04.735490  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:04.769540  358357 cri.go:89] found id: ""
	I1205 21:43:04.769578  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.769590  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:04.769598  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:04.769679  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:04.803402  358357 cri.go:89] found id: ""
	I1205 21:43:04.803444  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.803460  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:04.803470  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:04.803538  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:04.839694  358357 cri.go:89] found id: ""
	I1205 21:43:04.839725  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.839739  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:04.839748  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:04.839820  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:04.874952  358357 cri.go:89] found id: ""
	I1205 21:43:04.874982  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.875001  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:04.875022  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:04.875086  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:04.910338  358357 cri.go:89] found id: ""
	I1205 21:43:04.910378  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.910390  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:04.910399  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:04.910464  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:04.946196  358357 cri.go:89] found id: ""
	I1205 21:43:04.946233  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.946245  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:04.946252  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:04.946319  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:04.982119  358357 cri.go:89] found id: ""
	I1205 21:43:04.982150  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.982164  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:04.982173  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:04.982245  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:05.018296  358357 cri.go:89] found id: ""
	I1205 21:43:05.018334  358357 logs.go:282] 0 containers: []
	W1205 21:43:05.018346  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:05.018359  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:05.018376  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:05.070674  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:05.070729  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:05.085822  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:05.085858  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:05.163359  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:05.163385  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:05.163400  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:05.243524  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:05.243581  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:03.608201  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:06.108243  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:03.992084  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:06.487041  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:06.900400  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:09.400212  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:07.785152  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:07.799248  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:07.799327  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:07.836150  358357 cri.go:89] found id: ""
	I1205 21:43:07.836204  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.836215  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:07.836222  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:07.836287  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:07.873025  358357 cri.go:89] found id: ""
	I1205 21:43:07.873059  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.873068  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:07.873074  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:07.873133  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:07.913228  358357 cri.go:89] found id: ""
	I1205 21:43:07.913257  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.913266  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:07.913272  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:07.913332  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:07.953284  358357 cri.go:89] found id: ""
	I1205 21:43:07.953316  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.953327  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:07.953337  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:07.953405  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:07.990261  358357 cri.go:89] found id: ""
	I1205 21:43:07.990295  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.990308  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:07.990317  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:07.990414  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:08.032002  358357 cri.go:89] found id: ""
	I1205 21:43:08.032029  358357 logs.go:282] 0 containers: []
	W1205 21:43:08.032037  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:08.032043  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:08.032095  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:08.066422  358357 cri.go:89] found id: ""
	I1205 21:43:08.066456  358357 logs.go:282] 0 containers: []
	W1205 21:43:08.066464  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:08.066471  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:08.066526  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:08.103696  358357 cri.go:89] found id: ""
	I1205 21:43:08.103732  358357 logs.go:282] 0 containers: []
	W1205 21:43:08.103745  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:08.103757  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:08.103793  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:08.157218  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:08.157264  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:08.172145  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:08.172191  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:08.247452  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:08.247479  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:08.247493  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:08.326928  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:08.326972  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:08.111002  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:10.608479  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:08.985124  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:10.985701  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:11.400591  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:13.898978  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:10.866350  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:10.880013  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:10.880084  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:10.914657  358357 cri.go:89] found id: ""
	I1205 21:43:10.914698  358357 logs.go:282] 0 containers: []
	W1205 21:43:10.914712  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:10.914721  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:10.914780  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:10.950154  358357 cri.go:89] found id: ""
	I1205 21:43:10.950187  358357 logs.go:282] 0 containers: []
	W1205 21:43:10.950196  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:10.950203  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:10.950267  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:10.985474  358357 cri.go:89] found id: ""
	I1205 21:43:10.985508  358357 logs.go:282] 0 containers: []
	W1205 21:43:10.985520  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:10.985528  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:10.985602  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:11.021324  358357 cri.go:89] found id: ""
	I1205 21:43:11.021352  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.021361  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:11.021367  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:11.021429  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:11.056112  358357 cri.go:89] found id: ""
	I1205 21:43:11.056140  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.056149  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:11.056155  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:11.056210  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:11.090696  358357 cri.go:89] found id: ""
	I1205 21:43:11.090729  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.090739  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:11.090746  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:11.090809  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:11.126706  358357 cri.go:89] found id: ""
	I1205 21:43:11.126741  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.126754  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:11.126762  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:11.126832  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:11.162759  358357 cri.go:89] found id: ""
	I1205 21:43:11.162790  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.162800  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:11.162812  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:11.162827  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:11.215941  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:11.215995  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:11.229338  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:11.229378  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:11.300339  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:11.300373  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:11.300389  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:11.378797  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:11.378852  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:13.919092  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:13.935332  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:13.935418  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:13.970759  358357 cri.go:89] found id: ""
	I1205 21:43:13.970790  358357 logs.go:282] 0 containers: []
	W1205 21:43:13.970802  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:13.970810  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:13.970879  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:14.017105  358357 cri.go:89] found id: ""
	I1205 21:43:14.017140  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.017152  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:14.017159  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:14.017228  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:14.056797  358357 cri.go:89] found id: ""
	I1205 21:43:14.056831  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.056843  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:14.056850  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:14.056922  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:14.090687  358357 cri.go:89] found id: ""
	I1205 21:43:14.090727  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.090740  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:14.090747  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:14.090808  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:14.128280  358357 cri.go:89] found id: ""
	I1205 21:43:14.128320  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.128333  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:14.128341  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:14.128410  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:14.167386  358357 cri.go:89] found id: ""
	I1205 21:43:14.167420  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.167428  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:14.167435  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:14.167498  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:14.203376  358357 cri.go:89] found id: ""
	I1205 21:43:14.203408  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.203419  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:14.203427  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:14.203495  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:14.238271  358357 cri.go:89] found id: ""
	I1205 21:43:14.238308  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.238319  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:14.238333  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:14.238353  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:14.290565  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:14.290609  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:14.305062  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:14.305106  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:14.375343  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:14.375375  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:14.375392  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:14.456771  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:14.456826  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:13.107746  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:15.607571  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:13.484545  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:15.485414  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:15.899518  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:17.900034  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:16.997441  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:17.011258  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:17.011344  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:17.045557  358357 cri.go:89] found id: ""
	I1205 21:43:17.045599  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.045613  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:17.045623  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:17.045689  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:17.080094  358357 cri.go:89] found id: ""
	I1205 21:43:17.080131  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.080144  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:17.080152  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:17.080228  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:17.113336  358357 cri.go:89] found id: ""
	I1205 21:43:17.113375  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.113387  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:17.113396  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:17.113461  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:17.147392  358357 cri.go:89] found id: ""
	I1205 21:43:17.147431  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.147443  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:17.147452  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:17.147521  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:17.182308  358357 cri.go:89] found id: ""
	I1205 21:43:17.182359  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.182370  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:17.182376  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:17.182443  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:17.216848  358357 cri.go:89] found id: ""
	I1205 21:43:17.216886  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.216917  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:17.216926  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:17.216999  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:17.251515  358357 cri.go:89] found id: ""
	I1205 21:43:17.251553  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.251565  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:17.251573  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:17.251645  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:17.284664  358357 cri.go:89] found id: ""
	I1205 21:43:17.284691  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.284700  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:17.284711  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:17.284723  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:17.335642  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:17.335685  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:17.349100  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:17.349133  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:17.427338  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:17.427362  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:17.427378  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:17.507314  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:17.507366  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:20.049650  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:20.063058  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:20.063152  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:20.096637  358357 cri.go:89] found id: ""
	I1205 21:43:20.096674  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.096687  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:20.096696  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:20.096761  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:20.134010  358357 cri.go:89] found id: ""
	I1205 21:43:20.134041  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.134054  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:20.134061  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:20.134128  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:20.173232  358357 cri.go:89] found id: ""
	I1205 21:43:20.173272  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.173292  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:20.173301  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:20.173374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:20.208411  358357 cri.go:89] found id: ""
	I1205 21:43:20.208441  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.208451  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:20.208457  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:20.208515  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:20.244682  358357 cri.go:89] found id: ""
	I1205 21:43:20.244715  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.244729  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:20.244737  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:20.244835  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:20.278659  358357 cri.go:89] found id: ""
	I1205 21:43:20.278692  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.278701  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:20.278708  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:20.278773  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:20.313894  358357 cri.go:89] found id: ""
	I1205 21:43:20.313963  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.313978  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:20.313986  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:20.314049  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:20.351924  358357 cri.go:89] found id: ""
	I1205 21:43:20.351957  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.351966  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:20.351976  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:20.351992  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:20.365712  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:20.365752  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:20.448062  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:20.448096  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:20.448115  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:20.530550  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:20.530593  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:17.611740  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:20.107637  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:22.108801  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:17.985246  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:19.985378  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:22.484721  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:20.400560  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:22.400956  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:24.899642  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:20.573612  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:20.573644  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:23.128630  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:23.141915  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:23.141991  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:23.177986  358357 cri.go:89] found id: ""
	I1205 21:43:23.178024  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.178033  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:23.178040  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:23.178104  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:23.211957  358357 cri.go:89] found id: ""
	I1205 21:43:23.211995  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.212005  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:23.212016  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:23.212075  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:23.247747  358357 cri.go:89] found id: ""
	I1205 21:43:23.247775  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.247783  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:23.247789  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:23.247847  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:23.282556  358357 cri.go:89] found id: ""
	I1205 21:43:23.282602  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.282616  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:23.282624  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:23.282689  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:23.317629  358357 cri.go:89] found id: ""
	I1205 21:43:23.317661  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.317670  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:23.317676  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:23.317749  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:23.352085  358357 cri.go:89] found id: ""
	I1205 21:43:23.352114  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.352123  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:23.352130  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:23.352190  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:23.391452  358357 cri.go:89] found id: ""
	I1205 21:43:23.391483  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.391495  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:23.391503  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:23.391587  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:23.427325  358357 cri.go:89] found id: ""
	I1205 21:43:23.427361  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.427370  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:23.427380  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:23.427395  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:23.502923  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:23.502954  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:23.502970  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:23.588869  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:23.588918  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:23.626986  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:23.627029  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:23.677290  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:23.677343  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:24.607867  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.609049  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:24.484755  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.486039  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.899834  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:29.400266  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.191893  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:26.206289  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:26.206376  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:26.244696  358357 cri.go:89] found id: ""
	I1205 21:43:26.244726  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.244739  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:26.244748  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:26.244818  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:26.277481  358357 cri.go:89] found id: ""
	I1205 21:43:26.277509  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.277519  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:26.277526  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:26.277602  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:26.312648  358357 cri.go:89] found id: ""
	I1205 21:43:26.312771  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.312807  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:26.312819  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:26.312897  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:26.348986  358357 cri.go:89] found id: ""
	I1205 21:43:26.349017  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.349026  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:26.349034  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:26.349111  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:26.382552  358357 cri.go:89] found id: ""
	I1205 21:43:26.382582  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.382591  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:26.382597  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:26.382667  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:26.419741  358357 cri.go:89] found id: ""
	I1205 21:43:26.419780  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.419791  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:26.419798  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:26.419860  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:26.458604  358357 cri.go:89] found id: ""
	I1205 21:43:26.458639  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.458649  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:26.458656  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:26.458716  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:26.492547  358357 cri.go:89] found id: ""
	I1205 21:43:26.492575  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.492589  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:26.492600  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:26.492614  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:26.543734  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:26.543784  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:26.557495  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:26.557529  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:26.632104  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:26.632135  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:26.632155  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:26.711876  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:26.711929  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:29.251703  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:29.265023  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:29.265108  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:29.301837  358357 cri.go:89] found id: ""
	I1205 21:43:29.301875  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.301910  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:29.301922  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:29.301994  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:29.335968  358357 cri.go:89] found id: ""
	I1205 21:43:29.336001  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.336015  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:29.336024  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:29.336090  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:29.370471  358357 cri.go:89] found id: ""
	I1205 21:43:29.370500  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.370512  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:29.370521  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:29.370585  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:29.406408  358357 cri.go:89] found id: ""
	I1205 21:43:29.406443  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.406456  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:29.406464  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:29.406537  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:29.442657  358357 cri.go:89] found id: ""
	I1205 21:43:29.442689  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.442700  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:29.442708  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:29.442776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:29.485257  358357 cri.go:89] found id: ""
	I1205 21:43:29.485291  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.485302  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:29.485311  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:29.485374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:29.520186  358357 cri.go:89] found id: ""
	I1205 21:43:29.520218  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.520229  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:29.520238  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:29.520312  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:29.555875  358357 cri.go:89] found id: ""
	I1205 21:43:29.555908  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.555920  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:29.555931  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:29.555949  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:29.569277  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:29.569312  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:29.643777  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:29.643810  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:29.643828  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:29.721856  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:29.721932  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:29.763402  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:29.763437  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:29.108987  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:31.608186  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:28.486609  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:30.985559  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:31.899471  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:34.399084  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:32.316122  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:32.329958  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:32.330122  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:32.362518  358357 cri.go:89] found id: ""
	I1205 21:43:32.362562  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.362575  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:32.362585  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:32.362655  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:32.396558  358357 cri.go:89] found id: ""
	I1205 21:43:32.396650  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.396668  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:32.396683  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:32.396759  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:32.430931  358357 cri.go:89] found id: ""
	I1205 21:43:32.430958  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.430966  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:32.430972  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:32.431025  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:32.468557  358357 cri.go:89] found id: ""
	I1205 21:43:32.468597  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.468607  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:32.468613  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:32.468698  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:32.503548  358357 cri.go:89] found id: ""
	I1205 21:43:32.503586  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.503599  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:32.503608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:32.503680  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:32.538516  358357 cri.go:89] found id: ""
	I1205 21:43:32.538559  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.538573  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:32.538582  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:32.538658  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:32.570768  358357 cri.go:89] found id: ""
	I1205 21:43:32.570804  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.570817  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:32.570886  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:32.570963  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:32.604812  358357 cri.go:89] found id: ""
	I1205 21:43:32.604851  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.604864  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:32.604876  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:32.604899  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:32.667787  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:32.667831  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:32.681437  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:32.681472  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:32.761208  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:32.761235  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:32.761249  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:32.844838  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:32.844882  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:35.386488  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:35.401884  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:35.401987  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:35.437976  358357 cri.go:89] found id: ""
	I1205 21:43:35.438007  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.438017  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:35.438023  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:35.438089  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:35.478157  358357 cri.go:89] found id: ""
	I1205 21:43:35.478202  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.478214  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:35.478222  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:35.478292  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:35.516671  358357 cri.go:89] found id: ""
	I1205 21:43:35.516717  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.516731  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:35.516805  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:35.516897  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:35.551255  358357 cri.go:89] found id: ""
	I1205 21:43:35.551284  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.551295  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:35.551302  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:35.551357  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:34.108153  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:36.108668  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:32.986075  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:35.484135  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:37.485074  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:36.399714  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:38.900550  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:35.588294  358357 cri.go:89] found id: ""
	I1205 21:43:35.588325  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.588334  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:35.588341  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:35.588405  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:35.622659  358357 cri.go:89] found id: ""
	I1205 21:43:35.622691  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.622700  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:35.622707  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:35.622774  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:35.656864  358357 cri.go:89] found id: ""
	I1205 21:43:35.656893  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.656901  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:35.656908  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:35.656961  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:35.697507  358357 cri.go:89] found id: ""
	I1205 21:43:35.697554  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.697567  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:35.697579  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:35.697599  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:35.745717  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:35.745758  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:35.759004  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:35.759036  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:35.828958  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:35.828992  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:35.829010  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:35.905023  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:35.905063  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:38.445492  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:38.459922  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:38.460006  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:38.495791  358357 cri.go:89] found id: ""
	I1205 21:43:38.495829  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.495840  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:38.495849  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:38.495918  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:38.530056  358357 cri.go:89] found id: ""
	I1205 21:43:38.530088  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.530097  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:38.530104  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:38.530177  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:38.566865  358357 cri.go:89] found id: ""
	I1205 21:43:38.566896  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.566905  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:38.566912  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:38.566983  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:38.600870  358357 cri.go:89] found id: ""
	I1205 21:43:38.600905  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.600918  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:38.600926  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:38.600995  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:38.639270  358357 cri.go:89] found id: ""
	I1205 21:43:38.639308  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.639317  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:38.639324  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:38.639395  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:38.678671  358357 cri.go:89] found id: ""
	I1205 21:43:38.678720  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.678736  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:38.678745  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:38.678812  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:38.715126  358357 cri.go:89] found id: ""
	I1205 21:43:38.715160  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.715169  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:38.715176  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:38.715236  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:38.750621  358357 cri.go:89] found id: ""
	I1205 21:43:38.750660  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.750674  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:38.750688  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:38.750706  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:38.801336  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:38.801386  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:38.817206  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:38.817243  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:38.899496  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:38.899526  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:38.899542  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:38.987043  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:38.987096  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:38.608744  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.107606  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:39.486171  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.984199  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.400104  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:43.898622  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.535073  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:41.550469  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:41.550543  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:41.591727  358357 cri.go:89] found id: ""
	I1205 21:43:41.591768  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.591781  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:41.591790  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:41.591861  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:41.628657  358357 cri.go:89] found id: ""
	I1205 21:43:41.628691  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.628703  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:41.628711  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:41.628782  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:41.674165  358357 cri.go:89] found id: ""
	I1205 21:43:41.674210  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.674224  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:41.674238  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:41.674318  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:41.713785  358357 cri.go:89] found id: ""
	I1205 21:43:41.713836  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.713856  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:41.713866  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:41.713959  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:41.752119  358357 cri.go:89] found id: ""
	I1205 21:43:41.752152  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.752162  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:41.752169  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:41.752224  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:41.787379  358357 cri.go:89] found id: ""
	I1205 21:43:41.787414  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.787427  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:41.787439  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:41.787517  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:41.827473  358357 cri.go:89] found id: ""
	I1205 21:43:41.827505  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.827516  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:41.827523  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:41.827580  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:41.864685  358357 cri.go:89] found id: ""
	I1205 21:43:41.864724  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.864737  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:41.864750  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:41.864767  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:41.919751  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:41.919797  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:41.933494  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:41.933527  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:42.007384  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:42.007478  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:42.007516  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:42.085929  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:42.085974  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:44.625416  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:44.640399  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:44.640466  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:44.676232  358357 cri.go:89] found id: ""
	I1205 21:43:44.676279  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.676292  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:44.676302  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:44.676386  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:44.714304  358357 cri.go:89] found id: ""
	I1205 21:43:44.714345  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.714358  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:44.714368  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:44.714438  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:44.748091  358357 cri.go:89] found id: ""
	I1205 21:43:44.748130  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.748141  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:44.748149  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:44.748225  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:44.789620  358357 cri.go:89] found id: ""
	I1205 21:43:44.789712  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.789737  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:44.789746  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:44.789808  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:44.829941  358357 cri.go:89] found id: ""
	I1205 21:43:44.829987  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.829999  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:44.830008  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:44.830080  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:44.876378  358357 cri.go:89] found id: ""
	I1205 21:43:44.876412  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.876424  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:44.876433  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:44.876503  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:44.913556  358357 cri.go:89] found id: ""
	I1205 21:43:44.913590  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.913602  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:44.913610  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:44.913676  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:44.947592  358357 cri.go:89] found id: ""
	I1205 21:43:44.947625  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.947634  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:44.947643  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:44.947658  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:44.960447  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:44.960478  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:45.035679  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:45.035716  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:45.035731  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:45.115015  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:45.115055  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:45.152866  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:45.152901  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:43.108800  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:45.109600  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:44.483302  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:46.484569  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:45.899283  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:47.900475  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:47.703949  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:47.717705  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:47.717775  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:47.753877  358357 cri.go:89] found id: ""
	I1205 21:43:47.753920  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.753933  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:47.753946  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:47.754006  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:47.790673  358357 cri.go:89] found id: ""
	I1205 21:43:47.790707  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.790718  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:47.790725  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:47.790784  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:47.829957  358357 cri.go:89] found id: ""
	I1205 21:43:47.829999  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.830013  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:47.830021  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:47.830094  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:47.869182  358357 cri.go:89] found id: ""
	I1205 21:43:47.869221  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.869235  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:47.869251  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:47.869337  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:47.906549  358357 cri.go:89] found id: ""
	I1205 21:43:47.906582  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.906592  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:47.906598  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:47.906674  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:47.944594  358357 cri.go:89] found id: ""
	I1205 21:43:47.944622  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.944631  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:47.944637  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:47.944699  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:47.981461  358357 cri.go:89] found id: ""
	I1205 21:43:47.981499  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.981512  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:47.981520  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:47.981593  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:48.016561  358357 cri.go:89] found id: ""
	I1205 21:43:48.016597  358357 logs.go:282] 0 containers: []
	W1205 21:43:48.016607  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:48.016617  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:48.016631  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:48.097690  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:48.097740  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:48.140272  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:48.140318  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:48.194365  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:48.194415  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:48.208715  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:48.208750  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:48.283159  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:47.607945  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.108918  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:48.984798  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.986257  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.399207  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:52.899857  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:54.899976  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.784026  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:50.812440  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:50.812524  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:50.866971  358357 cri.go:89] found id: ""
	I1205 21:43:50.867009  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.867022  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:50.867030  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:50.867100  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:50.910640  358357 cri.go:89] found id: ""
	I1205 21:43:50.910675  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.910686  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:50.910692  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:50.910767  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:50.944766  358357 cri.go:89] found id: ""
	I1205 21:43:50.944795  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.944803  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:50.944811  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:50.944880  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:50.978126  358357 cri.go:89] found id: ""
	I1205 21:43:50.978167  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.978178  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:50.978185  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:50.978250  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:51.015639  358357 cri.go:89] found id: ""
	I1205 21:43:51.015682  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.015693  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:51.015700  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:51.015776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:51.050114  358357 cri.go:89] found id: ""
	I1205 21:43:51.050156  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.050166  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:51.050180  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:51.050244  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:51.088492  358357 cri.go:89] found id: ""
	I1205 21:43:51.088523  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.088533  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:51.088540  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:51.088599  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:51.125732  358357 cri.go:89] found id: ""
	I1205 21:43:51.125768  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.125778  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:51.125789  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:51.125803  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:51.178278  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:51.178325  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:51.192954  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:51.192990  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:51.263378  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:51.263403  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:51.263416  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:51.341416  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:51.341463  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:53.882599  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:53.895846  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:53.895961  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:53.929422  358357 cri.go:89] found id: ""
	I1205 21:43:53.929465  358357 logs.go:282] 0 containers: []
	W1205 21:43:53.929480  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:53.929490  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:53.929568  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:53.965935  358357 cri.go:89] found id: ""
	I1205 21:43:53.965976  358357 logs.go:282] 0 containers: []
	W1205 21:43:53.965990  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:53.966001  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:53.966075  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:54.011360  358357 cri.go:89] found id: ""
	I1205 21:43:54.011394  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.011406  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:54.011412  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:54.011483  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:54.049333  358357 cri.go:89] found id: ""
	I1205 21:43:54.049368  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.049377  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:54.049385  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:54.049445  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:54.087228  358357 cri.go:89] found id: ""
	I1205 21:43:54.087266  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.087279  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:54.087287  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:54.087348  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:54.122795  358357 cri.go:89] found id: ""
	I1205 21:43:54.122832  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.122845  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:54.122853  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:54.122914  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:54.157622  358357 cri.go:89] found id: ""
	I1205 21:43:54.157657  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.157666  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:54.157672  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:54.157734  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:54.195574  358357 cri.go:89] found id: ""
	I1205 21:43:54.195610  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.195624  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:54.195638  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:54.195659  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:54.235353  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:54.235403  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:54.292275  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:54.292338  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:54.306808  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:54.306842  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:54.380414  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:54.380440  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:54.380455  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:52.608190  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:54.609219  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:57.109413  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:53.484775  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:55.985011  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:57.402445  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:59.900093  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:56.956848  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:56.969840  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:56.969954  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:57.004299  358357 cri.go:89] found id: ""
	I1205 21:43:57.004405  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.004426  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:57.004434  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:57.004510  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:57.039150  358357 cri.go:89] found id: ""
	I1205 21:43:57.039176  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.039185  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:57.039192  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:57.039245  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:57.075259  358357 cri.go:89] found id: ""
	I1205 21:43:57.075299  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.075313  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:57.075331  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:57.075407  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:57.111445  358357 cri.go:89] found id: ""
	I1205 21:43:57.111474  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.111492  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:57.111500  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:57.111580  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:57.152495  358357 cri.go:89] found id: ""
	I1205 21:43:57.152527  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.152536  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:57.152548  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:57.152606  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:57.188070  358357 cri.go:89] found id: ""
	I1205 21:43:57.188106  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.188119  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:57.188126  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:57.188198  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:57.222213  358357 cri.go:89] found id: ""
	I1205 21:43:57.222245  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.222260  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:57.222268  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:57.222354  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:57.254072  358357 cri.go:89] found id: ""
	I1205 21:43:57.254101  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.254110  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:57.254120  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:57.254136  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:57.307411  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:57.307456  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:57.323095  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:57.323130  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:57.400894  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:57.400928  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:57.400951  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:57.479628  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:57.479670  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:00.018936  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:00.032067  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:00.032149  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:00.065807  358357 cri.go:89] found id: ""
	I1205 21:44:00.065835  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.065844  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:00.065851  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:00.065931  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:00.100810  358357 cri.go:89] found id: ""
	I1205 21:44:00.100839  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.100847  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:00.100854  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:00.100920  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:00.136341  358357 cri.go:89] found id: ""
	I1205 21:44:00.136375  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.136388  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:00.136396  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:00.136454  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:00.173170  358357 cri.go:89] found id: ""
	I1205 21:44:00.173206  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.173227  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:00.173235  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:00.173332  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:00.208319  358357 cri.go:89] found id: ""
	I1205 21:44:00.208351  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.208363  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:00.208371  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:00.208438  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:00.250416  358357 cri.go:89] found id: ""
	I1205 21:44:00.250449  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.250463  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:00.250474  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:00.250546  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:00.285170  358357 cri.go:89] found id: ""
	I1205 21:44:00.285200  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.285212  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:00.285221  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:00.285290  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:00.320837  358357 cri.go:89] found id: ""
	I1205 21:44:00.320870  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.320879  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:00.320889  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:00.320901  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:00.334341  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:00.334375  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:00.400547  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:00.400575  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:00.400592  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:00.476133  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:00.476181  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:00.514760  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:00.514795  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:59.606994  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:01.608870  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:58.484178  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:00.484913  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:02.399767  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:04.900007  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:03.067793  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:03.081940  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:03.082023  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:03.118846  358357 cri.go:89] found id: ""
	I1205 21:44:03.118886  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.118897  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:03.118905  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:03.118962  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:03.156092  358357 cri.go:89] found id: ""
	I1205 21:44:03.156128  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.156140  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:03.156148  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:03.156219  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:03.189783  358357 cri.go:89] found id: ""
	I1205 21:44:03.189824  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.189837  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:03.189845  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:03.189913  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:03.225034  358357 cri.go:89] found id: ""
	I1205 21:44:03.225069  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.225081  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:03.225095  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:03.225177  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:03.258959  358357 cri.go:89] found id: ""
	I1205 21:44:03.258991  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.259003  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:03.259011  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:03.259075  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:03.292871  358357 cri.go:89] found id: ""
	I1205 21:44:03.292907  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.292920  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:03.292927  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:03.292983  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:03.327659  358357 cri.go:89] found id: ""
	I1205 21:44:03.327707  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.327730  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:03.327738  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:03.327810  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:03.369576  358357 cri.go:89] found id: ""
	I1205 21:44:03.369614  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.369627  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:03.369641  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:03.369656  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:03.424527  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:03.424580  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:03.438199  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:03.438231  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:03.509107  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:03.509139  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:03.509158  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:03.595637  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:03.595717  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:04.108126  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:06.109347  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:02.984401  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:04.987542  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:07.484630  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:06.900439  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:09.400464  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:06.135947  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:06.149530  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:06.149602  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:06.185659  358357 cri.go:89] found id: ""
	I1205 21:44:06.185692  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.185702  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:06.185709  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:06.185775  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:06.223238  358357 cri.go:89] found id: ""
	I1205 21:44:06.223281  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.223291  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:06.223298  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:06.223357  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:06.261842  358357 cri.go:89] found id: ""
	I1205 21:44:06.261884  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.261911  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:06.261920  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:06.261996  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:06.304416  358357 cri.go:89] found id: ""
	I1205 21:44:06.304455  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.304466  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:06.304475  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:06.304554  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:06.339676  358357 cri.go:89] found id: ""
	I1205 21:44:06.339711  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.339723  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:06.339732  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:06.339785  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:06.375594  358357 cri.go:89] found id: ""
	I1205 21:44:06.375630  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.375640  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:06.375647  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:06.375722  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:06.410953  358357 cri.go:89] found id: ""
	I1205 21:44:06.410986  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.410996  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:06.411002  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:06.411069  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:06.445559  358357 cri.go:89] found id: ""
	I1205 21:44:06.445590  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.445603  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:06.445617  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:06.445634  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:06.497474  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:06.497534  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:06.512032  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:06.512065  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:06.582809  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:06.582845  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:06.582862  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:06.663652  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:06.663696  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:09.204305  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:09.217648  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:09.217738  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:09.255398  358357 cri.go:89] found id: ""
	I1205 21:44:09.255441  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.255454  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:09.255463  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:09.255533  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:09.290268  358357 cri.go:89] found id: ""
	I1205 21:44:09.290296  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.290310  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:09.290316  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:09.290384  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:09.324546  358357 cri.go:89] found id: ""
	I1205 21:44:09.324586  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.324599  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:09.324608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:09.324684  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:09.358619  358357 cri.go:89] found id: ""
	I1205 21:44:09.358665  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.358677  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:09.358686  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:09.358757  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:09.395697  358357 cri.go:89] found id: ""
	I1205 21:44:09.395736  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.395749  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:09.395758  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:09.395838  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:09.437064  358357 cri.go:89] found id: ""
	I1205 21:44:09.437099  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.437108  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:09.437115  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:09.437172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:09.472330  358357 cri.go:89] found id: ""
	I1205 21:44:09.472368  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.472380  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:09.472388  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:09.472460  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:09.507468  358357 cri.go:89] found id: ""
	I1205 21:44:09.507510  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.507524  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:09.507538  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:09.507555  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:09.583640  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:09.583683  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:09.625830  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:09.625876  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:09.681668  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:09.681720  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:09.695305  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:09.695346  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:09.770136  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:08.608008  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:10.608715  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:09.485975  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:11.983682  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:11.899933  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:14.399690  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:12.270576  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:12.287283  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:12.287367  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:12.320855  358357 cri.go:89] found id: ""
	I1205 21:44:12.320890  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.320902  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:12.320911  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:12.320981  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:12.354550  358357 cri.go:89] found id: ""
	I1205 21:44:12.354595  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.354608  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:12.354617  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:12.354685  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:12.388487  358357 cri.go:89] found id: ""
	I1205 21:44:12.388519  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.388532  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:12.388542  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:12.388600  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:12.424338  358357 cri.go:89] found id: ""
	I1205 21:44:12.424366  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.424375  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:12.424382  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:12.424448  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:12.465997  358357 cri.go:89] found id: ""
	I1205 21:44:12.466028  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.466038  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:12.466044  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:12.466111  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:12.503567  358357 cri.go:89] found id: ""
	I1205 21:44:12.503602  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.503616  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:12.503625  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:12.503700  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:12.538669  358357 cri.go:89] found id: ""
	I1205 21:44:12.538696  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.538705  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:12.538711  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:12.538763  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:12.576375  358357 cri.go:89] found id: ""
	I1205 21:44:12.576416  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.576429  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:12.576442  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:12.576458  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:12.625471  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:12.625512  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:12.639689  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:12.639729  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:12.710873  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:12.710896  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:12.710936  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:12.789800  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:12.789841  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:15.331451  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:15.344354  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:15.344441  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:15.378596  358357 cri.go:89] found id: ""
	I1205 21:44:15.378631  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.378640  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:15.378647  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:15.378718  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:15.418342  358357 cri.go:89] found id: ""
	I1205 21:44:15.418373  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.418386  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:15.418394  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:15.418461  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:15.454130  358357 cri.go:89] found id: ""
	I1205 21:44:15.454167  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.454179  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:15.454187  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:15.454269  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:15.490777  358357 cri.go:89] found id: ""
	I1205 21:44:15.490813  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.490824  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:15.490831  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:15.490887  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:15.523706  358357 cri.go:89] found id: ""
	I1205 21:44:15.523747  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.523760  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:15.523768  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:15.523839  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:15.559019  358357 cri.go:89] found id: ""
	I1205 21:44:15.559049  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.559058  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:15.559065  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:15.559121  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:13.107960  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:15.607620  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:13.984413  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:15.984615  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:16.401714  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:18.900883  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:15.592611  358357 cri.go:89] found id: ""
	I1205 21:44:15.592640  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.592649  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:15.592655  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:15.592707  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:15.628295  358357 cri.go:89] found id: ""
	I1205 21:44:15.628333  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.628344  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:15.628354  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:15.628366  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:15.711123  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:15.711174  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:15.757486  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:15.757519  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:15.805750  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:15.805797  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:15.820685  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:15.820722  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:15.887073  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:18.388126  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:18.403082  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:18.403165  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:18.436195  358357 cri.go:89] found id: ""
	I1205 21:44:18.436230  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.436243  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:18.436255  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:18.436346  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:18.471756  358357 cri.go:89] found id: ""
	I1205 21:44:18.471788  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.471797  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:18.471804  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:18.471863  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:18.510693  358357 cri.go:89] found id: ""
	I1205 21:44:18.510741  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.510754  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:18.510763  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:18.510831  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:18.551976  358357 cri.go:89] found id: ""
	I1205 21:44:18.552014  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.552027  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:18.552036  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:18.552105  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:18.587679  358357 cri.go:89] found id: ""
	I1205 21:44:18.587716  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.587729  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:18.587738  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:18.587810  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:18.631487  358357 cri.go:89] found id: ""
	I1205 21:44:18.631519  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.631529  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:18.631547  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:18.631620  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:18.663618  358357 cri.go:89] found id: ""
	I1205 21:44:18.663646  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.663656  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:18.663665  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:18.663725  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:18.697864  358357 cri.go:89] found id: ""
	I1205 21:44:18.697894  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.697929  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:18.697943  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:18.697960  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:18.710777  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:18.710808  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:18.784195  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:18.784222  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:18.784241  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:18.863023  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:18.863071  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:18.903228  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:18.903267  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:18.106883  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:20.107752  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:22.110346  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:18.484897  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:20.983954  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:21.399201  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:23.400564  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:21.454547  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:21.468048  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:21.468131  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:21.501472  358357 cri.go:89] found id: ""
	I1205 21:44:21.501503  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.501512  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:21.501518  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:21.501576  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:21.536522  358357 cri.go:89] found id: ""
	I1205 21:44:21.536564  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.536579  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:21.536589  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:21.536653  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:21.570924  358357 cri.go:89] found id: ""
	I1205 21:44:21.570955  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.570965  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:21.570971  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:21.571039  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:21.607649  358357 cri.go:89] found id: ""
	I1205 21:44:21.607678  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.607688  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:21.607697  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:21.607766  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:21.647025  358357 cri.go:89] found id: ""
	I1205 21:44:21.647052  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.647061  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:21.647067  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:21.647118  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:21.684418  358357 cri.go:89] found id: ""
	I1205 21:44:21.684460  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.684472  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:21.684481  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:21.684554  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:21.722093  358357 cri.go:89] found id: ""
	I1205 21:44:21.722129  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.722141  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:21.722149  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:21.722208  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:21.755757  358357 cri.go:89] found id: ""
	I1205 21:44:21.755794  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.755807  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:21.755821  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:21.755839  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:21.809049  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:21.809110  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:21.823336  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:21.823371  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:21.894389  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:21.894412  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:21.894428  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:21.980288  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:21.980336  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:24.522528  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:24.535496  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:24.535587  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:24.570301  358357 cri.go:89] found id: ""
	I1205 21:44:24.570354  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.570369  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:24.570379  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:24.570452  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:24.606310  358357 cri.go:89] found id: ""
	I1205 21:44:24.606340  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.606351  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:24.606358  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:24.606427  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:24.644078  358357 cri.go:89] found id: ""
	I1205 21:44:24.644183  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.644198  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:24.644208  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:24.644293  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:24.679685  358357 cri.go:89] found id: ""
	I1205 21:44:24.679719  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.679729  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:24.679736  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:24.679817  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:24.717070  358357 cri.go:89] found id: ""
	I1205 21:44:24.717180  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.717216  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:24.717236  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:24.717309  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:24.757345  358357 cri.go:89] found id: ""
	I1205 21:44:24.757380  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.757393  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:24.757401  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:24.757480  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:24.790795  358357 cri.go:89] found id: ""
	I1205 21:44:24.790823  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.790835  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:24.790850  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:24.790911  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:24.827238  358357 cri.go:89] found id: ""
	I1205 21:44:24.827276  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.827290  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:24.827302  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:24.827318  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:24.876812  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:24.876861  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:24.916558  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:24.916604  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:24.990733  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:24.990764  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:24.990785  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:25.065792  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:25.065852  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:24.608796  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:27.107897  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:22.984109  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:24.984259  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:26.985689  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:25.899361  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:27.900251  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:29.900465  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:27.608859  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:27.622449  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:27.622516  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:27.655675  358357 cri.go:89] found id: ""
	I1205 21:44:27.655704  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.655713  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:27.655718  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:27.655785  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:27.689751  358357 cri.go:89] found id: ""
	I1205 21:44:27.689781  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.689789  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:27.689795  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:27.689870  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:27.726811  358357 cri.go:89] found id: ""
	I1205 21:44:27.726842  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.726856  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:27.726865  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:27.726930  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:27.759600  358357 cri.go:89] found id: ""
	I1205 21:44:27.759631  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.759653  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:27.759660  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:27.759716  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:27.791700  358357 cri.go:89] found id: ""
	I1205 21:44:27.791738  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.791751  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:27.791763  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:27.791828  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:27.827998  358357 cri.go:89] found id: ""
	I1205 21:44:27.828031  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.828039  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:27.828045  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:27.828102  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:27.861452  358357 cri.go:89] found id: ""
	I1205 21:44:27.861481  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.861490  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:27.861496  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:27.861560  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:27.896469  358357 cri.go:89] found id: ""
	I1205 21:44:27.896519  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.896532  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:27.896545  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:27.896560  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:27.935274  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:27.935312  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:27.986078  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:27.986116  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:28.000432  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:28.000463  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:28.074500  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:28.074530  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:28.074549  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:29.107971  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:31.108444  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:29.483791  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:31.484249  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:32.399397  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:34.400078  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:30.660117  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:30.672827  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:30.672907  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:30.711952  358357 cri.go:89] found id: ""
	I1205 21:44:30.711983  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.711993  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:30.711999  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:30.712051  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:30.747513  358357 cri.go:89] found id: ""
	I1205 21:44:30.747548  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.747558  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:30.747567  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:30.747627  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:30.782830  358357 cri.go:89] found id: ""
	I1205 21:44:30.782867  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.782878  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:30.782887  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:30.782980  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:30.820054  358357 cri.go:89] found id: ""
	I1205 21:44:30.820098  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.820111  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:30.820123  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:30.820198  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:30.857325  358357 cri.go:89] found id: ""
	I1205 21:44:30.857362  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.857373  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:30.857382  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:30.857453  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:30.893105  358357 cri.go:89] found id: ""
	I1205 21:44:30.893227  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.893267  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:30.893281  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:30.893356  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:30.932764  358357 cri.go:89] found id: ""
	I1205 21:44:30.932802  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.932815  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:30.932823  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:30.932885  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:30.968962  358357 cri.go:89] found id: ""
	I1205 21:44:30.968999  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.969011  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:30.969023  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:30.969037  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:31.022152  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:31.022198  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:31.035418  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:31.035453  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:31.100989  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:31.101017  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:31.101030  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:31.182034  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:31.182079  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:33.725770  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:33.740956  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:33.741040  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:33.779158  358357 cri.go:89] found id: ""
	I1205 21:44:33.779198  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.779210  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:33.779218  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:33.779280  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:33.814600  358357 cri.go:89] found id: ""
	I1205 21:44:33.814628  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.814641  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:33.814649  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:33.814710  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:33.850220  358357 cri.go:89] found id: ""
	I1205 21:44:33.850255  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.850267  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:33.850276  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:33.850334  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:33.883737  358357 cri.go:89] found id: ""
	I1205 21:44:33.883765  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.883774  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:33.883781  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:33.883837  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:33.915007  358357 cri.go:89] found id: ""
	I1205 21:44:33.915046  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.915059  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:33.915068  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:33.915140  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:33.949038  358357 cri.go:89] found id: ""
	I1205 21:44:33.949077  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.949093  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:33.949102  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:33.949172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:33.982396  358357 cri.go:89] found id: ""
	I1205 21:44:33.982425  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.982437  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:33.982444  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:33.982521  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:34.020834  358357 cri.go:89] found id: ""
	I1205 21:44:34.020870  358357 logs.go:282] 0 containers: []
	W1205 21:44:34.020882  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:34.020894  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:34.020911  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:34.103184  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:34.103238  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:34.147047  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:34.147091  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:34.196893  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:34.196942  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:34.211694  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:34.211730  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:34.282543  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:33.607930  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:36.108359  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:33.484472  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:35.484512  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:36.400821  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:38.899618  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:36.783278  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:36.798192  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:36.798266  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:36.832685  358357 cri.go:89] found id: ""
	I1205 21:44:36.832723  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.832736  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:36.832743  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:36.832814  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:36.868040  358357 cri.go:89] found id: ""
	I1205 21:44:36.868074  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.868085  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:36.868092  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:36.868156  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:36.901145  358357 cri.go:89] found id: ""
	I1205 21:44:36.901177  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.901186  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:36.901192  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:36.901248  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:36.935061  358357 cri.go:89] found id: ""
	I1205 21:44:36.935097  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.935107  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:36.935114  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:36.935183  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:36.984729  358357 cri.go:89] found id: ""
	I1205 21:44:36.984761  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.984773  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:36.984782  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:36.984854  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:37.024644  358357 cri.go:89] found id: ""
	I1205 21:44:37.024684  358357 logs.go:282] 0 containers: []
	W1205 21:44:37.024696  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:37.024706  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:37.024781  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:37.074238  358357 cri.go:89] found id: ""
	I1205 21:44:37.074275  358357 logs.go:282] 0 containers: []
	W1205 21:44:37.074287  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:37.074295  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:37.074356  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:37.142410  358357 cri.go:89] found id: ""
	I1205 21:44:37.142444  358357 logs.go:282] 0 containers: []
	W1205 21:44:37.142457  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:37.142469  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:37.142488  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:37.192977  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:37.193018  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:37.206357  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:37.206393  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:37.272336  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:37.272372  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:37.272390  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:37.350655  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:37.350718  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:39.897421  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:39.911734  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:39.911806  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:39.950380  358357 cri.go:89] found id: ""
	I1205 21:44:39.950418  358357 logs.go:282] 0 containers: []
	W1205 21:44:39.950432  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:39.950441  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:39.950511  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:39.987259  358357 cri.go:89] found id: ""
	I1205 21:44:39.987292  358357 logs.go:282] 0 containers: []
	W1205 21:44:39.987302  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:39.987308  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:39.987363  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:40.021052  358357 cri.go:89] found id: ""
	I1205 21:44:40.021081  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.021090  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:40.021096  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:40.021167  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:40.057837  358357 cri.go:89] found id: ""
	I1205 21:44:40.057878  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.057919  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:40.057930  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:40.058004  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:40.094797  358357 cri.go:89] found id: ""
	I1205 21:44:40.094837  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.094853  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:40.094863  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:40.094932  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:40.130356  358357 cri.go:89] found id: ""
	I1205 21:44:40.130389  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.130398  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:40.130412  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:40.130467  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:40.164352  358357 cri.go:89] found id: ""
	I1205 21:44:40.164379  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.164389  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:40.164394  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:40.164452  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:40.197337  358357 cri.go:89] found id: ""
	I1205 21:44:40.197379  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.197397  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:40.197408  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:40.197422  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:40.210014  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:40.210051  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:40.280666  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:40.280691  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:40.280706  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:40.356849  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:40.356896  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:40.395202  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:40.395237  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:38.108650  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:40.607598  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:37.983908  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:39.986080  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:42.484571  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:40.900460  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:43.400889  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:42.950686  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:42.964078  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:42.964156  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:42.999252  358357 cri.go:89] found id: ""
	I1205 21:44:42.999286  358357 logs.go:282] 0 containers: []
	W1205 21:44:42.999299  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:42.999307  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:42.999374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:43.035393  358357 cri.go:89] found id: ""
	I1205 21:44:43.035430  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.035444  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:43.035451  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:43.035505  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:43.070649  358357 cri.go:89] found id: ""
	I1205 21:44:43.070681  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.070693  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:43.070703  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:43.070776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:43.103054  358357 cri.go:89] found id: ""
	I1205 21:44:43.103089  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.103101  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:43.103110  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:43.103175  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:43.138607  358357 cri.go:89] found id: ""
	I1205 21:44:43.138640  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.138653  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:43.138661  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:43.138733  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:43.172188  358357 cri.go:89] found id: ""
	I1205 21:44:43.172220  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.172234  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:43.172241  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:43.172313  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:43.204838  358357 cri.go:89] found id: ""
	I1205 21:44:43.204872  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.204882  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:43.204891  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:43.204960  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:43.239985  358357 cri.go:89] found id: ""
	I1205 21:44:43.240011  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.240020  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:43.240031  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:43.240052  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:43.291033  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:43.291088  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:43.305100  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:43.305152  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:43.378988  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:43.379020  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:43.379054  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:43.466548  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:43.466602  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:42.607901  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:44.608143  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:47.108131  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:44.984806  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:47.484110  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:45.899359  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:47.901854  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:46.007785  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:46.021496  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:46.021592  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:46.059259  358357 cri.go:89] found id: ""
	I1205 21:44:46.059296  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.059313  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:46.059321  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:46.059378  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:46.095304  358357 cri.go:89] found id: ""
	I1205 21:44:46.095336  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.095345  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:46.095351  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:46.095417  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:46.136792  358357 cri.go:89] found id: ""
	I1205 21:44:46.136822  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.136831  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:46.136837  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:46.136891  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:46.169696  358357 cri.go:89] found id: ""
	I1205 21:44:46.169726  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.169735  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:46.169742  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:46.169810  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:46.205481  358357 cri.go:89] found id: ""
	I1205 21:44:46.205513  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.205524  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:46.205531  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:46.205586  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:46.241112  358357 cri.go:89] found id: ""
	I1205 21:44:46.241157  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.241166  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:46.241173  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:46.241233  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:46.277129  358357 cri.go:89] found id: ""
	I1205 21:44:46.277159  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.277168  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:46.277174  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:46.277236  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:46.311196  358357 cri.go:89] found id: ""
	I1205 21:44:46.311238  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.311250  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:46.311275  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:46.311302  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:46.362581  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:46.362621  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:46.375887  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:46.375924  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:46.444563  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:46.444588  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:46.444605  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:46.525811  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:46.525857  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:49.065883  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:49.079482  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:49.079586  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:49.113676  358357 cri.go:89] found id: ""
	I1205 21:44:49.113706  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.113716  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:49.113722  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:49.113792  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:49.147653  358357 cri.go:89] found id: ""
	I1205 21:44:49.147686  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.147696  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:49.147702  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:49.147766  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:49.180934  358357 cri.go:89] found id: ""
	I1205 21:44:49.180981  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.180996  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:49.181004  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:49.181064  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:49.214837  358357 cri.go:89] found id: ""
	I1205 21:44:49.214874  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.214883  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:49.214891  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:49.214960  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:49.249332  358357 cri.go:89] found id: ""
	I1205 21:44:49.249369  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.249380  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:49.249387  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:49.249451  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:49.284072  358357 cri.go:89] found id: ""
	I1205 21:44:49.284101  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.284109  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:49.284116  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:49.284169  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:49.323559  358357 cri.go:89] found id: ""
	I1205 21:44:49.323597  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.323607  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:49.323614  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:49.323675  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:49.361219  358357 cri.go:89] found id: ""
	I1205 21:44:49.361253  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.361263  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:49.361275  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:49.361291  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:49.413099  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:49.413141  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:49.426610  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:49.426648  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:49.498740  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:49.498765  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:49.498794  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:49.578451  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:49.578495  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:49.608461  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:52.108005  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:49.484743  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:51.984842  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:50.401244  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:52.899546  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:54.899788  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:52.117874  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:52.131510  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:52.131601  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:52.169491  358357 cri.go:89] found id: ""
	I1205 21:44:52.169522  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.169535  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:52.169542  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:52.169617  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:52.202511  358357 cri.go:89] found id: ""
	I1205 21:44:52.202540  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.202556  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:52.202562  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:52.202630  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:52.239649  358357 cri.go:89] found id: ""
	I1205 21:44:52.239687  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.239699  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:52.239707  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:52.239771  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:52.274330  358357 cri.go:89] found id: ""
	I1205 21:44:52.274368  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.274380  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:52.274388  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:52.274452  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:52.310165  358357 cri.go:89] found id: ""
	I1205 21:44:52.310195  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.310207  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:52.310214  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:52.310284  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:52.344246  358357 cri.go:89] found id: ""
	I1205 21:44:52.344278  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.344293  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:52.344302  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:52.344375  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:52.379475  358357 cri.go:89] found id: ""
	I1205 21:44:52.379508  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.379521  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:52.379529  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:52.379606  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:52.419952  358357 cri.go:89] found id: ""
	I1205 21:44:52.419981  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.419990  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:52.420002  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:52.420014  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:52.471608  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:52.471659  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:52.486003  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:52.486036  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:52.560751  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:52.560786  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:52.560804  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:52.641284  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:52.641340  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
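	(Editorial note: the cycle above repeats for the rest of this log. As a rough illustration only — this is a minimal sketch, not minikube's actual cri.go/logs.go code, and the command set and retry interval are assumptions read off the log lines — the probe-and-diagnose loop looks roughly like this in Go:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// Control-plane components the log probes for on each cycle.
	var components = []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}

	// listContainers mimics `sudo crictl ps -a --quiet --name=<name>`.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for {
			found := false
			for _, c := range components {
				ids, err := listContainers(c)
				if err != nil || len(ids) == 0 {
					fmt.Printf("No container was found matching %q\n", c)
					continue
				}
				found = true
			}
			if found {
				return
			}
			// Nothing running yet: gather host diagnostics, then retry,
			// matching the ~3s cadence between cycles in the report.
			for _, cmd := range [][]string{
				{"journalctl", "-u", "kubelet", "-n", "400"},
				{"dmesg", "-PH", "-L=never", "--level", "warn,err,crit,alert,emerg"},
				{"journalctl", "-u", "crio", "-n", "400"},
				{"crictl", "ps", "-a"},
			} {
				_ = exec.Command("sudo", cmd...).Run() // output would be collected into the report
			}
			time.Sleep(3 * time.Second)
		}
	}

	The "describe nodes" step in each cycle fails with "connection to the server localhost:8443 was refused" because no kube-apiserver container is running, which is consistent with every crictl probe returning an empty ID list.)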
	I1205 21:44:55.183102  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:55.197406  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:55.197502  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:55.231335  358357 cri.go:89] found id: ""
	I1205 21:44:55.231365  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.231373  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:55.231381  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:55.231440  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:55.267877  358357 cri.go:89] found id: ""
	I1205 21:44:55.267907  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.267916  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:55.267923  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:55.267978  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:55.302400  358357 cri.go:89] found id: ""
	I1205 21:44:55.302428  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.302437  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:55.302443  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:55.302496  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:55.337878  358357 cri.go:89] found id: ""
	I1205 21:44:55.337932  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.337946  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:55.337954  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:55.338008  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:55.371877  358357 cri.go:89] found id: ""
	I1205 21:44:55.371920  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.371931  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:55.371941  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:55.372020  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:55.406914  358357 cri.go:89] found id: ""
	I1205 21:44:55.406947  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.406961  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:55.406970  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:55.407043  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:55.439910  358357 cri.go:89] found id: ""
	I1205 21:44:55.439940  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.439949  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:55.439955  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:55.440011  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:55.476886  358357 cri.go:89] found id: ""
	I1205 21:44:55.476916  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.476925  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:55.476936  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:55.476949  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:55.531376  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:55.531422  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:55.545011  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:55.545050  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:44:54.108283  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:56.609653  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:53.985156  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:56.484908  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:57.400823  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:59.904973  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	W1205 21:44:55.620082  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:55.620122  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:55.620139  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:55.708465  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:55.708512  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:58.256289  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:58.269484  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:58.269560  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:58.303846  358357 cri.go:89] found id: ""
	I1205 21:44:58.303884  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.303897  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:58.303906  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:58.303978  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:58.343160  358357 cri.go:89] found id: ""
	I1205 21:44:58.343190  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.343199  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:58.343205  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:58.343269  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:58.379207  358357 cri.go:89] found id: ""
	I1205 21:44:58.379240  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.379252  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:58.379261  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:58.379323  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:58.415939  358357 cri.go:89] found id: ""
	I1205 21:44:58.415971  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.415981  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:58.415988  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:58.416046  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:58.450799  358357 cri.go:89] found id: ""
	I1205 21:44:58.450837  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.450848  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:58.450857  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:58.450927  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:58.487557  358357 cri.go:89] found id: ""
	I1205 21:44:58.487594  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.487602  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:58.487608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:58.487659  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:58.523932  358357 cri.go:89] found id: ""
	I1205 21:44:58.523960  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.523969  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:58.523976  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:58.524041  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:58.559140  358357 cri.go:89] found id: ""
	I1205 21:44:58.559169  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.559179  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:58.559193  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:58.559209  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:58.643471  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:58.643520  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:58.683077  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:58.683118  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:58.736396  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:58.736441  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:58.751080  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:58.751115  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:58.824208  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:59.108134  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:01.608008  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:58.984778  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:01.486140  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:02.400031  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:04.400426  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:01.324977  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:01.338088  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:01.338169  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:01.375859  358357 cri.go:89] found id: ""
	I1205 21:45:01.375913  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.375927  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:01.375936  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:01.376012  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:01.411327  358357 cri.go:89] found id: ""
	I1205 21:45:01.411367  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.411377  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:01.411384  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:01.411441  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:01.446560  358357 cri.go:89] found id: ""
	I1205 21:45:01.446599  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.446612  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:01.446620  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:01.446687  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:01.480650  358357 cri.go:89] found id: ""
	I1205 21:45:01.480688  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.480702  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:01.480711  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:01.480788  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:01.515546  358357 cri.go:89] found id: ""
	I1205 21:45:01.515596  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.515609  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:01.515615  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:01.515680  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:01.550395  358357 cri.go:89] found id: ""
	I1205 21:45:01.550435  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.550449  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:01.550457  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:01.550619  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:01.588327  358357 cri.go:89] found id: ""
	I1205 21:45:01.588362  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.588375  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:01.588385  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:01.588456  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:01.622881  358357 cri.go:89] found id: ""
	I1205 21:45:01.622922  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.622934  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:01.622948  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:01.622965  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:01.673702  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:01.673752  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:01.689462  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:01.689504  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:01.758509  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:01.758536  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:01.758550  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:01.839238  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:01.839294  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:04.380325  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:04.393102  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:04.393192  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:04.428295  358357 cri.go:89] found id: ""
	I1205 21:45:04.428327  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.428339  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:04.428348  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:04.428455  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:04.463190  358357 cri.go:89] found id: ""
	I1205 21:45:04.463226  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.463238  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:04.463246  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:04.463316  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:04.496966  358357 cri.go:89] found id: ""
	I1205 21:45:04.497010  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.497022  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:04.497030  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:04.497097  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:04.531907  358357 cri.go:89] found id: ""
	I1205 21:45:04.531938  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.531950  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:04.531958  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:04.532031  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:04.565760  358357 cri.go:89] found id: ""
	I1205 21:45:04.565793  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.565806  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:04.565815  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:04.565885  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:04.599720  358357 cri.go:89] found id: ""
	I1205 21:45:04.599756  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.599768  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:04.599774  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:04.599829  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:04.635208  358357 cri.go:89] found id: ""
	I1205 21:45:04.635241  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.635250  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:04.635257  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:04.635320  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:04.670121  358357 cri.go:89] found id: ""
	I1205 21:45:04.670153  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.670162  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:04.670171  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:04.670183  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:04.708596  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:04.708641  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:04.765866  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:04.765919  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:04.780740  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:04.780772  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:04.856357  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:04.856386  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:04.856406  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:03.608315  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:06.107838  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:03.983888  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:05.990166  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:06.900029  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:08.900926  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
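	(Editorial note: the interleaved pod_ready lines from PIDs 357296, 357831 and 357912 are three parallel tests each waiting for their metrics-server pod to report Ready. A minimal sketch of that polling pattern, assuming client-go and using a pod name copied from the log — not minikube's actual pod_ready.go implementation:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isReady reports whether the PodReady condition is True.
	func isReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		const ns, name = "kube-system", "metrics-server-6867b74b74-dggmv" // pod name taken from the log above
		for {
			pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil && isReady(pod) {
				fmt.Println("pod is Ready")
				return
			}
			fmt.Printf("pod %q in %q namespace has status \"Ready\":\"False\"\n", name, ns)
			time.Sleep(2 * time.Second)
		}
	}

	In the failing runs recorded here the condition never turns True, so these lines keep repeating until the test's timeout expires.)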
	I1205 21:45:07.437028  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:07.450097  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:07.450168  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:07.485877  358357 cri.go:89] found id: ""
	I1205 21:45:07.485921  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.485934  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:07.485943  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:07.486007  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:07.520629  358357 cri.go:89] found id: ""
	I1205 21:45:07.520658  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.520666  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:07.520673  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:07.520732  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:07.555445  358357 cri.go:89] found id: ""
	I1205 21:45:07.555476  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.555487  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:07.555493  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:07.555560  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:07.594479  358357 cri.go:89] found id: ""
	I1205 21:45:07.594513  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.594526  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:07.594533  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:07.594594  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:07.629467  358357 cri.go:89] found id: ""
	I1205 21:45:07.629498  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.629509  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:07.629516  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:07.629572  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:07.666166  358357 cri.go:89] found id: ""
	I1205 21:45:07.666204  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.666218  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:07.666227  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:07.666303  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:07.700440  358357 cri.go:89] found id: ""
	I1205 21:45:07.700472  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.700481  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:07.700490  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:07.700557  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:07.735094  358357 cri.go:89] found id: ""
	I1205 21:45:07.735130  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.735152  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:07.735166  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:07.735184  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:07.788339  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:07.788386  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:07.802847  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:07.802879  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:07.873731  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:07.873755  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:07.873771  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:07.953369  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:07.953411  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:10.492613  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:10.506259  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:10.506374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:10.540075  358357 cri.go:89] found id: ""
	I1205 21:45:10.540111  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.540120  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:10.540127  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:10.540216  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:08.108464  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:10.611075  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:08.483571  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:10.485086  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:11.399948  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:13.400364  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:10.577943  358357 cri.go:89] found id: ""
	I1205 21:45:10.577978  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.577991  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:10.577998  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:10.578073  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:10.614217  358357 cri.go:89] found id: ""
	I1205 21:45:10.614255  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.614268  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:10.614276  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:10.614346  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:10.649669  358357 cri.go:89] found id: ""
	I1205 21:45:10.649739  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.649751  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:10.649760  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:10.649830  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:10.687171  358357 cri.go:89] found id: ""
	I1205 21:45:10.687202  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.687211  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:10.687217  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:10.687307  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:10.722815  358357 cri.go:89] found id: ""
	I1205 21:45:10.722848  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.722858  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:10.722865  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:10.722934  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:10.759711  358357 cri.go:89] found id: ""
	I1205 21:45:10.759753  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.759767  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:10.759777  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:10.759849  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:10.797955  358357 cri.go:89] found id: ""
	I1205 21:45:10.797991  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.798004  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:10.798017  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:10.798034  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:10.851920  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:10.851971  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:10.867691  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:10.867728  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:10.953866  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:10.953891  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:10.953928  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:11.033945  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:11.033990  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:13.574051  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:13.587371  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:13.587454  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:13.623492  358357 cri.go:89] found id: ""
	I1205 21:45:13.623524  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.623540  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:13.623546  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:13.623603  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:13.659547  358357 cri.go:89] found id: ""
	I1205 21:45:13.659588  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.659602  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:13.659610  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:13.659671  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:13.694113  358357 cri.go:89] found id: ""
	I1205 21:45:13.694153  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.694166  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:13.694173  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:13.694233  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:13.729551  358357 cri.go:89] found id: ""
	I1205 21:45:13.729591  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.729604  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:13.729613  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:13.729684  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:13.763006  358357 cri.go:89] found id: ""
	I1205 21:45:13.763049  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.763062  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:13.763071  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:13.763134  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:13.802231  358357 cri.go:89] found id: ""
	I1205 21:45:13.802277  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.802292  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:13.802302  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:13.802384  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:13.840193  358357 cri.go:89] found id: ""
	I1205 21:45:13.840225  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.840240  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:13.840249  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:13.840335  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:13.872625  358357 cri.go:89] found id: ""
	I1205 21:45:13.872653  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.872663  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:13.872673  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:13.872687  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:13.922983  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:13.923028  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:13.936484  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:13.936517  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:14.008295  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:14.008319  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:14.008334  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:14.095036  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:14.095091  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:13.110174  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:15.608405  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:12.986058  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:15.483570  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:17.484738  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:15.899141  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:17.899862  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:19.900993  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:16.637164  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:16.653070  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:16.653153  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:16.687386  358357 cri.go:89] found id: ""
	I1205 21:45:16.687441  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.687456  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:16.687466  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:16.687545  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:16.722204  358357 cri.go:89] found id: ""
	I1205 21:45:16.722235  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.722244  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:16.722250  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:16.722323  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:16.757594  358357 cri.go:89] found id: ""
	I1205 21:45:16.757622  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.757631  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:16.757637  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:16.757691  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:16.790401  358357 cri.go:89] found id: ""
	I1205 21:45:16.790433  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.790442  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:16.790449  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:16.790502  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:16.827569  358357 cri.go:89] found id: ""
	I1205 21:45:16.827602  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.827615  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:16.827624  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:16.827701  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:16.860920  358357 cri.go:89] found id: ""
	I1205 21:45:16.860949  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.860965  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:16.860974  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:16.861038  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:16.895008  358357 cri.go:89] found id: ""
	I1205 21:45:16.895051  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.895063  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:16.895072  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:16.895151  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:16.931916  358357 cri.go:89] found id: ""
	I1205 21:45:16.931951  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.931963  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:16.931975  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:16.931987  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:17.016108  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:17.016156  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:17.055353  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:17.055390  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:17.105859  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:17.105921  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:17.121357  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:17.121394  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:17.192584  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:19.693409  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:19.706431  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:19.706498  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:19.741212  358357 cri.go:89] found id: ""
	I1205 21:45:19.741249  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.741258  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:19.741268  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:19.741335  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:19.775906  358357 cri.go:89] found id: ""
	I1205 21:45:19.775945  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.775954  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:19.775960  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:19.776031  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:19.810789  358357 cri.go:89] found id: ""
	I1205 21:45:19.810822  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.810831  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:19.810839  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:19.810897  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:19.847669  358357 cri.go:89] found id: ""
	I1205 21:45:19.847701  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.847710  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:19.847717  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:19.847776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:19.881700  358357 cri.go:89] found id: ""
	I1205 21:45:19.881739  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.881752  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:19.881761  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:19.881838  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:19.919085  358357 cri.go:89] found id: ""
	I1205 21:45:19.919125  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.919140  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:19.919148  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:19.919226  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:19.955024  358357 cri.go:89] found id: ""
	I1205 21:45:19.955064  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.955078  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:19.955086  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:19.955153  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:19.991482  358357 cri.go:89] found id: ""
	I1205 21:45:19.991511  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.991519  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:19.991530  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:19.991543  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:20.041980  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:20.042030  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:20.055580  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:20.055612  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:20.127194  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:20.127225  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:20.127242  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:20.207750  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:20.207797  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:18.108143  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:20.108435  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:22.109088  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:19.985203  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:21.986674  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:22.399189  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:24.400311  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:22.749233  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:22.763720  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:22.763796  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:22.798779  358357 cri.go:89] found id: ""
	I1205 21:45:22.798810  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.798820  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:22.798826  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:22.798906  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:22.837894  358357 cri.go:89] found id: ""
	I1205 21:45:22.837949  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.837964  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:22.837972  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:22.838026  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:22.872671  358357 cri.go:89] found id: ""
	I1205 21:45:22.872701  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.872713  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:22.872720  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:22.872785  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:22.906877  358357 cri.go:89] found id: ""
	I1205 21:45:22.906919  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.906929  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:22.906936  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:22.906988  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:22.941445  358357 cri.go:89] found id: ""
	I1205 21:45:22.941475  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.941486  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:22.941494  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:22.941565  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:22.976633  358357 cri.go:89] found id: ""
	I1205 21:45:22.976671  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.976685  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:22.976694  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:22.976773  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:23.017034  358357 cri.go:89] found id: ""
	I1205 21:45:23.017077  358357 logs.go:282] 0 containers: []
	W1205 21:45:23.017090  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:23.017096  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:23.017153  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:23.065098  358357 cri.go:89] found id: ""
	I1205 21:45:23.065136  358357 logs.go:282] 0 containers: []
	W1205 21:45:23.065149  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:23.065164  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:23.065180  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:23.145053  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:23.145104  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:23.159522  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:23.159557  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:23.228841  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:23.228865  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:23.228885  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:23.313351  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:23.313397  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:24.110151  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:26.607420  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:23.992037  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:26.484076  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:26.400904  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:28.899210  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:25.852034  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:25.865843  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:25.865944  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:25.899186  358357 cri.go:89] found id: ""
	I1205 21:45:25.899212  358357 logs.go:282] 0 containers: []
	W1205 21:45:25.899222  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:25.899231  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:25.899298  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:25.938242  358357 cri.go:89] found id: ""
	I1205 21:45:25.938274  358357 logs.go:282] 0 containers: []
	W1205 21:45:25.938286  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:25.938299  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:25.938371  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:25.972322  358357 cri.go:89] found id: ""
	I1205 21:45:25.972355  358357 logs.go:282] 0 containers: []
	W1205 21:45:25.972368  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:25.972376  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:25.972446  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:26.010638  358357 cri.go:89] found id: ""
	I1205 21:45:26.010667  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.010678  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:26.010686  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:26.010754  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:26.045415  358357 cri.go:89] found id: ""
	I1205 21:45:26.045450  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.045459  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:26.045466  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:26.045548  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:26.084635  358357 cri.go:89] found id: ""
	I1205 21:45:26.084673  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.084687  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:26.084696  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:26.084767  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:26.117417  358357 cri.go:89] found id: ""
	I1205 21:45:26.117455  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.117467  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:26.117475  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:26.117539  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:26.151857  358357 cri.go:89] found id: ""
	I1205 21:45:26.151893  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.151905  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:26.151918  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:26.151936  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:26.238876  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:26.238926  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:26.280970  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:26.281006  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:26.336027  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:26.336083  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:26.350619  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:26.350654  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:26.418836  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:28.919046  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:28.933916  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:28.934002  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:28.971698  358357 cri.go:89] found id: ""
	I1205 21:45:28.971728  358357 logs.go:282] 0 containers: []
	W1205 21:45:28.971737  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:28.971744  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:28.971807  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:29.007385  358357 cri.go:89] found id: ""
	I1205 21:45:29.007423  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.007435  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:29.007443  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:29.007509  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:29.041087  358357 cri.go:89] found id: ""
	I1205 21:45:29.041130  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.041143  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:29.041151  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:29.041222  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:29.076926  358357 cri.go:89] found id: ""
	I1205 21:45:29.076965  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.076977  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:29.076986  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:29.077064  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:29.116376  358357 cri.go:89] found id: ""
	I1205 21:45:29.116419  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.116433  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:29.116443  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:29.116523  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:29.152495  358357 cri.go:89] found id: ""
	I1205 21:45:29.152530  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.152543  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:29.152552  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:29.152639  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:29.187647  358357 cri.go:89] found id: ""
	I1205 21:45:29.187681  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.187695  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:29.187704  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:29.187775  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:29.220410  358357 cri.go:89] found id: ""
	I1205 21:45:29.220452  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.220469  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:29.220484  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:29.220513  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:29.287156  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:29.287184  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:29.287200  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:29.365592  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:29.365644  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:29.407876  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:29.407917  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:29.462241  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:29.462294  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:28.607611  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:30.608683  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:28.484925  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:30.485979  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:30.899449  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:32.900189  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:34.900501  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:31.976691  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:31.991087  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:31.991172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:32.025743  358357 cri.go:89] found id: ""
	I1205 21:45:32.025781  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.025793  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:32.025801  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:32.025870  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:32.061790  358357 cri.go:89] found id: ""
	I1205 21:45:32.061828  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.061838  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:32.061844  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:32.061929  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:32.095437  358357 cri.go:89] found id: ""
	I1205 21:45:32.095474  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.095486  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:32.095493  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:32.095553  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:32.132203  358357 cri.go:89] found id: ""
	I1205 21:45:32.132242  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.132255  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:32.132264  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:32.132325  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:32.168529  358357 cri.go:89] found id: ""
	I1205 21:45:32.168566  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.168582  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:32.168590  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:32.168661  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:32.204816  358357 cri.go:89] found id: ""
	I1205 21:45:32.204851  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.204860  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:32.204885  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:32.204949  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:32.241661  358357 cri.go:89] found id: ""
	I1205 21:45:32.241696  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.241706  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:32.241712  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:32.241768  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:32.275458  358357 cri.go:89] found id: ""
	I1205 21:45:32.275491  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.275500  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:32.275511  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:32.275524  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:32.329044  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:32.329098  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:32.343399  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:32.343432  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:32.420102  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:32.420135  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:32.420152  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:32.503061  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:32.503109  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:35.042457  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:35.056486  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:35.056564  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:35.091571  358357 cri.go:89] found id: ""
	I1205 21:45:35.091603  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.091613  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:35.091619  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:35.091686  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:35.130172  358357 cri.go:89] found id: ""
	I1205 21:45:35.130213  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.130225  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:35.130233  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:35.130303  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:35.165723  358357 cri.go:89] found id: ""
	I1205 21:45:35.165754  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.165763  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:35.165770  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:35.165836  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:35.203599  358357 cri.go:89] found id: ""
	I1205 21:45:35.203632  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.203646  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:35.203658  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:35.203721  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:35.237881  358357 cri.go:89] found id: ""
	I1205 21:45:35.237926  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.237938  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:35.237946  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:35.238015  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:35.276506  358357 cri.go:89] found id: ""
	I1205 21:45:35.276543  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.276555  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:35.276563  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:35.276632  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:35.309600  358357 cri.go:89] found id: ""
	I1205 21:45:35.309632  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.309644  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:35.309652  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:35.309723  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:35.343062  358357 cri.go:89] found id: ""
	I1205 21:45:35.343097  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.343110  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:35.343124  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:35.343146  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:35.398686  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:35.398724  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:35.412910  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:35.412945  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:35.479542  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:35.479570  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:35.479587  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:35.556709  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:35.556754  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:33.107324  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:35.108931  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:32.988514  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:35.485301  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:37.399616  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:39.400552  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:38.095347  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:38.110086  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:38.110161  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:38.149114  358357 cri.go:89] found id: ""
	I1205 21:45:38.149149  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.149162  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:38.149172  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:38.149250  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:38.184110  358357 cri.go:89] found id: ""
	I1205 21:45:38.184141  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.184151  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:38.184157  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:38.184213  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:38.219569  358357 cri.go:89] found id: ""
	I1205 21:45:38.219608  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.219620  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:38.219628  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:38.219703  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:38.253096  358357 cri.go:89] found id: ""
	I1205 21:45:38.253133  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.253158  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:38.253167  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:38.253259  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:38.291558  358357 cri.go:89] found id: ""
	I1205 21:45:38.291591  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.291601  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:38.291608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:38.291689  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:38.328236  358357 cri.go:89] found id: ""
	I1205 21:45:38.328269  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.328281  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:38.328288  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:38.328353  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:38.363263  358357 cri.go:89] found id: ""
	I1205 21:45:38.363295  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.363305  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:38.363311  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:38.363371  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:38.396544  358357 cri.go:89] found id: ""
	I1205 21:45:38.396577  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.396587  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:38.396598  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:38.396611  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:38.438187  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:38.438226  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:38.492047  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:38.492086  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:38.505080  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:38.505123  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:38.574293  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:38.574320  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:38.574343  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:37.608407  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:39.609266  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:42.107313  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:37.984499  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:40.484539  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:41.898538  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:43.900097  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:41.155780  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:41.170875  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:41.170959  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:41.206755  358357 cri.go:89] found id: ""
	I1205 21:45:41.206793  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.206807  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:41.206824  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:41.206882  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:41.251021  358357 cri.go:89] found id: ""
	I1205 21:45:41.251060  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.251074  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:41.251082  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:41.251144  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:41.286805  358357 cri.go:89] found id: ""
	I1205 21:45:41.286836  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.286845  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:41.286852  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:41.286910  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:41.319489  358357 cri.go:89] found id: ""
	I1205 21:45:41.319526  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.319540  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:41.319549  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:41.319620  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:41.352769  358357 cri.go:89] found id: ""
	I1205 21:45:41.352807  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.352817  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:41.352823  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:41.352883  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:41.386830  358357 cri.go:89] found id: ""
	I1205 21:45:41.386869  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.386881  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:41.386889  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:41.386961  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:41.424824  358357 cri.go:89] found id: ""
	I1205 21:45:41.424866  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.424882  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:41.424892  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:41.424957  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:41.460273  358357 cri.go:89] found id: ""
	I1205 21:45:41.460307  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.460316  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:41.460327  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:41.460341  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:41.539890  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:41.539951  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:41.579521  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:41.579570  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:41.630867  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:41.630917  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:41.644854  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:41.644892  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:41.719202  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:44.219965  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:44.234714  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:44.234824  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:44.269879  358357 cri.go:89] found id: ""
	I1205 21:45:44.269931  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.269945  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:44.269954  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:44.270023  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:44.302994  358357 cri.go:89] found id: ""
	I1205 21:45:44.303034  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.303047  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:44.303056  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:44.303126  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:44.337575  358357 cri.go:89] found id: ""
	I1205 21:45:44.337604  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.337613  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:44.337620  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:44.337674  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:44.374554  358357 cri.go:89] found id: ""
	I1205 21:45:44.374591  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.374600  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:44.374605  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:44.374671  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:44.409965  358357 cri.go:89] found id: ""
	I1205 21:45:44.410001  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.410013  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:44.410021  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:44.410090  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:44.446583  358357 cri.go:89] found id: ""
	I1205 21:45:44.446620  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.446633  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:44.446641  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:44.446705  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:44.481187  358357 cri.go:89] found id: ""
	I1205 21:45:44.481223  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.481239  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:44.481248  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:44.481315  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:44.515729  358357 cri.go:89] found id: ""
	I1205 21:45:44.515761  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.515770  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:44.515781  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:44.515799  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:44.567266  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:44.567314  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:44.581186  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:44.581219  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:44.655377  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:44.655404  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:44.655420  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:44.741789  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:44.741835  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:44.108015  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:46.109878  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:42.987144  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:45.484635  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:45.900943  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:48.399795  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:47.283721  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:47.296771  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:47.296839  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:47.330892  358357 cri.go:89] found id: ""
	I1205 21:45:47.330927  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.330941  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:47.330949  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:47.331015  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:47.362771  358357 cri.go:89] found id: ""
	I1205 21:45:47.362805  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.362818  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:47.362826  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:47.362898  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:47.397052  358357 cri.go:89] found id: ""
	I1205 21:45:47.397082  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.397092  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:47.397100  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:47.397172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:47.430155  358357 cri.go:89] found id: ""
	I1205 21:45:47.430184  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.430193  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:47.430199  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:47.430255  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:47.465183  358357 cri.go:89] found id: ""
	I1205 21:45:47.465230  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.465244  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:47.465252  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:47.465327  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:47.505432  358357 cri.go:89] found id: ""
	I1205 21:45:47.505467  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.505479  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:47.505487  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:47.505583  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:47.538813  358357 cri.go:89] found id: ""
	I1205 21:45:47.538841  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.538851  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:47.538859  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:47.538913  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:47.577554  358357 cri.go:89] found id: ""
	I1205 21:45:47.577589  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.577598  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:47.577610  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:47.577623  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:47.633652  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:47.633700  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:47.648242  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:47.648291  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:47.723335  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:47.723369  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:47.723387  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:47.806404  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:47.806454  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:50.348134  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:50.361273  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:50.361367  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:50.393942  358357 cri.go:89] found id: ""
	I1205 21:45:50.393972  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.393980  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:50.393986  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:50.394054  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:50.430835  358357 cri.go:89] found id: ""
	I1205 21:45:50.430873  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.430884  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:50.430892  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:50.430963  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:50.465245  358357 cri.go:89] found id: ""
	I1205 21:45:50.465303  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.465316  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:50.465326  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:50.465397  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:50.498370  358357 cri.go:89] found id: ""
	I1205 21:45:50.498396  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.498406  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:50.498414  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:50.498480  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:50.530194  358357 cri.go:89] found id: ""
	I1205 21:45:50.530233  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.530247  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:50.530262  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:50.530383  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:48.607163  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:50.608353  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:47.984724  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:50.483783  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:52.484838  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:50.400860  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:52.898957  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:54.399893  357912 pod_ready.go:82] duration metric: took 4m0.00693537s for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	E1205 21:45:54.399922  357912 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 21:45:54.399931  357912 pod_ready.go:39] duration metric: took 4m6.388856223s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:45:54.399958  357912 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:45:54.399994  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:54.400045  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:54.436650  357912 cri.go:89] found id: "079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:54.436679  357912 cri.go:89] found id: ""
	I1205 21:45:54.436690  357912 logs.go:282] 1 containers: [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828]
	I1205 21:45:54.436751  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.440795  357912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:54.440866  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:54.475714  357912 cri.go:89] found id: "035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:54.475739  357912 cri.go:89] found id: ""
	I1205 21:45:54.475749  357912 logs.go:282] 1 containers: [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7]
	I1205 21:45:54.475879  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.480165  357912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:54.480255  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:54.516427  357912 cri.go:89] found id: "d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:54.516459  357912 cri.go:89] found id: ""
	I1205 21:45:54.516468  357912 logs.go:282] 1 containers: [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6]
	I1205 21:45:54.516529  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.520486  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:54.520548  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:54.555687  357912 cri.go:89] found id: "c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:54.555719  357912 cri.go:89] found id: ""
	I1205 21:45:54.555727  357912 logs.go:282] 1 containers: [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5]
	I1205 21:45:54.555789  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.559827  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:54.559916  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:54.596640  357912 cri.go:89] found id: "963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:54.596665  357912 cri.go:89] found id: ""
	I1205 21:45:54.596675  357912 logs.go:282] 1 containers: [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d]
	I1205 21:45:54.596753  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.601144  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:54.601229  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:54.639374  357912 cri.go:89] found id: "807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:54.639408  357912 cri.go:89] found id: ""
	I1205 21:45:54.639419  357912 logs.go:282] 1 containers: [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a]
	I1205 21:45:54.639495  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.643665  357912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:54.643754  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:54.678252  357912 cri.go:89] found id: ""
	I1205 21:45:54.678286  357912 logs.go:282] 0 containers: []
	W1205 21:45:54.678297  357912 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:54.678306  357912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 21:45:54.678373  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 21:45:54.711874  357912 cri.go:89] found id: "7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:54.711908  357912 cri.go:89] found id: "37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:54.711915  357912 cri.go:89] found id: ""
	I1205 21:45:54.711925  357912 logs.go:282] 2 containers: [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa]
	I1205 21:45:54.711994  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.716164  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.720244  357912 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:54.720274  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:55.258307  357912 logs.go:123] Gathering logs for container status ...
	I1205 21:45:55.258372  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:55.300132  357912 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:55.300198  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:55.315703  357912 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:55.315745  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:45:50.567181  358357 cri.go:89] found id: ""
	I1205 21:45:50.567216  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.567229  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:50.567237  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:50.567329  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:50.600345  358357 cri.go:89] found id: ""
	I1205 21:45:50.600376  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.600385  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:50.600392  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:50.600446  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:50.635072  358357 cri.go:89] found id: ""
	I1205 21:45:50.635108  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.635121  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:50.635133  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:50.635146  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:50.702977  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:50.703001  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:50.703020  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:50.785033  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:50.785077  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:50.825173  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:50.825214  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:50.876664  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:50.876723  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:53.391161  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:53.405635  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:53.405713  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:53.440319  358357 cri.go:89] found id: ""
	I1205 21:45:53.440358  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.440371  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:53.440380  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:53.440446  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:53.480169  358357 cri.go:89] found id: ""
	I1205 21:45:53.480195  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.480204  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:53.480210  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:53.480355  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:53.515202  358357 cri.go:89] found id: ""
	I1205 21:45:53.515233  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.515315  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:53.515332  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:53.515401  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:53.552351  358357 cri.go:89] found id: ""
	I1205 21:45:53.552388  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.552402  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:53.552411  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:53.552481  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:53.590669  358357 cri.go:89] found id: ""
	I1205 21:45:53.590705  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.590717  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:53.590726  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:53.590791  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:53.627977  358357 cri.go:89] found id: ""
	I1205 21:45:53.628015  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.628029  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:53.628037  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:53.628112  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:53.662711  358357 cri.go:89] found id: ""
	I1205 21:45:53.662745  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.662761  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:53.662769  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:53.662839  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:53.696925  358357 cri.go:89] found id: ""
	I1205 21:45:53.696965  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.696976  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:53.696988  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:53.697012  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:53.750924  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:53.750970  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:53.763965  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:53.763997  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:53.832335  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:53.832361  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:53.832377  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:53.915961  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:53.916011  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:53.107436  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:55.107826  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:57.108330  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:56.456367  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:56.469503  358357 kubeadm.go:597] duration metric: took 4m2.564660353s to restartPrimaryControlPlane
	W1205 21:45:56.469630  358357 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 21:45:56.469672  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:45:56.934079  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:45:56.948092  358357 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:45:56.958166  358357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:45:56.967591  358357 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:45:56.967613  358357 kubeadm.go:157] found existing configuration files:
	
	I1205 21:45:56.967660  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:45:56.977085  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:45:56.977152  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:45:56.987395  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:45:56.996675  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:45:56.996764  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:45:57.010323  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:45:57.020441  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:45:57.020514  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:45:57.032114  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:45:57.042012  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:45:57.042095  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
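
[annotation] The sequence above is the stale-kubeconfig cleanup that precedes the "kubeadm init" fallback: each file under /etc/kubernetes is grepped for the control-plane endpoint, and removed with "rm -f" when the grep fails (the file is missing or points elsewhere). A minimal sketch of that check-and-remove step, again assuming a hypothetical run helper:

    package sketch

    import "fmt"

    // cleanStaleKubeconfig mirrors the "may not be in ... - will remove"
    // pattern above: grep exits non-zero when the endpoint string (or the
    // file itself) is missing, in which case the file is removed so that
    // kubeadm init can regenerate it.
    func cleanStaleKubeconfig(run func(string) (string, error), endpoint, path string) error {
            if _, err := run(fmt.Sprintf("sudo grep %s %s", endpoint, path)); err != nil {
                    _, rmErr := run("sudo rm -f " + path)
                    return rmErr
            }
            return nil
    }

In the log this runs in turn for admin.conf, kubelet.conf, controller-manager.conf, and scheduler.conf before the kubeadm init command that follows.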
	I1205 21:45:57.051763  358357 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:45:57.126716  358357 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 21:45:57.126840  358357 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:45:57.265491  358357 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:45:57.265694  358357 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:45:57.265856  358357 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 21:45:57.450377  358357 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:45:54.486224  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:56.984442  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:57.452240  358357 out.go:235]   - Generating certificates and keys ...
	I1205 21:45:57.452361  358357 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:45:57.452458  358357 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:45:57.452625  358357 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:45:57.452712  358357 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:45:57.452824  358357 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:45:57.452913  358357 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:45:57.453084  358357 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:45:57.453179  358357 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:45:57.453276  358357 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:45:57.453343  358357 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:45:57.453377  358357 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:45:57.453430  358357 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:45:57.872211  358357 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:45:58.085006  358357 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:45:58.165194  358357 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:45:58.323597  358357 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:45:58.338715  358357 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:45:58.340504  358357 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:45:58.340604  358357 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:45:58.479241  358357 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:45:55.429307  357912 logs.go:123] Gathering logs for etcd [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7] ...
	I1205 21:45:55.429346  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:55.476044  357912 logs.go:123] Gathering logs for kube-proxy [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d] ...
	I1205 21:45:55.476085  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:55.512956  357912 logs.go:123] Gathering logs for kube-controller-manager [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a] ...
	I1205 21:45:55.513004  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:55.570534  357912 logs.go:123] Gathering logs for storage-provisioner [37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa] ...
	I1205 21:45:55.570583  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:55.608099  357912 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:55.608141  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:55.677021  357912 logs.go:123] Gathering logs for kube-apiserver [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828] ...
	I1205 21:45:55.677069  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:55.727298  357912 logs.go:123] Gathering logs for coredns [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6] ...
	I1205 21:45:55.727347  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:55.764637  357912 logs.go:123] Gathering logs for kube-scheduler [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5] ...
	I1205 21:45:55.764675  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:55.803471  357912 logs.go:123] Gathering logs for storage-provisioner [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4] ...
	I1205 21:45:55.803513  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:58.347406  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:58.362574  357912 api_server.go:72] duration metric: took 4m18.075855986s to wait for apiserver process to appear ...
	I1205 21:45:58.362609  357912 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:45:58.362658  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:58.362724  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:58.407526  357912 cri.go:89] found id: "079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:58.407559  357912 cri.go:89] found id: ""
	I1205 21:45:58.407571  357912 logs.go:282] 1 containers: [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828]
	I1205 21:45:58.407642  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.412133  357912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:58.412221  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:58.454243  357912 cri.go:89] found id: "035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:58.454280  357912 cri.go:89] found id: ""
	I1205 21:45:58.454292  357912 logs.go:282] 1 containers: [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7]
	I1205 21:45:58.454381  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.458950  357912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:58.459038  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:58.502502  357912 cri.go:89] found id: "d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:58.502527  357912 cri.go:89] found id: ""
	I1205 21:45:58.502535  357912 logs.go:282] 1 containers: [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6]
	I1205 21:45:58.502595  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.506926  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:58.507012  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:58.548550  357912 cri.go:89] found id: "c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:58.548587  357912 cri.go:89] found id: ""
	I1205 21:45:58.548600  357912 logs.go:282] 1 containers: [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5]
	I1205 21:45:58.548670  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.553797  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:58.553886  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:58.595353  357912 cri.go:89] found id: "963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:58.595389  357912 cri.go:89] found id: ""
	I1205 21:45:58.595401  357912 logs.go:282] 1 containers: [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d]
	I1205 21:45:58.595471  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.599759  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:58.599856  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:58.645942  357912 cri.go:89] found id: "807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:58.645979  357912 cri.go:89] found id: ""
	I1205 21:45:58.645991  357912 logs.go:282] 1 containers: [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a]
	I1205 21:45:58.646059  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.650416  357912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:58.650502  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:58.688459  357912 cri.go:89] found id: ""
	I1205 21:45:58.688491  357912 logs.go:282] 0 containers: []
	W1205 21:45:58.688504  357912 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:58.688520  357912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 21:45:58.688593  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 21:45:58.723421  357912 cri.go:89] found id: "7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:58.723454  357912 cri.go:89] found id: "37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:58.723461  357912 cri.go:89] found id: ""
	I1205 21:45:58.723471  357912 logs.go:282] 2 containers: [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa]
	I1205 21:45:58.723539  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.728441  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.732583  357912 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:58.732610  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:45:58.843724  357912 logs.go:123] Gathering logs for kube-apiserver [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828] ...
	I1205 21:45:58.843765  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:58.887836  357912 logs.go:123] Gathering logs for etcd [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7] ...
	I1205 21:45:58.887879  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:58.932909  357912 logs.go:123] Gathering logs for storage-provisioner [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4] ...
	I1205 21:45:58.932951  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:58.967559  357912 logs.go:123] Gathering logs for container status ...
	I1205 21:45:58.967613  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:59.006895  357912 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:59.006939  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:59.446512  357912 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:59.446573  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:59.518754  357912 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:59.518807  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:59.533621  357912 logs.go:123] Gathering logs for coredns [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6] ...
	I1205 21:45:59.533656  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:59.569589  357912 logs.go:123] Gathering logs for kube-scheduler [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5] ...
	I1205 21:45:59.569630  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:59.606973  357912 logs.go:123] Gathering logs for kube-proxy [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d] ...
	I1205 21:45:59.607028  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:59.651826  357912 logs.go:123] Gathering logs for kube-controller-manager [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a] ...
	I1205 21:45:59.651862  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:59.712309  357912 logs.go:123] Gathering logs for storage-provisioner [37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa] ...
	I1205 21:45:59.712353  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:58.480831  358357 out.go:235]   - Booting up control plane ...
	I1205 21:45:58.480991  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:45:58.495549  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:45:58.497073  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:45:58.498469  358357 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:45:58.501265  358357 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 21:45:59.112080  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:01.608016  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:58.985164  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:01.485724  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:02.247604  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:46:02.253579  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 200:
	ok
	I1205 21:46:02.254645  357912 api_server.go:141] control plane version: v1.31.2
	I1205 21:46:02.254674  357912 api_server.go:131] duration metric: took 3.892057076s to wait for apiserver health ...
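
[annotation] The two lines above are the apiserver health check succeeding for default-k8s-diff-port-751353: minikube polls the /healthz endpoint on https://192.168.39.106:8444 until it returns 200. A minimal Go sketch of such a polling loop; note the real check authenticates against the cluster CA, whereas this sketch skips certificate verification purely to stay short:

    package sketch

    import (
            "context"
            "crypto/tls"
            "net/http"
            "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or ctx expires,
    // the pattern behind the "Checking apiserver healthz" lines above.
    func waitForHealthz(ctx context.Context, url string) error {
            client := &http.Client{
                    Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
                    Timeout:   5 * time.Second,
            }
            for {
                    resp, err := client.Get(url)
                    if err == nil {
                            ok := resp.StatusCode == http.StatusOK
                            resp.Body.Close()
                            if ok {
                                    return nil
                            }
                    }
                    select {
                    case <-ctx.Done():
                            return ctx.Err()
                    case <-time.After(2 * time.Second):
                    }
            }
    }

Once healthz returns ok, the run proceeds to waiting for kube-system pods, the default service account, and the kubelet service, as the following lines show.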
	I1205 21:46:02.254685  357912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:46:02.254718  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:46:02.254784  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:46:02.292102  357912 cri.go:89] found id: "079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:46:02.292133  357912 cri.go:89] found id: ""
	I1205 21:46:02.292143  357912 logs.go:282] 1 containers: [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828]
	I1205 21:46:02.292210  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.297421  357912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:46:02.297522  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:46:02.333140  357912 cri.go:89] found id: "035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:46:02.333172  357912 cri.go:89] found id: ""
	I1205 21:46:02.333184  357912 logs.go:282] 1 containers: [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7]
	I1205 21:46:02.333258  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.337789  357912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:46:02.337870  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:46:02.374302  357912 cri.go:89] found id: "d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:46:02.374332  357912 cri.go:89] found id: ""
	I1205 21:46:02.374344  357912 logs.go:282] 1 containers: [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6]
	I1205 21:46:02.374411  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.378635  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:46:02.378704  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:46:02.415899  357912 cri.go:89] found id: "c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:46:02.415932  357912 cri.go:89] found id: ""
	I1205 21:46:02.415944  357912 logs.go:282] 1 containers: [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5]
	I1205 21:46:02.416010  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.421097  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:46:02.421180  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:46:02.457483  357912 cri.go:89] found id: "963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:46:02.457514  357912 cri.go:89] found id: ""
	I1205 21:46:02.457534  357912 logs.go:282] 1 containers: [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d]
	I1205 21:46:02.457606  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.462215  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:46:02.462307  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:46:02.499576  357912 cri.go:89] found id: "807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:46:02.499603  357912 cri.go:89] found id: ""
	I1205 21:46:02.499612  357912 logs.go:282] 1 containers: [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a]
	I1205 21:46:02.499681  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.504262  357912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:46:02.504341  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:46:02.539612  357912 cri.go:89] found id: ""
	I1205 21:46:02.539649  357912 logs.go:282] 0 containers: []
	W1205 21:46:02.539661  357912 logs.go:284] No container was found matching "kindnet"
	I1205 21:46:02.539668  357912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 21:46:02.539740  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 21:46:02.576436  357912 cri.go:89] found id: "7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:46:02.576464  357912 cri.go:89] found id: "37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:46:02.576468  357912 cri.go:89] found id: ""
	I1205 21:46:02.576477  357912 logs.go:282] 2 containers: [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa]
	I1205 21:46:02.576546  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.580650  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.584677  357912 logs.go:123] Gathering logs for kube-controller-manager [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a] ...
	I1205 21:46:02.584717  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:46:02.638712  357912 logs.go:123] Gathering logs for storage-provisioner [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4] ...
	I1205 21:46:02.638753  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:46:02.677464  357912 logs.go:123] Gathering logs for storage-provisioner [37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa] ...
	I1205 21:46:02.677501  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:46:02.718014  357912 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:46:02.718049  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:46:02.828314  357912 logs.go:123] Gathering logs for kube-apiserver [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828] ...
	I1205 21:46:02.828360  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:46:02.881584  357912 logs.go:123] Gathering logs for etcd [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7] ...
	I1205 21:46:02.881629  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:46:02.928082  357912 logs.go:123] Gathering logs for coredns [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6] ...
	I1205 21:46:02.928120  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:46:02.963962  357912 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:46:02.963997  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:46:03.347451  357912 logs.go:123] Gathering logs for container status ...
	I1205 21:46:03.347501  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:46:03.389942  357912 logs.go:123] Gathering logs for kubelet ...
	I1205 21:46:03.389991  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:46:03.459121  357912 logs.go:123] Gathering logs for dmesg ...
	I1205 21:46:03.459168  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:46:03.480556  357912 logs.go:123] Gathering logs for kube-scheduler [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5] ...
	I1205 21:46:03.480592  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:46:03.519661  357912 logs.go:123] Gathering logs for kube-proxy [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d] ...
	I1205 21:46:03.519699  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:46:06.063263  357912 system_pods.go:59] 8 kube-system pods found
	I1205 21:46:06.063309  357912 system_pods.go:61] "coredns-7c65d6cfc9-mll8z" [fcea0826-1093-43ce-87d0-26fb19447609] Running
	I1205 21:46:06.063317  357912 system_pods.go:61] "etcd-default-k8s-diff-port-751353" [dbade41d-2f53-45d5-aeb6-9b7df2565ef6] Running
	I1205 21:46:06.063327  357912 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751353" [b94221d1-ab02-4406-9f34-156813ddfd4b] Running
	I1205 21:46:06.063334  357912 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751353" [70669ec2-cddf-4ec9-a3d6-ee7cab8cd75c] Running
	I1205 21:46:06.063338  357912 system_pods.go:61] "kube-proxy-b4ws4" [d2620959-e3e4-4575-af26-243207a83495] Running
	I1205 21:46:06.063344  357912 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751353" [4a519b64-0ddd-425c-a8d6-de52e3508f80] Running
	I1205 21:46:06.063352  357912 system_pods.go:61] "metrics-server-6867b74b74-xb867" [6ac4cc31-ed56-44b9-9a83-76296436bc34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:46:06.063358  357912 system_pods.go:61] "storage-provisioner" [aabf9cc9-c416-4db2-97b0-23533dd76c28] Running
	I1205 21:46:06.063369  357912 system_pods.go:74] duration metric: took 3.808675994s to wait for pod list to return data ...
	I1205 21:46:06.063380  357912 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:46:06.066095  357912 default_sa.go:45] found service account: "default"
	I1205 21:46:06.066120  357912 default_sa.go:55] duration metric: took 2.733262ms for default service account to be created ...
	I1205 21:46:06.066128  357912 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:46:06.070476  357912 system_pods.go:86] 8 kube-system pods found
	I1205 21:46:06.070503  357912 system_pods.go:89] "coredns-7c65d6cfc9-mll8z" [fcea0826-1093-43ce-87d0-26fb19447609] Running
	I1205 21:46:06.070509  357912 system_pods.go:89] "etcd-default-k8s-diff-port-751353" [dbade41d-2f53-45d5-aeb6-9b7df2565ef6] Running
	I1205 21:46:06.070513  357912 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-751353" [b94221d1-ab02-4406-9f34-156813ddfd4b] Running
	I1205 21:46:06.070516  357912 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-751353" [70669ec2-cddf-4ec9-a3d6-ee7cab8cd75c] Running
	I1205 21:46:06.070520  357912 system_pods.go:89] "kube-proxy-b4ws4" [d2620959-e3e4-4575-af26-243207a83495] Running
	I1205 21:46:06.070523  357912 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-751353" [4a519b64-0ddd-425c-a8d6-de52e3508f80] Running
	I1205 21:46:06.070531  357912 system_pods.go:89] "metrics-server-6867b74b74-xb867" [6ac4cc31-ed56-44b9-9a83-76296436bc34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:46:06.070536  357912 system_pods.go:89] "storage-provisioner" [aabf9cc9-c416-4db2-97b0-23533dd76c28] Running
	I1205 21:46:06.070544  357912 system_pods.go:126] duration metric: took 4.410448ms to wait for k8s-apps to be running ...
	I1205 21:46:06.070553  357912 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:46:06.070614  357912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:46:06.085740  357912 system_svc.go:56] duration metric: took 15.17952ms WaitForService to wait for kubelet
	I1205 21:46:06.085771  357912 kubeadm.go:582] duration metric: took 4m25.799061755s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:46:06.085796  357912 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:46:06.088851  357912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:46:06.088873  357912 node_conditions.go:123] node cpu capacity is 2
	I1205 21:46:06.088887  357912 node_conditions.go:105] duration metric: took 3.087287ms to run NodePressure ...
	I1205 21:46:06.088900  357912 start.go:241] waiting for startup goroutines ...
	I1205 21:46:06.088906  357912 start.go:246] waiting for cluster config update ...
	I1205 21:46:06.088919  357912 start.go:255] writing updated cluster config ...
	I1205 21:46:06.089253  357912 ssh_runner.go:195] Run: rm -f paused
	I1205 21:46:06.141619  357912 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:46:06.143538  357912 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-751353" cluster and "default" namespace by default
	I1205 21:46:04.108628  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:06.108805  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:03.987070  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:06.484360  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:08.608534  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:11.107516  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:08.485291  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:10.984391  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:13.108040  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:15.607861  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:13.484442  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:15.484501  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:17.478619  357831 pod_ready.go:82] duration metric: took 4m0.00079651s for pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace to be "Ready" ...
	E1205 21:46:17.478648  357831 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 21:46:17.478669  357831 pod_ready.go:39] duration metric: took 4m12.054745084s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:46:17.478700  357831 kubeadm.go:597] duration metric: took 4m55.174067413s to restartPrimaryControlPlane
	W1205 21:46:17.478757  357831 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 21:46:17.478794  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:46:17.608486  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:20.107816  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:22.108413  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:24.608157  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:27.109329  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:29.608127  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:30.101360  357296 pod_ready.go:82] duration metric: took 4m0.000121506s for pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace to be "Ready" ...
	E1205 21:46:30.101395  357296 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 21:46:30.101417  357296 pod_ready.go:39] duration metric: took 4m9.523665884s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:46:30.101449  357296 kubeadm.go:597] duration metric: took 4m18.570527556s to restartPrimaryControlPlane
	W1205 21:46:30.101510  357296 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 21:46:30.101539  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
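
[annotation] The repeated pod_ready lines above are minikube polling the metrics-server pods' Ready condition; when the 4m0s extra wait expires without Ready becoming True, the run gives up on restarting the existing control plane and falls back to "kubeadm reset" followed by a fresh "kubeadm init", as the surrounding lines show. A minimal client-go sketch of that kind of readiness check (podReady is an illustrative name; minikube's own helper in pod_ready.go is not reproduced here):

    package sketch

    import (
            "context"

            corev1 "k8s.io/api/core/v1"
            metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
            "k8s.io/client-go/kubernetes"
    )

    // podReady reports whether the named pod has its Ready condition set
    // to True, the condition the pod_ready lines above keep checking.
    func podReady(ctx context.Context, c kubernetes.Interface, ns, name string) (bool, error) {
            pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                    return false, err
            }
            for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady {
                            return cond.Status == corev1.ConditionTrue, nil
                    }
            }
            return false, nil
    }

A caller would poll this in a loop with a deadline (here 4m0s) and treat expiry as the "timed out waiting ... will not retry" error logged above.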
	I1205 21:46:38.501720  358357 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 21:46:38.502250  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:46:38.502440  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:46:43.619373  357831 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.140547336s)
	I1205 21:46:43.619459  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:46:43.641806  357831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:46:43.655964  357831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:46:43.669647  357831 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:46:43.669670  357831 kubeadm.go:157] found existing configuration files:
	
	I1205 21:46:43.669718  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:46:43.681685  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:46:43.681774  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:46:43.700247  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:46:43.718376  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:46:43.718464  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:46:43.736153  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:46:43.746027  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:46:43.746101  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:46:43.756294  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:46:43.765644  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:46:43.765723  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:46:43.776011  357831 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:46:43.821666  357831 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 21:46:43.821773  357831 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:46:43.915091  357831 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:46:43.915226  357831 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:46:43.915356  357831 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 21:46:43.923305  357831 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:46:43.924984  357831 out.go:235]   - Generating certificates and keys ...
	I1205 21:46:43.925071  357831 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:46:43.925133  357831 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:46:43.925211  357831 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:46:43.925298  357831 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:46:43.925410  357831 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:46:43.925490  357831 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:46:43.925585  357831 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:46:43.925687  357831 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:46:43.925806  357831 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:46:43.925915  357831 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:46:43.925978  357831 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:46:43.926051  357831 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:46:44.035421  357831 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:46:44.451260  357831 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 21:46:44.816773  357831 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:46:44.923048  357831 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:46:45.045983  357831 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:46:45.046651  357831 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:46:45.049375  357831 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:46:43.502826  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:46:43.503045  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:46:45.051123  357831 out.go:235]   - Booting up control plane ...
	I1205 21:46:45.051270  357831 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:46:45.051407  357831 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:46:45.051498  357831 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:46:45.069011  357831 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:46:45.075630  357831 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:46:45.075703  357831 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:46:45.207048  357831 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 21:46:45.207215  357831 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 21:46:46.208858  357831 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001818315s
	I1205 21:46:46.208985  357831 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 21:46:50.711424  357831 kubeadm.go:310] [api-check] The API server is healthy after 4.502481614s
	I1205 21:46:50.725080  357831 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 21:46:50.745839  357831 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 21:46:50.774902  357831 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 21:46:50.775169  357831 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-500648 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 21:46:50.795250  357831 kubeadm.go:310] [bootstrap-token] Using token: o2vi7b.yhkmrcpvplzqpha9
	I1205 21:46:50.796742  357831 out.go:235]   - Configuring RBAC rules ...
	I1205 21:46:50.796960  357831 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 21:46:50.804445  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 21:46:50.818218  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 21:46:50.823638  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 21:46:50.827946  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 21:46:50.832291  357831 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 21:46:51.119777  357831 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 21:46:51.563750  357831 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 21:46:52.124884  357831 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 21:46:52.124922  357831 kubeadm.go:310] 
	I1205 21:46:52.125000  357831 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 21:46:52.125010  357831 kubeadm.go:310] 
	I1205 21:46:52.125089  357831 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 21:46:52.125099  357831 kubeadm.go:310] 
	I1205 21:46:52.125132  357831 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 21:46:52.125208  357831 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 21:46:52.125321  357831 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 21:46:52.125343  357831 kubeadm.go:310] 
	I1205 21:46:52.125447  357831 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 21:46:52.125475  357831 kubeadm.go:310] 
	I1205 21:46:52.125547  357831 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 21:46:52.125559  357831 kubeadm.go:310] 
	I1205 21:46:52.125641  357831 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 21:46:52.125734  357831 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 21:46:52.125806  357831 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 21:46:52.125814  357831 kubeadm.go:310] 
	I1205 21:46:52.125887  357831 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 21:46:52.126025  357831 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 21:46:52.126039  357831 kubeadm.go:310] 
	I1205 21:46:52.126132  357831 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token o2vi7b.yhkmrcpvplzqpha9 \
	I1205 21:46:52.126230  357831 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 \
	I1205 21:46:52.126254  357831 kubeadm.go:310] 	--control-plane 
	I1205 21:46:52.126269  357831 kubeadm.go:310] 
	I1205 21:46:52.126406  357831 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 21:46:52.126437  357831 kubeadm.go:310] 
	I1205 21:46:52.126524  357831 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token o2vi7b.yhkmrcpvplzqpha9 \
	I1205 21:46:52.126615  357831 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 
	I1205 21:46:52.127299  357831 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
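For context on the join commands captured above: the --discovery-token-ca-cert-hash value is the SHA-256 digest of the cluster CA certificate's public key (its DER-encoded SubjectPublicKeyInfo), which joining nodes use to pin the CA they discover through the bootstrap token. A minimal illustrative Go sketch that computes a hash of the same form is shown below; the /etc/kubernetes/pki/ca.crt path is an assumption about where the CA lives on the control-plane node, not something taken from this log, and this is not minikube's or kubeadm's own code.

    // cacerthash.go - hypothetical helper: prints "sha256:<hex>" for a CA certificate,
    // matching the format of kubeadm's --discovery-token-ca-cert-hash flag.
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/hex"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt") // assumed default kubeadm CA path
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in CA file")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Hash the DER-encoded SubjectPublicKeyInfo of the CA's public key.
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            panic(err)
        }
        sum := sha256.Sum256(spki)
        fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
    }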
	I1205 21:46:52.127360  357831 cni.go:84] Creating CNI manager for ""
	I1205 21:46:52.127380  357831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:46:52.130084  357831 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:46:52.131504  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:46:52.142489  357831 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 21:46:52.165689  357831 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:46:52.165813  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:52.165817  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-500648 minikube.k8s.io/updated_at=2024_12_05T21_46_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=no-preload-500648 minikube.k8s.io/primary=true
	I1205 21:46:52.194084  357831 ops.go:34] apiserver oom_adj: -16
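The "apiserver oom_adj: -16" value above is simply the contents of /proc/<pid>/oom_adj for the kube-apiserver process located by pgrep, per the cat command a few lines earlier; -16 tells the kernel's OOM killer to strongly prefer other processes. A rough Go equivalent of that check is sketched below (it mirrors the logged shell command and is not the ops.go implementation):

    // oomcheck.go - hypothetical sketch: read the kube-apiserver's oom_adj, like
    // `cat /proc/$(pgrep kube-apiserver)/oom_adj` in the log above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            panic(err)
        }
        pid := strings.Fields(string(out))[0] // first matching PID
        data, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            panic(err)
        }
        fmt.Printf("apiserver oom_adj: %s\n", strings.TrimSpace(string(data)))
    }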
	I1205 21:46:52.342692  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:52.843802  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:53.503222  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:46:53.503418  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:46:53.342932  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:53.843712  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:54.343785  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:54.843090  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:55.342889  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:55.843250  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:56.343676  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:56.452001  357831 kubeadm.go:1113] duration metric: took 4.286277257s to wait for elevateKubeSystemPrivileges
	I1205 21:46:56.452048  357831 kubeadm.go:394] duration metric: took 5m34.195010212s to StartCluster
	I1205 21:46:56.452076  357831 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:46:56.452204  357831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:46:56.454793  357831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:46:56.455206  357831 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:46:56.455333  357831 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:46:56.455476  357831 addons.go:69] Setting storage-provisioner=true in profile "no-preload-500648"
	I1205 21:46:56.455480  357831 config.go:182] Loaded profile config "no-preload-500648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:46:56.455502  357831 addons.go:234] Setting addon storage-provisioner=true in "no-preload-500648"
	W1205 21:46:56.455514  357831 addons.go:243] addon storage-provisioner should already be in state true
	I1205 21:46:56.455528  357831 addons.go:69] Setting default-storageclass=true in profile "no-preload-500648"
	I1205 21:46:56.455559  357831 host.go:66] Checking if "no-preload-500648" exists ...
	I1205 21:46:56.455544  357831 addons.go:69] Setting metrics-server=true in profile "no-preload-500648"
	I1205 21:46:56.455585  357831 addons.go:234] Setting addon metrics-server=true in "no-preload-500648"
	W1205 21:46:56.455599  357831 addons.go:243] addon metrics-server should already be in state true
	I1205 21:46:56.455646  357831 host.go:66] Checking if "no-preload-500648" exists ...
	I1205 21:46:56.455564  357831 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-500648"
	I1205 21:46:56.456041  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.456085  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.456090  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.456129  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.456139  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.456201  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.456945  357831 out.go:177] * Verifying Kubernetes components...
	I1205 21:46:56.462035  357831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:46:56.474102  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35145
	I1205 21:46:56.474771  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.475414  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.475442  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.475459  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36489
	I1205 21:46:56.475974  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.476137  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.476569  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.476612  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.476693  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.476706  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.477058  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.477252  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.477388  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36469
	I1205 21:46:56.477924  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.478472  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.478498  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.478910  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.479488  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.479537  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.481716  357831 addons.go:234] Setting addon default-storageclass=true in "no-preload-500648"
	W1205 21:46:56.481735  357831 addons.go:243] addon default-storageclass should already be in state true
	I1205 21:46:56.481768  357831 host.go:66] Checking if "no-preload-500648" exists ...
	I1205 21:46:56.482186  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.482241  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.497613  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
	I1205 21:46:56.499026  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.500026  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.500053  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.501992  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.502774  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.503014  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37339
	I1205 21:46:56.503560  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.504199  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.504220  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.504720  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.504930  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.506107  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:46:56.506961  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:46:56.508481  357831 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:46:56.509688  357831 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 21:46:56.428849  357296 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.327265456s)
	I1205 21:46:56.428959  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:46:56.445569  357296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:46:56.458431  357296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:46:56.478171  357296 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:46:56.478202  357296 kubeadm.go:157] found existing configuration files:
	
	I1205 21:46:56.478252  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:46:56.492246  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:46:56.492317  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:46:56.511252  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:46:56.529865  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:46:56.529993  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:46:56.542465  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:46:56.554125  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:46:56.554201  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:46:56.564805  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:46:56.574418  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:46:56.574509  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:46:56.587684  357296 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:46:56.643896  357296 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 21:46:56.643994  357296 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:46:56.758721  357296 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:46:56.758878  357296 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:46:56.759002  357296 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 21:46:56.770017  357296 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:46:56.771897  357296 out.go:235]   - Generating certificates and keys ...
	I1205 21:46:56.772014  357296 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:46:56.772097  357296 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:46:56.772211  357296 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:46:56.772312  357296 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:46:56.772411  357296 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:46:56.772485  357296 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:46:56.772569  357296 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:46:56.772701  357296 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:46:56.772839  357296 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:46:56.772978  357296 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:46:56.773044  357296 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:46:56.773122  357296 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:46:57.097605  357296 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:46:57.252307  357296 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 21:46:56.510816  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41335
	I1205 21:46:56.511503  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.511959  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.511975  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.512788  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.513412  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.513449  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.514695  357831 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:46:56.514710  357831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 21:46:56.514728  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:46:56.515562  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 21:46:56.515580  357831 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 21:46:56.515606  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:46:56.519790  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.520365  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.521033  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:46:56.521059  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.521366  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:46:56.521709  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:46:56.522251  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:46:56.522340  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:46:56.522357  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.522563  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:46:56.523091  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:46:56.523374  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:46:56.523546  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:46:56.523751  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:46:56.535368  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I1205 21:46:56.535890  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.536613  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.536640  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.537046  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.537264  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.539328  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:46:56.539566  357831 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 21:46:56.539582  357831 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 21:46:56.539601  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:46:56.543910  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.544687  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:46:56.544721  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.544779  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:46:56.544991  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:46:56.545101  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:46:56.545227  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
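The sshutil lines above construct one SSH client per target, keyed by the VM's IP, port 22, the per-machine id_rsa key, and the docker user; the subsequent Run and scp entries go over those connections. A rough, hypothetical sketch of that pattern with golang.org/x/crypto/ssh follows (the address, user, and key path are copied from this log; host-key verification is skipped only because these are throwaway test VMs, and this is not minikube's sshutil/ssh_runner code):

    // sshrun.go - hypothetical sketch: run one command on the test VM over SSH.
    package main

    import (
        "fmt"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for ephemeral test VMs only
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", "192.168.50.141:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        // Stand-in for the Run: entries in the log (e.g. systemctl, kubectl invocations).
        out, err := session.CombinedOutput("uname -a")
        if err != nil {
            panic(err)
        }
        fmt.Print(string(out))
    }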
	I1205 21:46:56.703959  357831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:46:56.727549  357831 node_ready.go:35] waiting up to 6m0s for node "no-preload-500648" to be "Ready" ...
	I1205 21:46:56.782087  357831 node_ready.go:49] node "no-preload-500648" has status "Ready":"True"
	I1205 21:46:56.782124  357831 node_ready.go:38] duration metric: took 54.531096ms for node "no-preload-500648" to be "Ready" ...
	I1205 21:46:56.782138  357831 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:46:56.826592  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 21:46:56.826630  357831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 21:46:56.828646  357831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 21:46:56.829857  357831 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace to be "Ready" ...
	I1205 21:46:56.866720  357831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:46:56.903318  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 21:46:56.903355  357831 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 21:46:57.007535  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:46:57.007573  357831 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 21:46:57.100723  357831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:46:57.134239  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.134279  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.134710  357831 main.go:141] libmachine: (no-preload-500648) DBG | Closing plugin on server side
	I1205 21:46:57.134711  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.134770  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.134785  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.134793  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.135032  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.135053  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.146695  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.146730  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.147103  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.147154  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.625311  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.625353  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.625696  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.625755  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.625793  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.625805  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.625698  357831 main.go:141] libmachine: (no-preload-500648) DBG | Closing plugin on server side
	I1205 21:46:57.626115  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.626144  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.907526  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.907557  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.907895  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.907911  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.907920  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.907927  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.908170  357831 main.go:141] libmachine: (no-preload-500648) DBG | Closing plugin on server side
	I1205 21:46:57.908202  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.908235  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.908260  357831 addons.go:475] Verifying addon metrics-server=true in "no-preload-500648"
	I1205 21:46:57.909815  357831 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 21:46:57.605825  357296 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:46:57.683035  357296 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:46:57.977494  357296 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:46:57.977852  357296 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:46:57.980442  357296 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:46:57.982293  357296 out.go:235]   - Booting up control plane ...
	I1205 21:46:57.982435  357296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:46:57.982555  357296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:46:57.982745  357296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:46:58.002995  357296 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:46:58.009140  357296 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:46:58.009256  357296 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:46:58.138869  357296 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 21:46:58.139045  357296 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 21:46:58.639981  357296 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.388842ms
	I1205 21:46:58.640142  357296 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 21:46:57.911073  357831 addons.go:510] duration metric: took 1.455746374s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 21:46:58.838170  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:00.337951  357831 pod_ready.go:93] pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:00.337987  357831 pod_ready.go:82] duration metric: took 3.508095495s for pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:00.338002  357831 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:02.345422  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:03.641918  357296 kubeadm.go:310] [api-check] The API server is healthy after 5.001977261s
	I1205 21:47:03.660781  357296 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 21:47:03.675811  357296 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 21:47:03.729810  357296 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 21:47:03.730021  357296 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-425614 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 21:47:03.746963  357296 kubeadm.go:310] [bootstrap-token] Using token: b8c9g8.26tr6ftn8ovs2kwi
	I1205 21:47:03.748213  357296 out.go:235]   - Configuring RBAC rules ...
	I1205 21:47:03.748373  357296 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 21:47:03.755934  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 21:47:03.770479  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 21:47:03.775661  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 21:47:03.783490  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 21:47:03.789562  357296 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 21:47:04.049714  357296 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 21:47:04.486306  357296 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 21:47:05.053561  357296 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 21:47:05.053590  357296 kubeadm.go:310] 
	I1205 21:47:05.053708  357296 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 21:47:05.053738  357296 kubeadm.go:310] 
	I1205 21:47:05.053846  357296 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 21:47:05.053868  357296 kubeadm.go:310] 
	I1205 21:47:05.053915  357296 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 21:47:05.053997  357296 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 21:47:05.054068  357296 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 21:47:05.054078  357296 kubeadm.go:310] 
	I1205 21:47:05.054160  357296 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 21:47:05.054170  357296 kubeadm.go:310] 
	I1205 21:47:05.054239  357296 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 21:47:05.054248  357296 kubeadm.go:310] 
	I1205 21:47:05.054338  357296 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 21:47:05.054449  357296 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 21:47:05.054543  357296 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 21:47:05.054553  357296 kubeadm.go:310] 
	I1205 21:47:05.054660  357296 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 21:47:05.054796  357296 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 21:47:05.054822  357296 kubeadm.go:310] 
	I1205 21:47:05.054933  357296 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token b8c9g8.26tr6ftn8ovs2kwi \
	I1205 21:47:05.055054  357296 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 \
	I1205 21:47:05.055090  357296 kubeadm.go:310] 	--control-plane 
	I1205 21:47:05.055098  357296 kubeadm.go:310] 
	I1205 21:47:05.055194  357296 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 21:47:05.055206  357296 kubeadm.go:310] 
	I1205 21:47:05.055314  357296 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token b8c9g8.26tr6ftn8ovs2kwi \
	I1205 21:47:05.055451  357296 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 
	I1205 21:47:05.056406  357296 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:47:05.056455  357296 cni.go:84] Creating CNI manager for ""
	I1205 21:47:05.056466  357296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:47:05.058934  357296 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:47:05.060223  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:47:05.072177  357296 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 21:47:05.094496  357296 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:47:05.094587  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:05.094625  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-425614 minikube.k8s.io/updated_at=2024_12_05T21_47_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=embed-certs-425614 minikube.k8s.io/primary=true
	I1205 21:47:05.305636  357296 ops.go:34] apiserver oom_adj: -16
	I1205 21:47:05.305777  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:05.806175  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:06.306904  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:06.806069  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:07.306356  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:04.849777  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:07.345961  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:07.847289  357831 pod_ready.go:93] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.847323  357831 pod_ready.go:82] duration metric: took 7.509312906s for pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.847334  357831 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.853980  357831 pod_ready.go:93] pod "etcd-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.854016  357831 pod_ready.go:82] duration metric: took 6.672926ms for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.854030  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.861465  357831 pod_ready.go:93] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.861502  357831 pod_ready.go:82] duration metric: took 7.461726ms for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.861517  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.867007  357831 pod_ready.go:93] pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.867035  357831 pod_ready.go:82] duration metric: took 5.509386ms for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.867048  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-98xqk" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.872882  357831 pod_ready.go:93] pod "kube-proxy-98xqk" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.872917  357831 pod_ready.go:82] duration metric: took 5.859646ms for pod "kube-proxy-98xqk" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.872932  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:08.243619  357831 pod_ready.go:93] pod "kube-scheduler-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:08.243654  357831 pod_ready.go:82] duration metric: took 370.71203ms for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:08.243666  357831 pod_ready.go:39] duration metric: took 11.461510993s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
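The pod_ready waits summarized above poll each system-critical pod until its PodReady condition reports True (or the 6m0s budget runs out). A hedged client-go sketch of an equivalent wait is below; the kubeconfig path, namespace, and pod name are taken from this log, but the code itself is illustrative rather than minikube's pod_ready.go implementation.

    // podready.go - hypothetical sketch: wait until a pod's Ready condition is True.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20053-293485/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Poll every 2s for up to 6m, mirroring the waiting budget in the log.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "etcd-no-preload-500648", metav1.GetOptions{})
                if err != nil {
                    return false, nil // transient API errors: keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }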
	I1205 21:47:08.243744  357831 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:47:08.243826  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:47:08.260473  357831 api_server.go:72] duration metric: took 11.805209892s to wait for apiserver process to appear ...
	I1205 21:47:08.260511  357831 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:47:08.260538  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:47:08.264975  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 200:
	ok
	I1205 21:47:08.266178  357831 api_server.go:141] control plane version: v1.31.2
	I1205 21:47:08.266206  357831 api_server.go:131] duration metric: took 5.687994ms to wait for apiserver health ...
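The healthz probe and "control plane version" lines above amount to a GET against /healthz followed by a server-version query. A small illustrative client-go sketch doing the same two checks (kubeconfig path copied from the log; this is not the api_server.go implementation):

    // apicheck.go - hypothetical sketch: probe /healthz and read the server version.
    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20053-293485/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // GET /healthz via the authenticated REST client; a healthy apiserver answers "ok".
        body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
        if err != nil {
            panic(err)
        }
        fmt.Printf("healthz: %s\n", body)

        // Corresponds to the "control plane version: v1.31.2" line in the log.
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("control plane version:", v.GitVersion)
    }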
	I1205 21:47:08.266214  357831 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:47:08.446775  357831 system_pods.go:59] 9 kube-system pods found
	I1205 21:47:08.446811  357831 system_pods.go:61] "coredns-7c65d6cfc9-6gw87" [5551f12d-28e2-4abc-aa12-df5e94a50df9] Running
	I1205 21:47:08.446817  357831 system_pods.go:61] "coredns-7c65d6cfc9-tmd2t" [e3e98611-66c3-4647-8870-bff5ff6ec596] Running
	I1205 21:47:08.446821  357831 system_pods.go:61] "etcd-no-preload-500648" [74521d40-5021-4ced-b38c-526c57f76ef1] Running
	I1205 21:47:08.446824  357831 system_pods.go:61] "kube-apiserver-no-preload-500648" [c145b867-1112-495e-bbe4-a95582f41190] Running
	I1205 21:47:08.446828  357831 system_pods.go:61] "kube-controller-manager-no-preload-500648" [534c1c28-2a5c-411d-8d26-1636d92ed794] Running
	I1205 21:47:08.446831  357831 system_pods.go:61] "kube-proxy-98xqk" [4b383ba3-46c2-45df-9035-270593e44817] Running
	I1205 21:47:08.446834  357831 system_pods.go:61] "kube-scheduler-no-preload-500648" [7d088cd2-8ba3-4b3b-ab99-233ff13e2710] Running
	I1205 21:47:08.446841  357831 system_pods.go:61] "metrics-server-6867b74b74-ftmzl" [c541d531-1622-4528-af4c-f6147f47e8f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:08.446881  357831 system_pods.go:61] "storage-provisioner" [62bd3876-3f92-4cc1-9e07-860628e8a746] Running
	I1205 21:47:08.446887  357831 system_pods.go:74] duration metric: took 180.667886ms to wait for pod list to return data ...
	I1205 21:47:08.446895  357831 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:47:08.643352  357831 default_sa.go:45] found service account: "default"
	I1205 21:47:08.643389  357831 default_sa.go:55] duration metric: took 196.485646ms for default service account to be created ...
	I1205 21:47:08.643405  357831 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:47:08.847094  357831 system_pods.go:86] 9 kube-system pods found
	I1205 21:47:08.847129  357831 system_pods.go:89] "coredns-7c65d6cfc9-6gw87" [5551f12d-28e2-4abc-aa12-df5e94a50df9] Running
	I1205 21:47:08.847136  357831 system_pods.go:89] "coredns-7c65d6cfc9-tmd2t" [e3e98611-66c3-4647-8870-bff5ff6ec596] Running
	I1205 21:47:08.847140  357831 system_pods.go:89] "etcd-no-preload-500648" [74521d40-5021-4ced-b38c-526c57f76ef1] Running
	I1205 21:47:08.847144  357831 system_pods.go:89] "kube-apiserver-no-preload-500648" [c145b867-1112-495e-bbe4-a95582f41190] Running
	I1205 21:47:08.847147  357831 system_pods.go:89] "kube-controller-manager-no-preload-500648" [534c1c28-2a5c-411d-8d26-1636d92ed794] Running
	I1205 21:47:08.847150  357831 system_pods.go:89] "kube-proxy-98xqk" [4b383ba3-46c2-45df-9035-270593e44817] Running
	I1205 21:47:08.847153  357831 system_pods.go:89] "kube-scheduler-no-preload-500648" [7d088cd2-8ba3-4b3b-ab99-233ff13e2710] Running
	I1205 21:47:08.847162  357831 system_pods.go:89] "metrics-server-6867b74b74-ftmzl" [c541d531-1622-4528-af4c-f6147f47e8f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:08.847168  357831 system_pods.go:89] "storage-provisioner" [62bd3876-3f92-4cc1-9e07-860628e8a746] Running
	I1205 21:47:08.847181  357831 system_pods.go:126] duration metric: took 203.767291ms to wait for k8s-apps to be running ...
	I1205 21:47:08.847195  357831 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:47:08.847250  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:47:08.862597  357831 system_svc.go:56] duration metric: took 15.382518ms WaitForService to wait for kubelet
	I1205 21:47:08.862633  357831 kubeadm.go:582] duration metric: took 12.407380073s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:47:08.862656  357831 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:47:09.043731  357831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:47:09.043757  357831 node_conditions.go:123] node cpu capacity is 2
	I1205 21:47:09.043771  357831 node_conditions.go:105] duration metric: took 181.109771ms to run NodePressure ...
	I1205 21:47:09.043784  357831 start.go:241] waiting for startup goroutines ...
	I1205 21:47:09.043791  357831 start.go:246] waiting for cluster config update ...
	I1205 21:47:09.043800  357831 start.go:255] writing updated cluster config ...
	I1205 21:47:09.044059  357831 ssh_runner.go:195] Run: rm -f paused
	I1205 21:47:09.097126  357831 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:47:09.098929  357831 out.go:177] * Done! kubectl is now configured to use "no-preload-500648" cluster and "default" namespace by default
	I1205 21:47:07.806545  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:08.306666  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:08.806027  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:09.306632  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:09.463654  357296 kubeadm.go:1113] duration metric: took 4.369155567s to wait for elevateKubeSystemPrivileges
	I1205 21:47:09.463693  357296 kubeadm.go:394] duration metric: took 4m57.985307568s to StartCluster
	I1205 21:47:09.463727  357296 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:47:09.463823  357296 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:47:09.465989  357296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:47:09.466324  357296 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:47:09.466538  357296 config.go:182] Loaded profile config "embed-certs-425614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:47:09.466462  357296 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:47:09.466593  357296 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-425614"
	I1205 21:47:09.466605  357296 addons.go:69] Setting default-storageclass=true in profile "embed-certs-425614"
	I1205 21:47:09.466623  357296 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-425614"
	I1205 21:47:09.466625  357296 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-425614"
	W1205 21:47:09.466632  357296 addons.go:243] addon storage-provisioner should already be in state true
	I1205 21:47:09.466670  357296 host.go:66] Checking if "embed-certs-425614" exists ...
	I1205 21:47:09.466598  357296 addons.go:69] Setting metrics-server=true in profile "embed-certs-425614"
	I1205 21:47:09.466700  357296 addons.go:234] Setting addon metrics-server=true in "embed-certs-425614"
	W1205 21:47:09.466713  357296 addons.go:243] addon metrics-server should already be in state true
	I1205 21:47:09.466754  357296 host.go:66] Checking if "embed-certs-425614" exists ...
	I1205 21:47:09.467117  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.467136  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.467168  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.467169  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.467193  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.467287  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.468249  357296 out.go:177] * Verifying Kubernetes components...
	I1205 21:47:09.471163  357296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:47:09.485298  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36079
	I1205 21:47:09.485497  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39781
	I1205 21:47:09.485948  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.486029  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.486534  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.486563  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.486657  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.486685  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.486742  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
	I1205 21:47:09.486978  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.487032  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.487232  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.487236  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.487624  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.487674  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.487789  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.487833  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.488214  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.488851  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.488896  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.491055  357296 addons.go:234] Setting addon default-storageclass=true in "embed-certs-425614"
	W1205 21:47:09.491080  357296 addons.go:243] addon default-storageclass should already be in state true
	I1205 21:47:09.491112  357296 host.go:66] Checking if "embed-certs-425614" exists ...
	I1205 21:47:09.491489  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.491536  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.505783  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42923
	I1205 21:47:09.506685  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.507389  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.507418  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.507849  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.508072  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.509039  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44837
	I1205 21:47:09.509662  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.510051  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:47:09.510539  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.510554  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.510945  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.511175  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.512088  357296 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 21:47:09.513011  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:47:09.513375  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 21:47:09.513394  357296 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 21:47:09.513411  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:47:09.514693  357296 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:47:09.516172  357296 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:47:09.516192  357296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 21:47:09.516216  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:47:09.516960  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.517462  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:47:09.517489  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.517621  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:47:09.517830  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45697
	I1205 21:47:09.518205  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:47:09.518478  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.519298  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.519323  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.519342  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:47:09.519547  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:47:09.520304  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.521019  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.521625  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.521698  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.522476  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:47:09.522492  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.522707  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:47:09.522891  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:47:09.523193  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:47:09.523744  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:47:09.540654  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41511
	I1205 21:47:09.541226  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.541763  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.541790  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.542269  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.542512  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.544396  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:47:09.544676  357296 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 21:47:09.544693  357296 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 21:47:09.544715  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:47:09.548238  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.548523  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:47:09.548562  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.548702  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:47:09.548931  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:47:09.549113  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:47:09.549291  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:47:09.668547  357296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:47:09.687925  357296 node_ready.go:35] waiting up to 6m0s for node "embed-certs-425614" to be "Ready" ...
	I1205 21:47:09.697641  357296 node_ready.go:49] node "embed-certs-425614" has status "Ready":"True"
	I1205 21:47:09.697666  357296 node_ready.go:38] duration metric: took 9.705064ms for node "embed-certs-425614" to be "Ready" ...
	I1205 21:47:09.697675  357296 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:47:09.704768  357296 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7sjzc" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:09.753311  357296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:47:09.793855  357296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 21:47:09.799918  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 21:47:09.799943  357296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 21:47:09.845109  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 21:47:09.845140  357296 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 21:47:09.910753  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:47:09.910784  357296 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 21:47:09.965476  357296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:47:10.269090  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269126  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.269096  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269235  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.269576  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.269640  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.269641  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.269620  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.269587  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.269745  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.269758  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269770  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.269664  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269860  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.270030  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.270047  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.270058  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.270064  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.270071  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.301524  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.301550  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.301895  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.301936  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.926349  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.926377  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.926716  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.926741  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.926752  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.926761  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.926768  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.927106  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.927155  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.927166  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.927180  357296 addons.go:475] Verifying addon metrics-server=true in "embed-certs-425614"
	I1205 21:47:10.929085  357296 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1205 21:47:10.930576  357296 addons.go:510] duration metric: took 1.464128267s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1205 21:47:11.713166  357296 pod_ready.go:93] pod "coredns-7c65d6cfc9-7sjzc" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:11.713198  357296 pod_ready.go:82] duration metric: took 2.008396953s for pod "coredns-7c65d6cfc9-7sjzc" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:11.713211  357296 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:13.503828  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:47:13.504090  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:47:13.720235  357296 pod_ready.go:103] pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:15.220057  357296 pod_ready.go:93] pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:15.220088  357296 pod_ready.go:82] duration metric: took 3.506868256s for pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.220102  357296 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.225450  357296 pod_ready.go:93] pod "etcd-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:15.225477  357296 pod_ready.go:82] duration metric: took 5.36753ms for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.225487  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.231162  357296 pod_ready.go:93] pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:15.231191  357296 pod_ready.go:82] duration metric: took 5.697176ms for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.231203  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.739452  357296 pod_ready.go:93] pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:16.739480  357296 pod_ready.go:82] duration metric: took 1.508268597s for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.739490  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k2zgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.745046  357296 pod_ready.go:93] pod "kube-proxy-k2zgx" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:16.745069  357296 pod_ready.go:82] duration metric: took 5.572779ms for pod "kube-proxy-k2zgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.745077  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:18.752726  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:19.252349  357296 pod_ready.go:93] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:19.252381  357296 pod_ready.go:82] duration metric: took 2.507297045s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:19.252391  357296 pod_ready.go:39] duration metric: took 9.554704391s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:47:19.252414  357296 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:47:19.252484  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:47:19.271589  357296 api_server.go:72] duration metric: took 9.805214037s to wait for apiserver process to appear ...
	I1205 21:47:19.271628  357296 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:47:19.271659  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:47:19.276411  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 200:
	ok
	I1205 21:47:19.277872  357296 api_server.go:141] control plane version: v1.31.2
	I1205 21:47:19.277926  357296 api_server.go:131] duration metric: took 6.2875ms to wait for apiserver health ...
	I1205 21:47:19.277941  357296 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:47:19.283899  357296 system_pods.go:59] 9 kube-system pods found
	I1205 21:47:19.283931  357296 system_pods.go:61] "coredns-7c65d6cfc9-7sjzc" [9688302a-e62f-46e6-8182-4639deb5ac5a] Running
	I1205 21:47:19.283937  357296 system_pods.go:61] "coredns-7c65d6cfc9-qfwx8" [d6411440-5d63-4ea4-b1ba-58337dd6bb10] Running
	I1205 21:47:19.283940  357296 system_pods.go:61] "etcd-embed-certs-425614" [2f0ed9d7-d48b-4d68-96bb-5e3f6de80967] Running
	I1205 21:47:19.283944  357296 system_pods.go:61] "kube-apiserver-embed-certs-425614" [86a3b6ce-6b70-4af9-bf4a-2615e7a45c3f] Running
	I1205 21:47:19.283947  357296 system_pods.go:61] "kube-controller-manager-embed-certs-425614" [589710e5-a8e3-48ed-a57a-1fbf0219359a] Running
	I1205 21:47:19.283952  357296 system_pods.go:61] "kube-proxy-k2zgx" [8e5c4695-0631-486d-9f2b-3529f6e808e9] Running
	I1205 21:47:19.283955  357296 system_pods.go:61] "kube-scheduler-embed-certs-425614" [dec1c4cb-9e21-42f0-9e03-0651fdfa35e9] Running
	I1205 21:47:19.283962  357296 system_pods.go:61] "metrics-server-6867b74b74-hghhs" [bc00b855-1cc8-45a1-92cb-b459ef0b40eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:19.283968  357296 system_pods.go:61] "storage-provisioner" [76565dbe-57b0-4d39-abb0-ca6787cd3740] Running
	I1205 21:47:19.283979  357296 system_pods.go:74] duration metric: took 6.030697ms to wait for pod list to return data ...
	I1205 21:47:19.283989  357296 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:47:19.287433  357296 default_sa.go:45] found service account: "default"
	I1205 21:47:19.287469  357296 default_sa.go:55] duration metric: took 3.461011ms for default service account to be created ...
	I1205 21:47:19.287482  357296 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:47:19.420448  357296 system_pods.go:86] 9 kube-system pods found
	I1205 21:47:19.420493  357296 system_pods.go:89] "coredns-7c65d6cfc9-7sjzc" [9688302a-e62f-46e6-8182-4639deb5ac5a] Running
	I1205 21:47:19.420503  357296 system_pods.go:89] "coredns-7c65d6cfc9-qfwx8" [d6411440-5d63-4ea4-b1ba-58337dd6bb10] Running
	I1205 21:47:19.420510  357296 system_pods.go:89] "etcd-embed-certs-425614" [2f0ed9d7-d48b-4d68-96bb-5e3f6de80967] Running
	I1205 21:47:19.420516  357296 system_pods.go:89] "kube-apiserver-embed-certs-425614" [86a3b6ce-6b70-4af9-bf4a-2615e7a45c3f] Running
	I1205 21:47:19.420531  357296 system_pods.go:89] "kube-controller-manager-embed-certs-425614" [589710e5-a8e3-48ed-a57a-1fbf0219359a] Running
	I1205 21:47:19.420536  357296 system_pods.go:89] "kube-proxy-k2zgx" [8e5c4695-0631-486d-9f2b-3529f6e808e9] Running
	I1205 21:47:19.420542  357296 system_pods.go:89] "kube-scheduler-embed-certs-425614" [dec1c4cb-9e21-42f0-9e03-0651fdfa35e9] Running
	I1205 21:47:19.420551  357296 system_pods.go:89] "metrics-server-6867b74b74-hghhs" [bc00b855-1cc8-45a1-92cb-b459ef0b40eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:19.420560  357296 system_pods.go:89] "storage-provisioner" [76565dbe-57b0-4d39-abb0-ca6787cd3740] Running
	I1205 21:47:19.420570  357296 system_pods.go:126] duration metric: took 133.080361ms to wait for k8s-apps to be running ...
	I1205 21:47:19.420581  357296 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:47:19.420640  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:47:19.436855  357296 system_svc.go:56] duration metric: took 16.264247ms WaitForService to wait for kubelet
	I1205 21:47:19.436889  357296 kubeadm.go:582] duration metric: took 9.970523712s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:47:19.436913  357296 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:47:19.617690  357296 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:47:19.617724  357296 node_conditions.go:123] node cpu capacity is 2
	I1205 21:47:19.617737  357296 node_conditions.go:105] duration metric: took 180.817811ms to run NodePressure ...
	I1205 21:47:19.617753  357296 start.go:241] waiting for startup goroutines ...
	I1205 21:47:19.617763  357296 start.go:246] waiting for cluster config update ...
	I1205 21:47:19.617782  357296 start.go:255] writing updated cluster config ...
	I1205 21:47:19.618105  357296 ssh_runner.go:195] Run: rm -f paused
	I1205 21:47:19.670657  357296 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:47:19.672596  357296 out.go:177] * Done! kubectl is now configured to use "embed-certs-425614" cluster and "default" namespace by default
	I1205 21:47:53.504952  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:47:53.505292  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:47:53.505331  358357 kubeadm.go:310] 
	I1205 21:47:53.505381  358357 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 21:47:53.505424  358357 kubeadm.go:310] 		timed out waiting for the condition
	I1205 21:47:53.505431  358357 kubeadm.go:310] 
	I1205 21:47:53.505493  358357 kubeadm.go:310] 	This error is likely caused by:
	I1205 21:47:53.505540  358357 kubeadm.go:310] 		- The kubelet is not running
	I1205 21:47:53.505687  358357 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 21:47:53.505696  358357 kubeadm.go:310] 
	I1205 21:47:53.505840  358357 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 21:47:53.505918  358357 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 21:47:53.505969  358357 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 21:47:53.505978  358357 kubeadm.go:310] 
	I1205 21:47:53.506113  358357 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 21:47:53.506224  358357 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 21:47:53.506234  358357 kubeadm.go:310] 
	I1205 21:47:53.506378  358357 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 21:47:53.506488  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 21:47:53.506579  358357 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 21:47:53.506669  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 21:47:53.506680  358357 kubeadm.go:310] 
	I1205 21:47:53.507133  358357 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:47:53.507293  358357 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 21:47:53.507399  358357 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1205 21:47:53.507583  358357 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1205 21:47:53.507635  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:47:58.918917  358357 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.411249531s)
	I1205 21:47:58.919047  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:47:58.933824  358357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:47:58.943937  358357 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:47:58.943961  358357 kubeadm.go:157] found existing configuration files:
	
	I1205 21:47:58.944019  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:47:58.953302  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:47:58.953376  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:47:58.963401  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:47:58.973241  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:47:58.973342  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:47:58.982980  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:47:58.992301  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:47:58.992376  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:47:59.002794  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:47:59.012679  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:47:59.012749  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:47:59.023775  358357 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:47:59.094520  358357 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 21:47:59.094668  358357 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:47:59.233248  358357 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:47:59.233420  358357 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:47:59.233569  358357 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 21:47:59.418344  358357 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:47:59.420333  358357 out.go:235]   - Generating certificates and keys ...
	I1205 21:47:59.420467  358357 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:47:59.420553  358357 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:47:59.422458  358357 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:47:59.422606  358357 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:47:59.422717  358357 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:47:59.422802  358357 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:47:59.422889  358357 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:47:59.422998  358357 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:47:59.423099  358357 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:47:59.423222  358357 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:47:59.423283  358357 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:47:59.423376  358357 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:47:59.599862  358357 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:47:59.763783  358357 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:47:59.854070  358357 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:48:00.213384  358357 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:48:00.228512  358357 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:48:00.229454  358357 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:48:00.229505  358357 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:48:00.369826  358357 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:48:00.371919  358357 out.go:235]   - Booting up control plane ...
	I1205 21:48:00.372059  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:48:00.382814  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:48:00.384284  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:48:00.385894  358357 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:48:00.388267  358357 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 21:48:40.389474  358357 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 21:48:40.389611  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:48:40.389883  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:48:45.390223  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:48:45.390529  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:48:55.390550  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:48:55.390784  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:49:15.391410  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:49:15.391608  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:49:55.392061  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:49:55.392321  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:49:55.392332  358357 kubeadm.go:310] 
	I1205 21:49:55.392403  358357 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 21:49:55.392464  358357 kubeadm.go:310] 		timed out waiting for the condition
	I1205 21:49:55.392485  358357 kubeadm.go:310] 
	I1205 21:49:55.392538  358357 kubeadm.go:310] 	This error is likely caused by:
	I1205 21:49:55.392587  358357 kubeadm.go:310] 		- The kubelet is not running
	I1205 21:49:55.392729  358357 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 21:49:55.392761  358357 kubeadm.go:310] 
	I1205 21:49:55.392882  358357 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 21:49:55.392933  358357 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 21:49:55.393025  358357 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 21:49:55.393057  358357 kubeadm.go:310] 
	I1205 21:49:55.393186  358357 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 21:49:55.393293  358357 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 21:49:55.393303  358357 kubeadm.go:310] 
	I1205 21:49:55.393453  358357 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 21:49:55.393602  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 21:49:55.393728  358357 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 21:49:55.393827  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 21:49:55.393841  358357 kubeadm.go:310] 
	I1205 21:49:55.394194  358357 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:49:55.394317  358357 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 21:49:55.394473  358357 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1205 21:49:55.394527  358357 kubeadm.go:394] duration metric: took 8m1.54013905s to StartCluster
	I1205 21:49:55.394598  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:49:55.394662  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:49:55.433172  358357 cri.go:89] found id: ""
	I1205 21:49:55.433203  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.433212  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:49:55.433219  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:49:55.433279  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:49:55.468595  358357 cri.go:89] found id: ""
	I1205 21:49:55.468631  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.468644  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:49:55.468652  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:49:55.468747  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:49:55.505657  358357 cri.go:89] found id: ""
	I1205 21:49:55.505692  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.505701  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:49:55.505709  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:49:55.505776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:49:55.542189  358357 cri.go:89] found id: ""
	I1205 21:49:55.542221  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.542230  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:49:55.542236  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:49:55.542303  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:49:55.575752  358357 cri.go:89] found id: ""
	I1205 21:49:55.575796  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.575810  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:49:55.575818  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:49:55.575878  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:49:55.611845  358357 cri.go:89] found id: ""
	I1205 21:49:55.611884  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.611899  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:49:55.611912  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:49:55.611999  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:49:55.650475  358357 cri.go:89] found id: ""
	I1205 21:49:55.650511  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.650524  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:49:55.650533  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:49:55.650605  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:49:55.684770  358357 cri.go:89] found id: ""
	I1205 21:49:55.684801  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.684811  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:49:55.684823  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:49:55.684839  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:49:55.752292  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:49:55.752331  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:49:55.752351  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:49:55.869601  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:49:55.869647  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:49:55.909724  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:49:55.909761  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:49:55.959825  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:49:55.959865  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 21:49:55.973692  358357 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1205 21:49:55.973759  358357 out.go:270] * 
	W1205 21:49:55.973866  358357 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 21:49:55.973884  358357 out.go:270] * 
	W1205 21:49:55.974814  358357 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 21:49:55.977939  358357 out.go:201] 
	W1205 21:49:55.979226  358357 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 21:49:55.979261  358357 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 21:49:55.979285  358357 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 21:49:55.980590  358357 out.go:201] 
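The failure above ends with minikube's own hint about a kubelet cgroup-driver mismatch (the "Suggestion" line). A minimal, hypothetical retry of the affected start following that hint might look like the sketch below; the profile name is an assumption (it is not shown next to the error in this excerpt), while the kvm2 driver, CRI-O runtime, and Kubernetes v1.20.0 come from the log itself:

  # hypothetical retry: force the kubelet cgroup driver to systemd, as suggested above
  minikube start -p <profile> \
    --driver=kvm2 \
    --container-runtime=crio \
    --kubernetes-version=v1.20.0 \
    --extra-config=kubelet.cgroup-driver=systemd

If the kubelet still refuses to come up, 'journalctl -xeu kubelet' on the node (also suggested in the output above) is the place to confirm whether the cgroup driver was actually the problem.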
	
	
	==> CRI-O <==
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.318977694Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=18beceda-d82a-4696-a72f-4167ce55a1d2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.319449528Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e62bb2ea85199b40c1d637b1ed55f60113cf19b84b544a3e975dc2e04534f05,PodSandboxId:9d003a914dd5ba8e6709447a7ccdaaf70b846524f6e258c6e0da7e7d53ece3d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733434908604786336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7f734192-b575-49f2-8488-2e08e14d83e5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6,PodSandboxId:56e5a64605dbe821b5fbc7f5e704b2c25b5b0e11eca7fd6b0c83c6d8e098b94e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733434906241921791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mll8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcea0826-1093-43ce-87d0-26fb19447609,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4,PodSandboxId:8b1373a23f8337dee45a5b2207d04ce77cf26eb15c4b105698c92af8cb947d96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733434899182159017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: aabf9cc9-c416-4db2-97b0-23533dd76c28,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa,PodSandboxId:8b1373a23f8337dee45a5b2207d04ce77cf26eb15c4b105698c92af8cb947d96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733434898578875848,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: aabf9cc9-c416-4db2-97b0-23533dd76c28,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d,PodSandboxId:20ab7bb2040edb1d011d37784aea1661af162cbffe7317c581160c1ad1a07bf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733434898496525033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4ws4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2620959-e3e4-4575-af26
-243207a83495,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7,PodSandboxId:bb297f7199c472d8bf106e49137c92af9aed17c24f0a5e8bd46734144e2f9a10,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733434894007594885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a781f16b4aef7bf5ac0b18a81d3fe56,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5,PodSandboxId:a1686895467fd0475c3f9bbc904ee56c4014382b540049631331b334ac3a4b22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733434894002322433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb1306bd7c52f126431147d34dc0a3b9,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828,PodSandboxId:e119343ecc82aa38d4b5ded6ae3d75aafe40c2bf2179394792f6e97254caebad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733434893996921176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbfce6421a68ed116afc3485728da556,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a,PodSandboxId:09aebd00aa0803b6848384a4ec3e4cf3726e41ada8ab1a226bc538cb9c4bd0c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733434893992267903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea0710b5375ef6778cfbcb0941880
cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=18beceda-d82a-4696-a72f-4167ce55a1d2 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.335073000Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=1552b0fc-7ee6-4085-974b-09724988dc4a name=/runtime.v1.RuntimeService/Status
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.335160989Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=1552b0fc-7ee6-4085-974b-09724988dc4a name=/runtime.v1.RuntimeService/Status
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.357568901Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=e89f1faf-188e-49b0-adbc-62a3244e7e7a name=/runtime.v1.RuntimeService/Version
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.357684280Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e89f1faf-188e-49b0-adbc-62a3244e7e7a name=/runtime.v1.RuntimeService/Version
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.358856597Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=963d6931-b607-4bd1-a2d1-4b1f0c9e174c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.359255544Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435708359230793,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=963d6931-b607-4bd1-a2d1-4b1f0c9e174c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.359833985Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81ae5fac-08d4-4986-8392-e4a23f1ec921 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.359923093Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81ae5fac-08d4-4986-8392-e4a23f1ec921 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.360113554Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e62bb2ea85199b40c1d637b1ed55f60113cf19b84b544a3e975dc2e04534f05,PodSandboxId:9d003a914dd5ba8e6709447a7ccdaaf70b846524f6e258c6e0da7e7d53ece3d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733434908604786336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7f734192-b575-49f2-8488-2e08e14d83e5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6,PodSandboxId:56e5a64605dbe821b5fbc7f5e704b2c25b5b0e11eca7fd6b0c83c6d8e098b94e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733434906241921791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mll8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcea0826-1093-43ce-87d0-26fb19447609,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4,PodSandboxId:8b1373a23f8337dee45a5b2207d04ce77cf26eb15c4b105698c92af8cb947d96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733434899182159017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: aabf9cc9-c416-4db2-97b0-23533dd76c28,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa,PodSandboxId:8b1373a23f8337dee45a5b2207d04ce77cf26eb15c4b105698c92af8cb947d96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733434898578875848,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: aabf9cc9-c416-4db2-97b0-23533dd76c28,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d,PodSandboxId:20ab7bb2040edb1d011d37784aea1661af162cbffe7317c581160c1ad1a07bf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733434898496525033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4ws4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2620959-e3e4-4575-af26
-243207a83495,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7,PodSandboxId:bb297f7199c472d8bf106e49137c92af9aed17c24f0a5e8bd46734144e2f9a10,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733434894007594885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a781f16b4aef7bf5ac0b18a81d3fe56,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5,PodSandboxId:a1686895467fd0475c3f9bbc904ee56c4014382b540049631331b334ac3a4b22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733434894002322433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb1306bd7c52f126431147d34dc0a3b9,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828,PodSandboxId:e119343ecc82aa38d4b5ded6ae3d75aafe40c2bf2179394792f6e97254caebad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733434893996921176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbfce6421a68ed116afc3485728da556,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a,PodSandboxId:09aebd00aa0803b6848384a4ec3e4cf3726e41ada8ab1a226bc538cb9c4bd0c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733434893992267903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea0710b5375ef6778cfbcb0941880
cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81ae5fac-08d4-4986-8392-e4a23f1ec921 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.397510320Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=08da044c-16c3-4f87-aacf-0c0ae208f896 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.397590627Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=08da044c-16c3-4f87-aacf-0c0ae208f896 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.398846132Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=898efb7f-cc4f-4d31-9f2c-9e4d3efc39fd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.399242186Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435708399217582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=898efb7f-cc4f-4d31-9f2c-9e4d3efc39fd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.400100560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cad2e222-4f1e-47c9-8284-bf714ffdecf9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.400177280Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cad2e222-4f1e-47c9-8284-bf714ffdecf9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.400373728Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e62bb2ea85199b40c1d637b1ed55f60113cf19b84b544a3e975dc2e04534f05,PodSandboxId:9d003a914dd5ba8e6709447a7ccdaaf70b846524f6e258c6e0da7e7d53ece3d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733434908604786336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7f734192-b575-49f2-8488-2e08e14d83e5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6,PodSandboxId:56e5a64605dbe821b5fbc7f5e704b2c25b5b0e11eca7fd6b0c83c6d8e098b94e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733434906241921791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mll8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcea0826-1093-43ce-87d0-26fb19447609,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4,PodSandboxId:8b1373a23f8337dee45a5b2207d04ce77cf26eb15c4b105698c92af8cb947d96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733434899182159017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: aabf9cc9-c416-4db2-97b0-23533dd76c28,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa,PodSandboxId:8b1373a23f8337dee45a5b2207d04ce77cf26eb15c4b105698c92af8cb947d96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733434898578875848,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: aabf9cc9-c416-4db2-97b0-23533dd76c28,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d,PodSandboxId:20ab7bb2040edb1d011d37784aea1661af162cbffe7317c581160c1ad1a07bf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733434898496525033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4ws4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2620959-e3e4-4575-af26
-243207a83495,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7,PodSandboxId:bb297f7199c472d8bf106e49137c92af9aed17c24f0a5e8bd46734144e2f9a10,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733434894007594885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a781f16b4aef7bf5ac0b18a81d3fe56,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5,PodSandboxId:a1686895467fd0475c3f9bbc904ee56c4014382b540049631331b334ac3a4b22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733434894002322433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb1306bd7c52f126431147d34dc0a3b9,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828,PodSandboxId:e119343ecc82aa38d4b5ded6ae3d75aafe40c2bf2179394792f6e97254caebad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733434893996921176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbfce6421a68ed116afc3485728da556,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a,PodSandboxId:09aebd00aa0803b6848384a4ec3e4cf3726e41ada8ab1a226bc538cb9c4bd0c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733434893992267903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea0710b5375ef6778cfbcb0941880
cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cad2e222-4f1e-47c9-8284-bf714ffdecf9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.433234793Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=89e8337e-a6e0-46bc-9649-78e886b41039 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.433321247Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=89e8337e-a6e0-46bc-9649-78e886b41039 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.434374296Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c3f5f838-a137-4c83-a5e9-464473e5b8fd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.434881394Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435708434856336,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c3f5f838-a137-4c83-a5e9-464473e5b8fd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.435460476Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bd002765-21e4-436c-95b4-5680e1209437 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.435527204Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bd002765-21e4-436c-95b4-5680e1209437 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:55:08 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 21:55:08.435760824Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e62bb2ea85199b40c1d637b1ed55f60113cf19b84b544a3e975dc2e04534f05,PodSandboxId:9d003a914dd5ba8e6709447a7ccdaaf70b846524f6e258c6e0da7e7d53ece3d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733434908604786336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7f734192-b575-49f2-8488-2e08e14d83e5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6,PodSandboxId:56e5a64605dbe821b5fbc7f5e704b2c25b5b0e11eca7fd6b0c83c6d8e098b94e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733434906241921791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mll8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcea0826-1093-43ce-87d0-26fb19447609,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4,PodSandboxId:8b1373a23f8337dee45a5b2207d04ce77cf26eb15c4b105698c92af8cb947d96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733434899182159017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: aabf9cc9-c416-4db2-97b0-23533dd76c28,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa,PodSandboxId:8b1373a23f8337dee45a5b2207d04ce77cf26eb15c4b105698c92af8cb947d96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733434898578875848,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: aabf9cc9-c416-4db2-97b0-23533dd76c28,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d,PodSandboxId:20ab7bb2040edb1d011d37784aea1661af162cbffe7317c581160c1ad1a07bf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733434898496525033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4ws4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2620959-e3e4-4575-af26
-243207a83495,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7,PodSandboxId:bb297f7199c472d8bf106e49137c92af9aed17c24f0a5e8bd46734144e2f9a10,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733434894007594885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a781f16b4aef7bf5ac0b18a81d3fe56,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5,PodSandboxId:a1686895467fd0475c3f9bbc904ee56c4014382b540049631331b334ac3a4b22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733434894002322433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb1306bd7c52f126431147d34dc0a3b9,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828,PodSandboxId:e119343ecc82aa38d4b5ded6ae3d75aafe40c2bf2179394792f6e97254caebad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733434893996921176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbfce6421a68ed116afc3485728da556,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a,PodSandboxId:09aebd00aa0803b6848384a4ec3e4cf3726e41ada8ab1a226bc538cb9c4bd0c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733434893992267903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea0710b5375ef6778cfbcb0941880
cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bd002765-21e4-436c-95b4-5680e1209437 name=/runtime.v1.RuntimeService/ListContainers
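The container status summary that follows reflects the same state reported by the CRI-O ListContainers debug entries above. As a rough cross-check, an equivalent view can be pulled directly from the node with crictl, along the lines of the kubeadm troubleshooting hint earlier in the log; this assumes the minikube profile name matches the node name shown in the CRI-O entries (default-k8s-diff-port-751353):

  # list all containers on the node through the CRI-O socket used in the log above
  minikube ssh -p default-k8s-diff-port-751353 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a
  # once a container of interest is identified, inspect its logs by ID
  minikube ssh -p default-k8s-diff-port-751353 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs <container-id>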
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4e62bb2ea8519       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   9d003a914dd5b       busybox
	d4ac290ffeedd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      13 minutes ago      Running             coredns                   1                   56e5a64605dbe       coredns-7c65d6cfc9-mll8z
	7befce79ea834       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Running             storage-provisioner       2                   8b1373a23f833       storage-provisioner
	37f783b4a3402       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   8b1373a23f833       storage-provisioner
	963fc5fe0f7ee       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      13 minutes ago      Running             kube-proxy                1                   20ab7bb2040ed       kube-proxy-b4ws4
	035df011d5399       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      13 minutes ago      Running             etcd                      1                   bb297f7199c47       etcd-default-k8s-diff-port-751353
	c0ddf1d7f97da       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      13 minutes ago      Running             kube-scheduler            1                   a1686895467fd       kube-scheduler-default-k8s-diff-port-751353
	079fc145d3515       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      13 minutes ago      Running             kube-apiserver            1                   e119343ecc82a       kube-apiserver-default-k8s-diff-port-751353
	807e6454204d4       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      13 minutes ago      Running             kube-controller-manager   1                   09aebd00aa080       kube-controller-manager-default-k8s-diff-port-751353
	
	
	==> coredns [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:54520 - 38740 "HINFO IN 3167697831049979112.9156028796695991744. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023726293s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-751353
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-751353
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=default-k8s-diff-port-751353
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T21_34_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 21:33:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-751353
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 21:55:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 21:52:19 +0000   Thu, 05 Dec 2024 21:33:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 21:52:19 +0000   Thu, 05 Dec 2024 21:33:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 21:52:19 +0000   Thu, 05 Dec 2024 21:33:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 21:52:19 +0000   Thu, 05 Dec 2024 21:41:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.106
	  Hostname:    default-k8s-diff-port-751353
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bf8fa31002994e3bbd1b630b66bd1bb0
	  System UUID:                bf8fa310-0299-4e3b-bd1b-630b66bd1bb0
	  Boot ID:                    70c62d7e-3965-465e-be09-c9d4335900ea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7c65d6cfc9-mll8z                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	  kube-system                 etcd-default-k8s-diff-port-751353                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-751353             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-751353    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-b4ws4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-751353             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-6867b74b74-xb867                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-751353 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-751353 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-751353 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-751353 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-751353 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-751353 status is now: NodeHasSufficientPID
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeReady                21m                kubelet          Node default-k8s-diff-port-751353 status is now: NodeReady
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-751353 event: Registered Node default-k8s-diff-port-751353 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-751353 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-751353 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-751353 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-751353 event: Registered Node default-k8s-diff-port-751353 in Controller
	
	
	==> dmesg <==
	[Dec 5 21:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050400] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039583] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.032560] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.156592] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.574549] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.883258] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.072479] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070065] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.226647] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.133826] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +0.301970] systemd-fstab-generator[717]: Ignoring "noauto" option for root device
	[  +4.175176] systemd-fstab-generator[811]: Ignoring "noauto" option for root device
	[  +2.237863] systemd-fstab-generator[931]: Ignoring "noauto" option for root device
	[  +0.066644] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.548187] kauditd_printk_skb: 69 callbacks suppressed
	[  +1.928214] systemd-fstab-generator[1609]: Ignoring "noauto" option for root device
	[  +3.772023] kauditd_printk_skb: 69 callbacks suppressed
	
	
	==> etcd [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7] <==
	{"level":"info","ts":"2024-12-05T21:41:52.869814Z","caller":"traceutil/trace.go:171","msg":"trace[2138423579] linearizableReadLoop","detail":"{readStateIndex:656; appliedIndex:655; }","duration":"338.46893ms","start":"2024-12-05T21:41:52.531332Z","end":"2024-12-05T21:41:52.869801Z","steps":["trace[2138423579] 'read index received'  (duration: 338.2962ms)","trace[2138423579] 'applied index is now lower than readState.Index'  (duration: 172.205µs)"],"step_count":2}
	{"level":"warn","ts":"2024-12-05T21:41:52.870044Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"338.687466ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-751353\" ","response":"range_response_count:1 size:5919"}
	{"level":"info","ts":"2024-12-05T21:41:52.870193Z","caller":"traceutil/trace.go:171","msg":"trace[1773702547] range","detail":"{range_begin:/registry/pods/kube-system/etcd-default-k8s-diff-port-751353; range_end:; response_count:1; response_revision:620; }","duration":"338.850445ms","start":"2024-12-05T21:41:52.531328Z","end":"2024-12-05T21:41:52.870178Z","steps":["trace[1773702547] 'agreement among raft nodes before linearized reading'  (duration: 338.643802ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T21:41:52.870257Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T21:41:52.531292Z","time spent":"338.950875ms","remote":"127.0.0.1:47084","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":5941,"request content":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-751353\" "}
	{"level":"info","ts":"2024-12-05T21:41:52.870341Z","caller":"traceutil/trace.go:171","msg":"trace[1254439089] transaction","detail":"{read_only:false; response_revision:620; number_of_response:1; }","duration":"395.188164ms","start":"2024-12-05T21:41:52.475141Z","end":"2024-12-05T21:41:52.870329Z","steps":["trace[1254439089] 'process raft request'  (duration: 394.52932ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T21:41:52.870882Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T21:41:52.475120Z","time spent":"395.263353ms","remote":"127.0.0.1:47084","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5904,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-751353\" mod_revision:494 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-751353\" value_size:5836 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-751353\" > >"}
	{"level":"warn","ts":"2024-12-05T21:41:53.833912Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"553.022329ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T21:41:53.833989Z","caller":"traceutil/trace.go:171","msg":"trace[433622564] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:620; }","duration":"553.110878ms","start":"2024-12-05T21:41:53.280868Z","end":"2024-12-05T21:41:53.833979Z","steps":["trace[433622564] 'range keys from in-memory index tree'  (duration: 553.010776ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T21:41:53.834183Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"416.931158ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10938279879332373158 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-751353\" mod_revision:620 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-751353\" value_size:5664 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-751353\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-12-05T21:41:53.834250Z","caller":"traceutil/trace.go:171","msg":"trace[939979748] linearizableReadLoop","detail":"{readStateIndex:657; appliedIndex:656; }","duration":"802.641956ms","start":"2024-12-05T21:41:53.031594Z","end":"2024-12-05T21:41:53.834236Z","steps":["trace[939979748] 'read index received'  (duration: 385.460106ms)","trace[939979748] 'applied index is now lower than readState.Index'  (duration: 417.181117ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-05T21:41:53.834529Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"802.946603ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-751353\" ","response":"range_response_count:1 size:5747"}
	{"level":"info","ts":"2024-12-05T21:41:53.834726Z","caller":"traceutil/trace.go:171","msg":"trace[1282633600] range","detail":"{range_begin:/registry/pods/kube-system/etcd-default-k8s-diff-port-751353; range_end:; response_count:1; response_revision:621; }","duration":"803.141339ms","start":"2024-12-05T21:41:53.031571Z","end":"2024-12-05T21:41:53.834712Z","steps":["trace[1282633600] 'agreement among raft nodes before linearized reading'  (duration: 802.884908ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T21:41:53.834805Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T21:41:53.031490Z","time spent":"803.302461ms","remote":"127.0.0.1:47084","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":1,"response size":5769,"request content":"key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-751353\" "}
	{"level":"info","ts":"2024-12-05T21:41:53.834965Z","caller":"traceutil/trace.go:171","msg":"trace[1427262695] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"948.115695ms","start":"2024-12-05T21:41:52.886838Z","end":"2024-12-05T21:41:53.834954Z","steps":["trace[1427262695] 'process raft request'  (duration: 530.269593ms)","trace[1427262695] 'compare'  (duration: 416.656084ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-05T21:41:53.835456Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T21:41:52.886807Z","time spent":"948.591999ms","remote":"127.0.0.1:47084","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5732,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-751353\" mod_revision:620 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-751353\" value_size:5664 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-751353\" > >"}
	{"level":"warn","ts":"2024-12-05T21:41:53.835139Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.646199ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1128"}
	{"level":"info","ts":"2024-12-05T21:41:53.838846Z","caller":"traceutil/trace.go:171","msg":"trace[1540693106] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:621; }","duration":"187.34144ms","start":"2024-12-05T21:41:53.651485Z","end":"2024-12-05T21:41:53.838827Z","steps":["trace[1540693106] 'agreement among raft nodes before linearized reading'  (duration: 183.628736ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T21:41:54.355251Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"387.353452ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10938279879332373161 > lease_revoke:<id:17cc9398c568a125>","response":"size:27"}
	{"level":"info","ts":"2024-12-05T21:41:54.355391Z","caller":"traceutil/trace.go:171","msg":"trace[1499642662] linearizableReadLoop","detail":"{readStateIndex:658; appliedIndex:657; }","duration":"514.177155ms","start":"2024-12-05T21:41:53.841195Z","end":"2024-12-05T21:41:54.355372Z","steps":["trace[1499642662] 'read index received'  (duration: 126.614015ms)","trace[1499642662] 'applied index is now lower than readState.Index'  (duration: 387.559553ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-05T21:41:54.355691Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"514.424998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-751353\" ","response":"range_response_count:1 size:5537"}
	{"level":"info","ts":"2024-12-05T21:41:54.355749Z","caller":"traceutil/trace.go:171","msg":"trace[1702041016] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-751353; range_end:; response_count:1; response_revision:621; }","duration":"514.543024ms","start":"2024-12-05T21:41:53.841192Z","end":"2024-12-05T21:41:54.355735Z","steps":["trace[1702041016] 'agreement among raft nodes before linearized reading'  (duration: 514.332762ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T21:41:54.355787Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T21:41:53.841159Z","time spent":"514.617056ms","remote":"127.0.0.1:47068","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5559,"request content":"key:\"/registry/minions/default-k8s-diff-port-751353\" "}
	{"level":"info","ts":"2024-12-05T21:51:36.024039Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":874}
	{"level":"info","ts":"2024-12-05T21:51:36.033821Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":874,"took":"9.602281ms","hash":2051551245,"current-db-size-bytes":2801664,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2801664,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-12-05T21:51:36.033875Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2051551245,"revision":874,"compact-revision":-1}
	
	
	==> kernel <==
	 21:55:08 up 13 min,  0 users,  load average: 0.00, 0.05, 0.08
	Linux default-k8s-diff-port-751353 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1205 21:51:38.309185       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:51:38.309312       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1205 21:51:38.310262       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:51:38.311388       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:52:38.311215       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:52:38.311439       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1205 21:52:38.311573       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:52:38.311684       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1205 21:52:38.313387       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:52:38.313499       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:54:38.314444       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:54:38.314595       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1205 21:54:38.314526       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:54:38.314787       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 21:54:38.315845       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:54:38.315910       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a] <==
	E1205 21:49:40.730832       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:49:41.403392       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:50:10.735916       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:50:11.411150       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:50:40.742191       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:50:41.418909       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:51:10.747690       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:51:11.426432       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:51:40.753873       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:51:41.432850       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:52:10.759943       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:52:11.439930       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 21:52:19.405584       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-751353"
	E1205 21:52:40.767035       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:52:41.447882       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 21:52:56.126242       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="98.096µs"
	I1205 21:53:10.126326       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="356.873µs"
	E1205 21:53:10.773510       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:53:11.454736       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:53:40.780016       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:53:41.462353       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:54:10.787086       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:54:11.469451       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:54:40.792870       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:54:41.476144       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 21:41:38.780116       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 21:41:38.794762       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.106"]
	E1205 21:41:38.795855       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 21:41:38.862879       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 21:41:38.862966       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 21:41:38.863000       1 server_linux.go:169] "Using iptables Proxier"
	I1205 21:41:38.870238       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 21:41:38.870494       1 server.go:483] "Version info" version="v1.31.2"
	I1205 21:41:38.870518       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 21:41:38.876992       1 config.go:199] "Starting service config controller"
	I1205 21:41:38.877046       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 21:41:38.877092       1 config.go:105] "Starting endpoint slice config controller"
	I1205 21:41:38.877109       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 21:41:38.885711       1 config.go:328] "Starting node config controller"
	I1205 21:41:38.885745       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 21:41:38.977290       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 21:41:38.977303       1 shared_informer.go:320] Caches are synced for service config
	I1205 21:41:38.986437       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5] <==
	I1205 21:41:35.076157       1 serving.go:386] Generated self-signed cert in-memory
	W1205 21:41:37.252913       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 21:41:37.252993       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 21:41:37.253003       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 21:41:37.253009       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 21:41:37.310536       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1205 21:41:37.310577       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 21:41:37.317842       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1205 21:41:37.320706       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 21:41:37.320773       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 21:41:37.320810       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 21:41:37.421738       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 21:54:03 default-k8s-diff-port-751353 kubelet[938]: E1205 21:54:03.259745     938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435643259159075,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:54:03 default-k8s-diff-port-751353 kubelet[938]: E1205 21:54:03.259797     938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435643259159075,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:54:04 default-k8s-diff-port-751353 kubelet[938]: E1205 21:54:04.112884     938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xb867" podUID="6ac4cc31-ed56-44b9-9a83-76296436bc34"
	Dec 05 21:54:13 default-k8s-diff-port-751353 kubelet[938]: E1205 21:54:13.261924     938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435653260794097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:54:13 default-k8s-diff-port-751353 kubelet[938]: E1205 21:54:13.262513     938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435653260794097,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:54:16 default-k8s-diff-port-751353 kubelet[938]: E1205 21:54:16.112490     938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xb867" podUID="6ac4cc31-ed56-44b9-9a83-76296436bc34"
	Dec 05 21:54:23 default-k8s-diff-port-751353 kubelet[938]: E1205 21:54:23.265824     938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435663264852757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:54:23 default-k8s-diff-port-751353 kubelet[938]: E1205 21:54:23.265879     938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435663264852757,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:54:27 default-k8s-diff-port-751353 kubelet[938]: E1205 21:54:27.112792     938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xb867" podUID="6ac4cc31-ed56-44b9-9a83-76296436bc34"
	Dec 05 21:54:33 default-k8s-diff-port-751353 kubelet[938]: E1205 21:54:33.126824     938 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 21:54:33 default-k8s-diff-port-751353 kubelet[938]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 21:54:33 default-k8s-diff-port-751353 kubelet[938]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 21:54:33 default-k8s-diff-port-751353 kubelet[938]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 21:54:33 default-k8s-diff-port-751353 kubelet[938]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 21:54:33 default-k8s-diff-port-751353 kubelet[938]: E1205 21:54:33.268035     938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435673267183696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:54:33 default-k8s-diff-port-751353 kubelet[938]: E1205 21:54:33.268159     938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435673267183696,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:54:40 default-k8s-diff-port-751353 kubelet[938]: E1205 21:54:40.112014     938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xb867" podUID="6ac4cc31-ed56-44b9-9a83-76296436bc34"
	Dec 05 21:54:43 default-k8s-diff-port-751353 kubelet[938]: E1205 21:54:43.270838     938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435683270380925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:54:43 default-k8s-diff-port-751353 kubelet[938]: E1205 21:54:43.271707     938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435683270380925,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:54:53 default-k8s-diff-port-751353 kubelet[938]: E1205 21:54:53.114066     938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xb867" podUID="6ac4cc31-ed56-44b9-9a83-76296436bc34"
	Dec 05 21:54:53 default-k8s-diff-port-751353 kubelet[938]: E1205 21:54:53.273092     938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435693272771180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:54:53 default-k8s-diff-port-751353 kubelet[938]: E1205 21:54:53.273147     938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435693272771180,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:55:03 default-k8s-diff-port-751353 kubelet[938]: E1205 21:55:03.274698     938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435703274056708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:55:03 default-k8s-diff-port-751353 kubelet[938]: E1205 21:55:03.275062     938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435703274056708,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:55:07 default-k8s-diff-port-751353 kubelet[938]: E1205 21:55:07.113077     938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xb867" podUID="6ac4cc31-ed56-44b9-9a83-76296436bc34"
	
	
	==> storage-provisioner [37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa] <==
	I1205 21:41:38.716764       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1205 21:41:38.718483       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4] <==
	I1205 21:41:39.255930       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 21:41:39.280708       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 21:41:39.280788       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 21:41:56.871247       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 21:41:56.871456       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-751353_b5b8d612-8e7c-4fc5-b985-dbe7d0086386!
	I1205 21:41:56.873150       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c4dde28b-76d4-40f2-9ca4-c00393ecc5f1", APIVersion:"v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-751353_b5b8d612-8e7c-4fc5-b985-dbe7d0086386 became leader
	I1205 21:41:56.972693       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-751353_b5b8d612-8e7c-4fc5-b985-dbe7d0086386!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-751353 -n default-k8s-diff-port-751353
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-751353 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-xb867
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-751353 describe pod metrics-server-6867b74b74-xb867
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-751353 describe pod metrics-server-6867b74b74-xb867: exit status 1 (99.226065ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-xb867" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-751353 describe pod metrics-server-6867b74b74-xb867: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.47s)

x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.5s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-500648 -n no-preload-500648
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-05 21:56:09.669057435 +0000 UTC m=+5811.643650097
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-500648 -n no-preload-500648
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-500648 logs -n 25
E1205 21:56:10.574174  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-500648 logs -n 25: (2.179616339s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-279893 sudo cat                              | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:32 UTC | 05 Dec 24 21:33 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo cat                              | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo                                  | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo                                  | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo                                  | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo find                             | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo crio                             | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-279893                                       | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	| start   | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:34 UTC |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-425614            | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-425614                                  | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-500648             | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC | 05 Dec 24 21:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-500648                                   | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-751353  | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC | 05 Dec 24 21:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC |                     |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-425614                 | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-425614                                  | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC | 05 Dec 24 21:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-601806        | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-500648                  | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-751353       | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-500648                                   | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC | 05 Dec 24 21:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:37 UTC | 05 Dec 24 21:46 UTC |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-601806                              | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC | 05 Dec 24 21:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-601806             | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC | 05 Dec 24 21:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-601806                              | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 21:38:15
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 21:38:15.563725  358357 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:38:15.563882  358357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:38:15.563898  358357 out.go:358] Setting ErrFile to fd 2...
	I1205 21:38:15.563903  358357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:38:15.564128  358357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 21:38:15.564728  358357 out.go:352] Setting JSON to false
	I1205 21:38:15.565806  358357 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":15644,"bootTime":1733419052,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 21:38:15.565873  358357 start.go:139] virtualization: kvm guest
	I1205 21:38:15.568026  358357 out.go:177] * [old-k8s-version-601806] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 21:38:15.569552  358357 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 21:38:15.569581  358357 notify.go:220] Checking for updates...
	I1205 21:38:15.572033  358357 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 21:38:15.573317  358357 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:38:15.574664  358357 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:38:15.576173  358357 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 21:38:15.577543  358357 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 21:38:15.579554  358357 config.go:182] Loaded profile config "old-k8s-version-601806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 21:38:15.580169  358357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:38:15.580230  358357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:38:15.596741  358357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I1205 21:38:15.597295  358357 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:38:15.598015  358357 main.go:141] libmachine: Using API Version  1
	I1205 21:38:15.598046  358357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:38:15.598475  358357 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:38:15.598711  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:38:15.600576  358357 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1205 21:38:15.602043  358357 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 21:38:15.602381  358357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:38:15.602484  358357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:38:15.618162  358357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36049
	I1205 21:38:15.618929  358357 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:38:15.620894  358357 main.go:141] libmachine: Using API Version  1
	I1205 21:38:15.620922  358357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:38:15.621462  358357 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:38:15.621705  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:38:15.660038  358357 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 21:38:15.661273  358357 start.go:297] selected driver: kvm2
	I1205 21:38:15.661287  358357 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:38:15.661413  358357 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 21:38:15.662304  358357 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:38:15.662396  358357 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 21:38:15.678948  358357 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 21:38:15.679372  358357 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:38:15.679406  358357 cni.go:84] Creating CNI manager for ""
	I1205 21:38:15.679443  358357 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:38:15.679479  358357 start.go:340] cluster config:
	{Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:38:15.679592  358357 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:38:15.681409  358357 out.go:177] * Starting "old-k8s-version-601806" primary control-plane node in "old-k8s-version-601806" cluster
	I1205 21:38:12.362239  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:15.434192  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:15.682585  358357 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 21:38:15.682646  358357 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1205 21:38:15.682657  358357 cache.go:56] Caching tarball of preloaded images
	I1205 21:38:15.682742  358357 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 21:38:15.682752  358357 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1205 21:38:15.682873  358357 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/config.json ...
	I1205 21:38:15.683066  358357 start.go:360] acquireMachinesLock for old-k8s-version-601806: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:38:21.514200  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:24.586255  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:30.666205  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:33.738246  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:39.818259  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:42.890268  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:48.970246  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:52.042258  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:58.122192  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:01.194261  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:07.274293  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:10.346237  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:16.426260  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:19.498251  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:25.578215  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:28.650182  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:34.730233  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:37.802242  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:43.882204  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:46.954259  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:53.034221  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:56.106303  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:02.186236  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:05.258270  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:11.338291  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:14.410261  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:20.490214  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:23.562239  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:29.642246  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:32.714183  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:38.794265  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:41.866189  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:44.870871  357831 start.go:364] duration metric: took 3m51.861097835s to acquireMachinesLock for "no-preload-500648"
	I1205 21:40:44.870962  357831 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:40:44.870974  357831 fix.go:54] fixHost starting: 
	I1205 21:40:44.871374  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:40:44.871425  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:40:44.889484  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34557
	I1205 21:40:44.890105  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:40:44.890780  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:40:44.890815  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:40:44.891254  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:40:44.891517  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:40:44.891744  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:40:44.893857  357831 fix.go:112] recreateIfNeeded on no-preload-500648: state=Stopped err=<nil>
	I1205 21:40:44.893927  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	W1205 21:40:44.894116  357831 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:40:44.897039  357831 out.go:177] * Restarting existing kvm2 VM for "no-preload-500648" ...
	I1205 21:40:44.868152  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:40:44.868210  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:40:44.868588  357296 buildroot.go:166] provisioning hostname "embed-certs-425614"
	I1205 21:40:44.868618  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:40:44.868823  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:40:44.870659  357296 machine.go:96] duration metric: took 4m37.397267419s to provisionDockerMachine
	I1205 21:40:44.870718  357296 fix.go:56] duration metric: took 4m37.422503321s for fixHost
	I1205 21:40:44.870724  357296 start.go:83] releasing machines lock for "embed-certs-425614", held for 4m37.422523792s
	W1205 21:40:44.870750  357296 start.go:714] error starting host: provision: host is not running
	W1205 21:40:44.870880  357296 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1205 21:40:44.870891  357296 start.go:729] Will try again in 5 seconds ...
	I1205 21:40:44.898504  357831 main.go:141] libmachine: (no-preload-500648) Calling .Start
	I1205 21:40:44.898749  357831 main.go:141] libmachine: (no-preload-500648) Ensuring networks are active...
	I1205 21:40:44.899604  357831 main.go:141] libmachine: (no-preload-500648) Ensuring network default is active
	I1205 21:40:44.899998  357831 main.go:141] libmachine: (no-preload-500648) Ensuring network mk-no-preload-500648 is active
	I1205 21:40:44.900472  357831 main.go:141] libmachine: (no-preload-500648) Getting domain xml...
	I1205 21:40:44.901210  357831 main.go:141] libmachine: (no-preload-500648) Creating domain...
	I1205 21:40:46.138820  357831 main.go:141] libmachine: (no-preload-500648) Waiting to get IP...
	I1205 21:40:46.139714  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:46.140107  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:46.140214  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:46.140113  358875 retry.go:31] will retry after 297.599003ms: waiting for machine to come up
	I1205 21:40:46.439848  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:46.440360  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:46.440421  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:46.440242  358875 retry.go:31] will retry after 243.531701ms: waiting for machine to come up
	I1205 21:40:46.685793  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:46.686251  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:46.686282  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:46.686199  358875 retry.go:31] will retry after 395.19149ms: waiting for machine to come up
	I1205 21:40:47.082735  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:47.083192  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:47.083216  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:47.083150  358875 retry.go:31] will retry after 591.156988ms: waiting for machine to come up
	I1205 21:40:47.675935  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:47.676381  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:47.676414  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:47.676308  358875 retry.go:31] will retry after 706.616299ms: waiting for machine to come up
	I1205 21:40:49.872843  357296 start.go:360] acquireMachinesLock for embed-certs-425614: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:40:48.384278  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:48.384666  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:48.384696  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:48.384611  358875 retry.go:31] will retry after 859.724415ms: waiting for machine to come up
	I1205 21:40:49.245895  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:49.246294  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:49.246323  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:49.246239  358875 retry.go:31] will retry after 915.790977ms: waiting for machine to come up
	I1205 21:40:50.164042  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:50.164570  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:50.164600  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:50.164514  358875 retry.go:31] will retry after 1.283530276s: waiting for machine to come up
	I1205 21:40:51.450256  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:51.450664  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:51.450692  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:51.450595  358875 retry.go:31] will retry after 1.347371269s: waiting for machine to come up
	I1205 21:40:52.800263  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:52.800702  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:52.800732  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:52.800637  358875 retry.go:31] will retry after 1.982593955s: waiting for machine to come up
	I1205 21:40:54.785977  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:54.786644  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:54.786705  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:54.786525  358875 retry.go:31] will retry after 2.41669899s: waiting for machine to come up
	I1205 21:40:57.205989  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:57.206403  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:57.206428  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:57.206335  358875 retry.go:31] will retry after 2.992148692s: waiting for machine to come up
	I1205 21:41:00.200589  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:00.201093  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:41:00.201139  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:41:00.201028  358875 retry.go:31] will retry after 3.716252757s: waiting for machine to come up
	I1205 21:41:05.171227  357912 start.go:364] duration metric: took 4m4.735770407s to acquireMachinesLock for "default-k8s-diff-port-751353"
	I1205 21:41:05.171353  357912 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:41:05.171382  357912 fix.go:54] fixHost starting: 
	I1205 21:41:05.172206  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:05.172294  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:05.190413  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35791
	I1205 21:41:05.190911  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:05.191473  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:05.191497  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:05.191841  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:05.192052  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:05.192199  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:05.193839  357912 fix.go:112] recreateIfNeeded on default-k8s-diff-port-751353: state=Stopped err=<nil>
	I1205 21:41:05.193867  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	W1205 21:41:05.194042  357912 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:41:05.196358  357912 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-751353" ...
	I1205 21:41:05.197683  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Start
	I1205 21:41:05.197958  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Ensuring networks are active...
	I1205 21:41:05.198819  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Ensuring network default is active
	I1205 21:41:05.199225  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Ensuring network mk-default-k8s-diff-port-751353 is active
	I1205 21:41:05.199740  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Getting domain xml...
	I1205 21:41:05.200544  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Creating domain...
	I1205 21:41:03.922338  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.922889  357831 main.go:141] libmachine: (no-preload-500648) Found IP for machine: 192.168.50.141
	I1205 21:41:03.922911  357831 main.go:141] libmachine: (no-preload-500648) Reserving static IP address...
	I1205 21:41:03.922924  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has current primary IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.923476  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "no-preload-500648", mac: "52:54:00:98:f0:c5", ip: "192.168.50.141"} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:03.923500  357831 main.go:141] libmachine: (no-preload-500648) DBG | skip adding static IP to network mk-no-preload-500648 - found existing host DHCP lease matching {name: "no-preload-500648", mac: "52:54:00:98:f0:c5", ip: "192.168.50.141"}
	I1205 21:41:03.923514  357831 main.go:141] libmachine: (no-preload-500648) DBG | Getting to WaitForSSH function...
	I1205 21:41:03.923583  357831 main.go:141] libmachine: (no-preload-500648) Reserved static IP address: 192.168.50.141
	I1205 21:41:03.923617  357831 main.go:141] libmachine: (no-preload-500648) Waiting for SSH to be available...
	I1205 21:41:03.926008  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.926299  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:03.926327  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.926443  357831 main.go:141] libmachine: (no-preload-500648) DBG | Using SSH client type: external
	I1205 21:41:03.926467  357831 main.go:141] libmachine: (no-preload-500648) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa (-rw-------)
	I1205 21:41:03.926542  357831 main.go:141] libmachine: (no-preload-500648) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:41:03.926559  357831 main.go:141] libmachine: (no-preload-500648) DBG | About to run SSH command:
	I1205 21:41:03.926582  357831 main.go:141] libmachine: (no-preload-500648) DBG | exit 0
	I1205 21:41:04.054310  357831 main.go:141] libmachine: (no-preload-500648) DBG | SSH cmd err, output: <nil>: 
	I1205 21:41:04.054735  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetConfigRaw
	I1205 21:41:04.055421  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:04.058393  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.058823  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.058857  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.059115  357831 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/config.json ...
	I1205 21:41:04.059357  357831 machine.go:93] provisionDockerMachine start ...
	I1205 21:41:04.059381  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:04.059624  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.061812  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.062139  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.062169  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.062325  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.062530  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.062698  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.062811  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.062947  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.063206  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.063219  357831 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:41:04.174592  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:41:04.174635  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetMachineName
	I1205 21:41:04.174947  357831 buildroot.go:166] provisioning hostname "no-preload-500648"
	I1205 21:41:04.174982  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetMachineName
	I1205 21:41:04.175220  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.178267  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.178732  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.178766  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.178975  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.179191  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.179356  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.179518  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.179683  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.179864  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.179878  357831 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-500648 && echo "no-preload-500648" | sudo tee /etc/hostname
	I1205 21:41:04.304650  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-500648
	
	I1205 21:41:04.304688  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.307897  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.308212  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.308255  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.308441  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.308703  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.308864  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.308994  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.309273  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.309538  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.309570  357831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-500648' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-500648/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-500648' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:41:04.432111  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:41:04.432158  357831 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:41:04.432186  357831 buildroot.go:174] setting up certificates
	I1205 21:41:04.432198  357831 provision.go:84] configureAuth start
	I1205 21:41:04.432214  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetMachineName
	I1205 21:41:04.432569  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:04.435826  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.436298  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.436348  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.436535  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.439004  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.439384  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.439412  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.439632  357831 provision.go:143] copyHostCerts
	I1205 21:41:04.439708  357831 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:41:04.439736  357831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:41:04.439826  357831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:41:04.439951  357831 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:41:04.439963  357831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:41:04.440006  357831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:41:04.440090  357831 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:41:04.440100  357831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:41:04.440133  357831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:41:04.440206  357831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.no-preload-500648 san=[127.0.0.1 192.168.50.141 localhost minikube no-preload-500648]
	I1205 21:41:04.514253  357831 provision.go:177] copyRemoteCerts
	I1205 21:41:04.514330  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:41:04.514372  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.517413  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.517811  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.517845  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.518067  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.518361  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.518597  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.518773  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:04.611530  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:41:04.637201  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 21:41:04.661934  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:41:04.686618  357831 provision.go:87] duration metric: took 254.404192ms to configureAuth
	I1205 21:41:04.686654  357831 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:41:04.686834  357831 config.go:182] Loaded profile config "no-preload-500648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:41:04.686921  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.690232  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.690677  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.690709  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.690907  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.691145  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.691456  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.691605  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.691811  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.692003  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.692020  357831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:41:04.922195  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:41:04.922228  357831 machine.go:96] duration metric: took 862.853823ms to provisionDockerMachine
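The provisioning step just above writes the extra runtime flags into /etc/sysconfig/crio.minikube over SSH and restarts cri-o. Below is a minimal Go sketch of that step; it shells out to the system ssh client with the guest IP and key path seen in the log, and the helper name, the use of exec.Command, and passwordless sudo on the guest are all assumptions for illustration, not minikube's actual code.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // provisionCRIOOptions writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube
    // on the guest and restarts cri-o, mirroring the SSH command shown in the log.
    func provisionCRIOOptions(host, keyPath, opts string) error {
    	remote := fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf '%%s\\n' \"CRIO_MINIKUBE_OPTIONS='%s'\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio", opts)
    	out, err := exec.Command("ssh", "-i", keyPath, "docker@"+host, remote).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("provisioning cri-o options failed: %v: %s", err, out)
    	}
    	return nil
    }

    func main() {
    	// Host, key path and flag value copied from the log lines above.
    	err := provisionCRIOOptions("192.168.50.141",
    		"/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa",
    		"--insecure-registry 10.96.0.0/12 ")
    	if err != nil {
    		fmt.Println(err)
    	}
    }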
	I1205 21:41:04.922245  357831 start.go:293] postStartSetup for "no-preload-500648" (driver="kvm2")
	I1205 21:41:04.922275  357831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:41:04.922296  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:04.922662  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:41:04.922698  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.925928  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.926441  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.926474  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.926628  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.926810  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.926928  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.927024  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:05.013131  357831 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:41:05.017518  357831 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:41:05.017552  357831 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:41:05.017635  357831 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:41:05.017713  357831 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:41:05.017814  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:41:05.027935  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:05.052403  357831 start.go:296] duration metric: took 130.117347ms for postStartSetup
	I1205 21:41:05.052469  357831 fix.go:56] duration metric: took 20.181495969s for fixHost
	I1205 21:41:05.052493  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:05.055902  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.056329  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.056381  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.056574  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:05.056832  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.056993  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.057144  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:05.057327  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:05.057534  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:05.057548  357831 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:41:05.171012  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434865.146406477
	
	I1205 21:41:05.171041  357831 fix.go:216] guest clock: 1733434865.146406477
	I1205 21:41:05.171051  357831 fix.go:229] Guest: 2024-12-05 21:41:05.146406477 +0000 UTC Remote: 2024-12-05 21:41:05.052473548 +0000 UTC m=+252.199777630 (delta=93.932929ms)
	I1205 21:41:05.171075  357831 fix.go:200] guest clock delta is within tolerance: 93.932929ms
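fix.go compares the guest's `date +%s.%N` output against the host clock and only resyncs when the delta exceeds a tolerance; here the delta was about 94ms. A self-contained sketch of that comparison follows, reusing the timestamp string from the log above; the function name and the 1s tolerance are assumptions.

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // clockDelta parses the "seconds.nanoseconds" output of `date +%s.%N` run on the
    // guest and returns its absolute distance from the local clock.
    func clockDelta(guest string, local time.Time) (time.Duration, error) {
    	parts := strings.SplitN(strings.TrimSpace(guest), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return 0, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		// Pad/truncate the fractional part to exactly 9 digits (nanoseconds).
    		frac := (parts[1] + "000000000")[:9]
    		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
    			return 0, err
    		}
    	}
    	delta := local.Sub(time.Unix(sec, nsec))
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta, nil
    }

    func main() {
    	// Guest timestamp taken from the log line above; 1s tolerance is an assumption.
    	delta, err := clockDelta("1733434865.146406477", time.Now())
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta < time.Second)
    }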
	I1205 21:41:05.171087  357831 start.go:83] releasing machines lock for "no-preload-500648", held for 20.300173371s
	I1205 21:41:05.171115  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.171462  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:05.174267  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.174716  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.174747  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.174893  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.175500  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.175738  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.175856  357831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:41:05.175910  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:05.176016  357831 ssh_runner.go:195] Run: cat /version.json
	I1205 21:41:05.176053  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:05.179122  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179281  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179567  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.179595  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179620  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.179637  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179785  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:05.179924  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:05.180016  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.180163  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.180167  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:05.180365  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:05.180376  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:05.180564  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:05.286502  357831 ssh_runner.go:195] Run: systemctl --version
	I1205 21:41:05.292793  357831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:41:05.436742  357831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:41:05.442389  357831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:41:05.442473  357831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:41:05.460161  357831 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
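Any pre-existing bridge/podman CNI configuration under /etc/cni/net.d is renamed with a .mk_disabled suffix so it cannot conflict with the CNI config minikube writes later; above, 87-podman-bridge.conflist gets disabled. A small stdlib sketch of that rename pass (directory, name patterns and suffix taken from the log; the helper itself is illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"path/filepath"
    	"strings"
    )

    // disableBridgeCNIConfigs renames bridge/podman CNI configs in dir by appending
    // ".mk_disabled", like the find/mv invocation in the log above.
    func disableBridgeCNIConfigs(dir string) ([]string, error) {
    	entries, err := os.ReadDir(dir)
    	if err != nil {
    		return nil, err
    	}
    	var disabled []string
    	for _, e := range entries {
    		name := e.Name()
    		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
    			continue
    		}
    		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
    			src := filepath.Join(dir, name)
    			if err := os.Rename(src, src+".mk_disabled"); err != nil {
    				return disabled, err
    			}
    			disabled = append(disabled, src)
    		}
    	}
    	return disabled, nil
    }

    func main() {
    	// Needs write access to /etc/cni/net.d; point it at a scratch dir to try it out.
    	disabled, err := disableBridgeCNIConfigs("/etc/cni/net.d")
    	fmt.Println(disabled, err)
    }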
	I1205 21:41:05.460198  357831 start.go:495] detecting cgroup driver to use...
	I1205 21:41:05.460287  357831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:41:05.476989  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:41:05.490676  357831 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:41:05.490747  357831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:41:05.504437  357831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:41:05.518314  357831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:41:05.649582  357831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:41:05.831575  357831 docker.go:233] disabling docker service ...
	I1205 21:41:05.831650  357831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:41:05.851482  357831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:41:05.865266  357831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:41:05.981194  357831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:41:06.107386  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:41:06.125290  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:41:06.143817  357831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:41:06.143919  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.154167  357831 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:41:06.154259  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.165640  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.177412  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.190668  357831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:41:06.201712  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.213455  357831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.232565  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
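The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, force the cgroupfs cgroup manager, reset conmon_cgroup, and open unprivileged ports through default_sysctls. A hedged local equivalent of the first two substitutions using Go's regexp package (paths and values mirror the log; this is a sketch, not minikube's implementation):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // rewriteCrioConf applies the same kind of in-place edits as the sed commands above:
    // pin the pause image and force the given cgroup manager.
    func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
    		ReplaceAll(data, []byte(fmt.Sprintf(`pause_image = %q`, pauseImage)))
    	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
    		ReplaceAll(out, []byte(fmt.Sprintf(`cgroup_manager = %q`, cgroupManager)))
    	return os.WriteFile(path, out, 0o644)
    }

    func main() {
    	// Path and values copied from the log; writing the real file needs root.
    	err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
    		"registry.k8s.io/pause:3.10", "cgroupfs")
    	if err != nil {
    		fmt.Println(err)
    	}
    }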
	I1205 21:41:06.243746  357831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:41:06.253809  357831 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:41:06.253878  357831 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:41:06.267573  357831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
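The sysctl probe fails until the br_netfilter module is loaded, so the code falls back to modprobe and then enables IPv4 forwarding, exactly the sequence logged above. A compact sketch of that check-then-load pattern (Linux with passwordless sudo assumed; purely illustrative):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ensureBridgeNetfilter mirrors the fallback above: if the bridge-nf-call-iptables
    // sysctl cannot be read, load br_netfilter, then enable IPv4 forwarding.
    func ensureBridgeNetfilter() error {
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		// Module not loaded yet; this is the expected first-boot path in the log.
    		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
    			return fmt.Errorf("modprobe br_netfilter: %w", err)
    		}
    	}
    	return exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run()
    }

    func main() {
    	if err := ensureBridgeNetfilter(); err != nil {
    		fmt.Println(err)
    	}
    }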
	I1205 21:41:06.278706  357831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:06.408370  357831 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:41:06.511878  357831 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:41:06.511959  357831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:41:06.519295  357831 start.go:563] Will wait 60s for crictl version
	I1205 21:41:06.519366  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.523477  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:41:06.562056  357831 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
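The block above is the raw `crictl version` output that minikube waits up to 60s for before treating the CRI as ready. A quick sketch of pulling the runtime fields out of that output (the sample string is copied from the log; the parsing helper is an assumption):

    package main

    import (
    	"bufio"
    	"fmt"
    	"strings"
    )

    // parseCrictlVersion extracts key/value pairs from `crictl version` output like the
    // block logged above (Version, RuntimeName, RuntimeVersion, RuntimeApiVersion).
    func parseCrictlVersion(out string) map[string]string {
    	fields := map[string]string{}
    	sc := bufio.NewScanner(strings.NewReader(out))
    	for sc.Scan() {
    		k, v, ok := strings.Cut(sc.Text(), ":")
    		if !ok {
    			continue
    		}
    		fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
    	}
    	return fields
    }

    func main() {
    	// Sample taken verbatim from the log above.
    	out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.29.1\nRuntimeApiVersion:  v1\n"
    	v := parseCrictlVersion(out)
    	fmt.Printf("runtime %s %s (api %s)\n", v["RuntimeName"], v["RuntimeVersion"], v["RuntimeApiVersion"])
    }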
	I1205 21:41:06.562151  357831 ssh_runner.go:195] Run: crio --version
	I1205 21:41:06.595493  357831 ssh_runner.go:195] Run: crio --version
	I1205 21:41:06.630320  357831 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 21:41:06.631796  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:06.634988  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:06.635416  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:06.635453  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:06.635693  357831 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1205 21:41:06.639948  357831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
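The /etc/hosts update is made idempotent: grep first checks for the entry, and if it is missing, any stale line is filtered out before the new ip/host pair is appended and the file copied back. A stdlib sketch of the same filter-and-append, writing to a scratch path so it can run unprivileged (helper name illustrative):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHostsEntry drops any existing line for host and appends "ip\thost",
    // mirroring the grep -v / echo pipeline in the log above.
    func upsertHostsEntry(path, ip, host string) error {
    	data, err := os.ReadFile(path)
    	if err != nil && !os.IsNotExist(err) {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(string(data), "\n") {
    		if line != "" && !strings.HasSuffix(line, "\t"+host) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+host)
    	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
    	// Scratch copy so the example does not need root; minikube targets /etc/hosts.
    	if err := upsertHostsEntry("/tmp/hosts.example", "192.168.50.1", "host.minikube.internal"); err != nil {
    		fmt.Println(err)
    	}
    }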
	I1205 21:41:06.653650  357831 kubeadm.go:883] updating cluster {Name:no-preload-500648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-500648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:41:06.653798  357831 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:41:06.653869  357831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:06.695865  357831 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 21:41:06.695900  357831 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 21:41:06.695957  357831 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:06.695970  357831 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:06.696005  357831 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.696049  357831 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1205 21:41:06.696060  357831 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:06.696087  357831 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.696061  357831 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.696462  357831 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.697982  357831 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.698019  357831 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:06.698016  357831 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.697992  357831 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.698111  357831 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.698133  357831 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:06.698286  357831 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1205 21:41:06.698501  357831 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:06.856605  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.856650  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.869847  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.872242  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.874561  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:06.907303  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:06.920063  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1205 21:41:06.925542  357831 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1205 21:41:06.925592  357831 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.925656  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.959677  357831 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1205 21:41:06.959738  357831 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.959799  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.984175  357831 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1205 21:41:06.984219  357831 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.984267  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.995251  357831 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1205 21:41:06.995393  357831 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.995547  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:07.017878  357831 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1205 21:41:07.017952  357831 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.018014  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:07.027087  357831 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1205 21:41:07.027151  357831 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.027206  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:07.138510  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:07.138629  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.138509  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:07.138696  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.138577  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:07.138579  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:07.260832  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:07.269638  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:07.269766  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.269837  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:07.276535  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.276611  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:07.344944  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:07.369612  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:07.410660  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:07.410709  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.410815  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:07.410817  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.463332  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1205 21:41:07.463470  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1205 21:41:07.491657  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1205 21:41:07.491795  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 21:41:07.531121  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1205 21:41:07.531150  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1205 21:41:07.531256  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1205 21:41:07.531270  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 21:41:07.531292  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1205 21:41:07.531341  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1205 21:41:07.531342  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 21:41:07.531258  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 21:41:07.531400  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1205 21:41:07.531416  357831 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1205 21:41:07.531452  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1205 21:41:07.531419  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1205 21:41:07.543194  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1205 21:41:07.543221  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1205 21:41:07.543329  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1205 21:41:07.545197  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1205 21:41:07.599581  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:06.512338  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting to get IP...
	I1205 21:41:06.513323  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.513795  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.513870  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:06.513764  359021 retry.go:31] will retry after 193.323182ms: waiting for machine to come up
	I1205 21:41:06.709218  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.709633  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.709667  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:06.709597  359021 retry.go:31] will retry after 359.664637ms: waiting for machine to come up
	I1205 21:41:07.071234  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.071649  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.071677  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:07.071621  359021 retry.go:31] will retry after 315.296814ms: waiting for machine to come up
	I1205 21:41:07.388219  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.388755  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.388788  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:07.388697  359021 retry.go:31] will retry after 607.823337ms: waiting for machine to come up
	I1205 21:41:07.998529  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.998987  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.999021  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:07.998924  359021 retry.go:31] will retry after 603.533135ms: waiting for machine to come up
	I1205 21:41:08.603895  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:08.604547  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:08.604592  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:08.604458  359021 retry.go:31] will retry after 584.642321ms: waiting for machine to come up
	I1205 21:41:09.190331  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:09.190835  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:09.190866  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:09.190778  359021 retry.go:31] will retry after 848.646132ms: waiting for machine to come up
	I1205 21:41:10.041037  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:10.041702  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:10.041734  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:10.041632  359021 retry.go:31] will retry after 1.229215485s: waiting for machine to come up
	I1205 21:41:11.124436  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.592950613s)
	I1205 21:41:11.124474  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1205 21:41:11.124504  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 21:41:11.124501  357831 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.524878217s)
	I1205 21:41:11.124562  357831 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 21:41:11.124586  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 21:41:11.124617  357831 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:11.124667  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:11.272549  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:11.273204  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:11.273239  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:11.273141  359021 retry.go:31] will retry after 1.721028781s: waiting for machine to come up
	I1205 21:41:12.996546  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:12.996988  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:12.997015  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:12.996932  359021 retry.go:31] will retry after 1.620428313s: waiting for machine to come up
	I1205 21:41:14.619426  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:14.619986  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:14.620021  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:14.619928  359021 retry.go:31] will retry after 1.936504566s: waiting for machine to come up
	I1205 21:41:13.485236  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.36061811s)
	I1205 21:41:13.485285  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1205 21:41:13.485298  357831 ssh_runner.go:235] Completed: which crictl: (2.360608199s)
	I1205 21:41:13.485314  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 21:41:13.485383  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:13.485450  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 21:41:15.556836  357831 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.071414459s)
	I1205 21:41:15.556906  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.071416348s)
	I1205 21:41:15.556935  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:15.556939  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1205 21:41:15.557031  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 21:41:15.557069  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 21:41:15.595094  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:17.533984  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.97688139s)
	I1205 21:41:17.534026  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1205 21:41:17.534061  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 21:41:17.534168  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 21:41:17.534059  357831 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.938925021s)
	I1205 21:41:17.534239  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 21:41:17.534355  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 21:41:16.559037  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:16.559676  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:16.559711  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:16.559616  359021 retry.go:31] will retry after 2.748634113s: waiting for machine to come up
	I1205 21:41:19.309762  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:19.310292  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:19.310325  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:19.310235  359021 retry.go:31] will retry after 4.490589015s: waiting for machine to come up
	I1205 21:41:18.991714  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.45750646s)
	I1205 21:41:18.991760  357831 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.457382547s)
	I1205 21:41:18.991769  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1205 21:41:18.991788  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1205 21:41:18.991796  357831 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 21:41:18.991871  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1205 21:41:19.652114  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 21:41:19.652153  357831 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1205 21:41:19.652207  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1205 21:41:21.430659  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.778424474s)
	I1205 21:41:21.430699  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1205 21:41:21.430728  357831 cache_images.go:123] Successfully loaded all cached images
	I1205 21:41:21.430737  357831 cache_images.go:92] duration metric: took 14.734820486s to LoadCachedImages
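Because this is the no-preload profile, each control-plane image is inspected in the runtime, removed if the expected hash is absent, and then loaded from the per-image tarball cache under /var/lib/minikube/images; the whole pass took about 14.7s above. A condensed sketch of one iteration of that loop, shelling out to podman and crictl the same way the log does (image name and tarball path taken from the log, error handling trimmed):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // loadCachedImage mirrors one iteration of the loop above: if the image is not
    // already present in the runtime, remove any stale copy and load the cached tarball.
    func loadCachedImage(image, tarball string) error {
    	// "podman image inspect" succeeds only when the image already exists.
    	if exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil {
    		return nil // already loaded, nothing to transfer
    	}
    	_ = exec.Command("sudo", "/usr/bin/crictl", "rmi", image).Run() // ignore "not found"
    	out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
    	}
    	return nil
    }

    func main() {
    	// One example pair from the log; the real loop covers all eight cached images.
    	err := loadCachedImage("registry.k8s.io/etcd:3.5.15-0", "/var/lib/minikube/images/etcd_3.5.15-0")
    	if err != nil {
    		fmt.Println(err)
    	}
    }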
	I1205 21:41:21.430748  357831 kubeadm.go:934] updating node { 192.168.50.141 8443 v1.31.2 crio true true} ...
	I1205 21:41:21.430896  357831 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-500648 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-500648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:41:21.430974  357831 ssh_runner.go:195] Run: crio config
	I1205 21:41:21.485189  357831 cni.go:84] Creating CNI manager for ""
	I1205 21:41:21.485211  357831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:21.485222  357831 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:41:21.485252  357831 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.141 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-500648 NodeName:no-preload-500648 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:41:21.485440  357831 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-500648"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.141"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.141"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:41:21.485525  357831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:41:21.497109  357831 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:41:21.497191  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:41:21.506887  357831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1205 21:41:21.524456  357831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:41:21.541166  357831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1205 21:41:21.560513  357831 ssh_runner.go:195] Run: grep 192.168.50.141	control-plane.minikube.internal$ /etc/hosts
	I1205 21:41:21.564597  357831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.141	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:21.576227  357831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:21.695424  357831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:21.712683  357831 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648 for IP: 192.168.50.141
	I1205 21:41:21.712711  357831 certs.go:194] generating shared ca certs ...
	I1205 21:41:21.712735  357831 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:21.712951  357831 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:41:21.713005  357831 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:41:21.713019  357831 certs.go:256] generating profile certs ...
	I1205 21:41:21.713143  357831 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/client.key
	I1205 21:41:21.713264  357831 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/apiserver.key.832a65b0
	I1205 21:41:21.713335  357831 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/proxy-client.key
	I1205 21:41:21.713643  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:41:21.713708  357831 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:41:21.713729  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:41:21.713774  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:41:21.713820  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:41:21.713856  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:41:21.713961  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:21.714852  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:41:21.770708  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:41:21.813676  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:41:21.869550  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:41:21.898056  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 21:41:21.924076  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 21:41:21.950399  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:41:21.976765  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 21:41:22.003346  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:41:22.032363  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:41:22.071805  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:41:22.096470  357831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:41:22.113380  357831 ssh_runner.go:195] Run: openssl version
	I1205 21:41:22.119084  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:41:22.129657  357831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:41:22.134070  357831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:41:22.134139  357831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:41:22.139838  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:41:22.150575  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:41:22.161366  357831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:41:22.165685  357831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:41:22.165753  357831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:41:22.171788  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:41:22.182582  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:41:22.193460  357831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:22.197852  357831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:22.197934  357831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:22.203616  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
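
Note on the step above: each host CA certificate copied into /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject-hash name (51391683.0, 3ec20f2e.0, b5213941.0), which is how OpenSSL locates trusted CAs. The following is only an illustrative Go sketch of that hash-and-symlink step, shelling out to openssl; it is not minikube's implementation, and the certificate path is taken from the log.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash computes the OpenSSL subject hash of certPath and creates
	// /etc/ssl/certs/<hash>.0 pointing at it, mirroring the symlink step in the log.
	func linkBySubjectHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		// Equivalent of "ln -fs": drop any stale link before creating the new one.
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		// Path taken from the log above; running this requires root.
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
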
	I1205 21:41:22.215612  357831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:41:22.220715  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:41:22.226952  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:41:22.233017  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:41:22.239118  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:41:22.245106  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:41:22.251085  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
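
Each "openssl x509 ... -checkend 86400" call above asserts that the certificate will still be valid 24 hours from now. A minimal Go equivalent of that check (a sketch, not the code minikube runs) parses the PEM and compares NotAfter against now plus 24 hours.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the first certificate in the PEM file at path is
	// still valid after the given duration, like `openssl x509 -checkend`.
	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("valid for the next 24h:", ok)
	}
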
	I1205 21:41:22.257047  357831 kubeadm.go:392] StartCluster: {Name:no-preload-500648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-500648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:41:22.257152  357831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:41:22.257201  357831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:22.294003  357831 cri.go:89] found id: ""
	I1205 21:41:22.294119  357831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:41:22.304604  357831 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:41:22.304627  357831 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:41:22.304690  357831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:41:22.314398  357831 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:41:22.315469  357831 kubeconfig.go:125] found "no-preload-500648" server: "https://192.168.50.141:8443"
	I1205 21:41:22.317845  357831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:41:22.327468  357831 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.141
	I1205 21:41:22.327516  357831 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:41:22.327546  357831 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:41:22.327623  357831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:22.360852  357831 cri.go:89] found id: ""
	I1205 21:41:22.360955  357831 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:41:22.378555  357831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:41:22.388502  357831 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:41:22.388526  357831 kubeadm.go:157] found existing configuration files:
	
	I1205 21:41:22.388614  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:41:22.397598  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:41:22.397664  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:41:22.407664  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:41:22.417114  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:41:22.417192  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:41:22.427221  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:41:22.436656  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:41:22.436731  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:41:22.446571  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:41:22.456048  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:41:22.456120  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
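
The grep/rm pairs above implement a stale-kubeconfig sweep: each file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443 and is otherwise deleted so kubeadm can regenerate it. A rough sketch of that logic in Go (file list copied from the log; this is not the actual kubeadm.go code).

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// removeIfStale deletes path unless it already references endpoint.
	// Missing files are treated the same as stale ones, matching the log,
	// where grep exits non-zero because the file does not exist.
	func removeIfStale(path, endpoint string) error {
		data, err := os.ReadFile(path)
		if err == nil && bytes.Contains(data, []byte(endpoint)) {
			return nil // up to date, keep it
		}
		return os.RemoveAll(path)
	}

	func main() {
		endpoint := "https://control-plane.minikube.internal:8443"
		for _, f := range []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		} {
			if err := removeIfStale(f, endpoint); err != nil {
				fmt.Fprintln(os.Stderr, err)
			}
		}
	}
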
	I1205 21:41:22.466146  357831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:41:22.476563  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:22.582506  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:25.151918  358357 start.go:364] duration metric: took 3m9.46879842s to acquireMachinesLock for "old-k8s-version-601806"
	I1205 21:41:25.151996  358357 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:41:25.152009  358357 fix.go:54] fixHost starting: 
	I1205 21:41:25.152489  358357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:25.152557  358357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:25.172080  358357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36071
	I1205 21:41:25.172722  358357 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:25.173396  358357 main.go:141] libmachine: Using API Version  1
	I1205 21:41:25.173426  358357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:25.173791  358357 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:25.174049  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:25.174226  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetState
	I1205 21:41:25.176109  358357 fix.go:112] recreateIfNeeded on old-k8s-version-601806: state=Stopped err=<nil>
	I1205 21:41:25.176156  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	W1205 21:41:25.176374  358357 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:41:25.178317  358357 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-601806" ...
	I1205 21:41:23.803088  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.803582  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has current primary IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.803605  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Found IP for machine: 192.168.39.106
	I1205 21:41:23.803619  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Reserving static IP address...
	I1205 21:41:23.804049  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-751353", mac: "52:54:00:9a:bc:70", ip: "192.168.39.106"} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.804083  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Reserved static IP address: 192.168.39.106
	I1205 21:41:23.804103  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | skip adding static IP to network mk-default-k8s-diff-port-751353 - found existing host DHCP lease matching {name: "default-k8s-diff-port-751353", mac: "52:54:00:9a:bc:70", ip: "192.168.39.106"}
	I1205 21:41:23.804129  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Getting to WaitForSSH function...
	I1205 21:41:23.804158  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for SSH to be available...
	I1205 21:41:23.806941  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.807341  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.807372  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.807500  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Using SSH client type: external
	I1205 21:41:23.807527  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa (-rw-------)
	I1205 21:41:23.807597  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:41:23.807626  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | About to run SSH command:
	I1205 21:41:23.807645  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | exit 0
	I1205 21:41:23.938988  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | SSH cmd err, output: <nil>: 
	I1205 21:41:23.939382  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetConfigRaw
	I1205 21:41:23.940370  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:23.943944  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.944399  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.944433  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.944788  357912 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/config.json ...
	I1205 21:41:23.945040  357912 machine.go:93] provisionDockerMachine start ...
	I1205 21:41:23.945065  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:23.945331  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:23.948166  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.948598  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.948633  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.948777  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:23.948980  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:23.949138  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:23.949265  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:23.949425  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:23.949655  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:23.949669  357912 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:41:24.062400  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:41:24.062440  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetMachineName
	I1205 21:41:24.062712  357912 buildroot.go:166] provisioning hostname "default-k8s-diff-port-751353"
	I1205 21:41:24.062742  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetMachineName
	I1205 21:41:24.062947  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.065557  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.066077  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.066109  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.066235  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.066415  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.066571  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.066751  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.066932  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:24.067122  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:24.067134  357912 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-751353 && echo "default-k8s-diff-port-751353" | sudo tee /etc/hostname
	I1205 21:41:24.190609  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-751353
	
	I1205 21:41:24.190662  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.193538  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.193946  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.193985  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.194231  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.194443  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.194660  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.194909  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.195186  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:24.195396  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:24.195417  357912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-751353' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-751353/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-751353' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:41:24.310725  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
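
The SSH command above guarantees that /etc/hosts maps 127.0.1.1 to the machine's new hostname, rewriting an existing 127.0.1.1 line or appending one. A simplified Go version of the same idea (the hostname is hard-coded here for illustration, and the original additionally skips the edit when the hostname is already present).

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry rewrites an existing "127.0.1.1 ..." line to point at name,
	// or appends one if no such line exists, mirroring the sed/tee logic in the log.
	func ensureHostsEntry(path, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		replaced := false
		for i, l := range lines {
			if strings.HasPrefix(l, "127.0.1.1") {
				lines[i] = "127.0.1.1 " + name
				replaced = true
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+name)
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "default-k8s-diff-port-751353"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
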
	I1205 21:41:24.310770  357912 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:41:24.310812  357912 buildroot.go:174] setting up certificates
	I1205 21:41:24.310829  357912 provision.go:84] configureAuth start
	I1205 21:41:24.310839  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetMachineName
	I1205 21:41:24.311138  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:24.314161  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.314528  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.314552  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.314722  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.316953  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.317283  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.317324  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.317483  357912 provision.go:143] copyHostCerts
	I1205 21:41:24.317548  357912 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:41:24.317571  357912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:41:24.317629  357912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:41:24.317723  357912 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:41:24.317732  357912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:41:24.317753  357912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:41:24.317872  357912 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:41:24.317883  357912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:41:24.317933  357912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:41:24.318001  357912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-751353 san=[127.0.0.1 192.168.39.106 default-k8s-diff-port-751353 localhost minikube]
	I1205 21:41:24.483065  357912 provision.go:177] copyRemoteCerts
	I1205 21:41:24.483137  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:41:24.483175  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.486663  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.487074  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.487112  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.487277  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.487508  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.487726  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.487899  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:24.572469  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:41:24.597375  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1205 21:41:24.622122  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:41:24.649143  357912 provision.go:87] duration metric: took 338.295707ms to configureAuth
	I1205 21:41:24.649188  357912 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:41:24.649464  357912 config.go:182] Loaded profile config "default-k8s-diff-port-751353": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:41:24.649609  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.652646  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.653051  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.653101  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.653259  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.653492  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.653689  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.653841  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.654054  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:24.654213  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:24.654235  357912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:41:24.893672  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:41:24.893703  357912 machine.go:96] duration metric: took 948.646561ms to provisionDockerMachine
	I1205 21:41:24.893719  357912 start.go:293] postStartSetup for "default-k8s-diff-port-751353" (driver="kvm2")
	I1205 21:41:24.893733  357912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:41:24.893755  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:24.894145  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:41:24.894185  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.897565  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.897988  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.898022  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.898262  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.898579  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.898840  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.899066  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:24.986299  357912 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:41:24.991211  357912 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:41:24.991251  357912 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:41:24.991341  357912 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:41:24.991456  357912 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:41:24.991601  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:41:25.002264  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:25.031129  357912 start.go:296] duration metric: took 137.388294ms for postStartSetup
	I1205 21:41:25.031184  357912 fix.go:56] duration metric: took 19.859807882s for fixHost
	I1205 21:41:25.031214  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:25.034339  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.034678  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.034715  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.035027  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:25.035309  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.035501  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.035655  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:25.035858  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:25.036066  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:25.036081  357912 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:41:25.151697  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434885.125327326
	
	I1205 21:41:25.151729  357912 fix.go:216] guest clock: 1733434885.125327326
	I1205 21:41:25.151741  357912 fix.go:229] Guest: 2024-12-05 21:41:25.125327326 +0000 UTC Remote: 2024-12-05 21:41:25.03119011 +0000 UTC m=+264.754619927 (delta=94.137216ms)
	I1205 21:41:25.151796  357912 fix.go:200] guest clock delta is within tolerance: 94.137216ms
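
The fix.go lines above read the guest clock with "date +%s.%N" and accept the roughly 94 ms offset from the host as within tolerance. A small sketch of that comparison follows; the one-second tolerance is an assumption for illustration, not the value minikube uses.

	package main

	import (
		"fmt"
		"time"
	)

	// withinTolerance reports whether the absolute difference between the guest
	// and host clocks is at most tol, as in the "guest clock delta" check above.
	func withinTolerance(guest, host time.Time, tol time.Duration) bool {
		delta := guest.Sub(host)
		if delta < 0 {
			delta = -delta
		}
		return delta <= tol
	}

	func main() {
		guest := time.Unix(1733434885, 125327326) // value parsed from `date +%s.%N` in the log
		host := time.Now()
		fmt.Println("within tolerance:", withinTolerance(guest, host, time.Second))
	}
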
	I1205 21:41:25.151807  357912 start.go:83] releasing machines lock for "default-k8s-diff-port-751353", held for 19.980496597s
	I1205 21:41:25.151845  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.152105  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:25.155285  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.155698  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.155735  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.155871  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.156424  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.156613  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.156747  357912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:41:25.156796  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:25.156844  357912 ssh_runner.go:195] Run: cat /version.json
	I1205 21:41:25.156876  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:25.159945  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160382  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160439  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.160464  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160692  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.160722  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160728  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:25.160943  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:25.160957  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.161100  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.161218  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:25.161341  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:25.161370  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:25.161473  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:25.244449  357912 ssh_runner.go:195] Run: systemctl --version
	I1205 21:41:25.271151  357912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:41:25.179884  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .Start
	I1205 21:41:25.180144  358357 main.go:141] libmachine: (old-k8s-version-601806) Ensuring networks are active...
	I1205 21:41:25.181095  358357 main.go:141] libmachine: (old-k8s-version-601806) Ensuring network default is active
	I1205 21:41:25.181522  358357 main.go:141] libmachine: (old-k8s-version-601806) Ensuring network mk-old-k8s-version-601806 is active
	I1205 21:41:25.181972  358357 main.go:141] libmachine: (old-k8s-version-601806) Getting domain xml...
	I1205 21:41:25.182848  358357 main.go:141] libmachine: (old-k8s-version-601806) Creating domain...
	I1205 21:41:25.428417  357912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:41:25.436849  357912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:41:25.436929  357912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:41:25.457952  357912 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
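
The find/mv command above disables any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI chosen for the cluster stays active. A rough Go equivalent of that rename pass (a sketch, not the cni.go implementation).

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNIs renames bridge/podman CNI config files so the container
	// runtime ignores them, like the `find ... -exec mv {} {}.mk_disabled` above.
	func disableBridgeCNIs(dir string) error {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return err
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return err
				}
				fmt.Println("disabled", src)
			}
		}
		return nil
	}

	func main() {
		if err := disableBridgeCNIs("/etc/cni/net.d"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
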
	I1205 21:41:25.457989  357912 start.go:495] detecting cgroup driver to use...
	I1205 21:41:25.458073  357912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:41:25.478406  357912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:41:25.497547  357912 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:41:25.497636  357912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:41:25.516564  357912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:41:25.535753  357912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:41:25.692182  357912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:41:25.880739  357912 docker.go:233] disabling docker service ...
	I1205 21:41:25.880812  357912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:41:25.896490  357912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:41:25.911107  357912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:41:26.048384  357912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:41:26.186026  357912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:41:26.200922  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:41:26.221768  357912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:41:26.221848  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.232550  357912 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:41:26.232665  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.243173  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.254233  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.264888  357912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:41:26.275876  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.286642  357912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.311188  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
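
The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned to registry.k8s.io/pause:3.10, cgroup_manager is set to cgroupfs, and net.ipv4.ip_unprivileged_port_start=0 is inserted into default_sysctls. A simplified Go sketch covering the first two rewrites (whole-line replacement rather than the exact sed expressions).

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// rewriteCrioConf pins the pause image and cgroup manager in a CRI-O drop-in,
	// replacing whole lines much like the `sed -i 's|^.*pause_image = ...|'` calls.
	func rewriteCrioConf(path string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(string(data), "\n")
		for i, l := range lines {
			switch {
			case strings.Contains(l, "pause_image ="):
				lines[i] = `pause_image = "registry.k8s.io/pause:3.10"`
			case strings.Contains(l, "cgroup_manager ="):
				lines[i] = `cgroup_manager = "cgroupfs"`
			}
		}
		return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
	}

	func main() {
		if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
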
	I1205 21:41:26.322696  357912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:41:26.332006  357912 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:41:26.332075  357912 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:41:26.345881  357912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
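
When the bridge-netfilter sysctl cannot be read (the module is not loaded yet), the log falls back to loading br_netfilter and then enables IPv4 forwarding. A minimal sketch of that fallback, assuming modprobe is on PATH and the process runs as root.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// If the bridge-netfilter sysctl file is missing, the kernel module is not loaded.
		if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
			if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
				fmt.Fprintf(os.Stderr, "modprobe br_netfilter: %v\n%s", err, out)
				os.Exit(1)
			}
		}
		// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1"), 0644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("bridge netfilter available and ip_forward enabled")
	}
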
	I1205 21:41:26.362014  357912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:26.487972  357912 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:41:26.584162  357912 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:41:26.584275  357912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:41:26.589290  357912 start.go:563] Will wait 60s for crictl version
	I1205 21:41:26.589379  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:41:26.593337  357912 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:41:26.629326  357912 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:41:26.629455  357912 ssh_runner.go:195] Run: crio --version
	I1205 21:41:26.656684  357912 ssh_runner.go:195] Run: crio --version
	I1205 21:41:26.685571  357912 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 21:41:23.536422  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:23.749946  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:23.804210  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:23.887538  357831 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:41:23.887671  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:24.387809  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:24.887821  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:24.905947  357831 api_server.go:72] duration metric: took 1.018402152s to wait for apiserver process to appear ...
	I1205 21:41:24.905979  357831 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:41:24.906008  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:24.906658  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": dial tcp 192.168.50.141:8443: connect: connection refused
	I1205 21:41:25.406416  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
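
After kubeadm is restarted, the log waits for the apiserver /healthz endpoint, tolerating "connection refused" until the process comes up. An illustrative polling loop in Go; the timeout and interval values are assumptions, and certificate verification is skipped purely for the sketch.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it answers 200
	// or the deadline passes, like the retry loop in the log above.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The apiserver cert is not trusted by this host; skipping verification is sketch-only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.50.141:8443/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("apiserver healthy")
	}
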
	I1205 21:41:26.687438  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:26.690614  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:26.691032  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:26.691070  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:26.691314  357912 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 21:41:26.695524  357912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:26.708289  357912 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-751353 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-751353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:41:26.708409  357912 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:41:26.708474  357912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:26.757258  357912 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 21:41:26.757363  357912 ssh_runner.go:195] Run: which lz4
	I1205 21:41:26.762809  357912 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:41:26.767369  357912 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:41:26.767411  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 21:41:28.161289  357912 crio.go:462] duration metric: took 1.398584393s to copy over tarball
	I1205 21:41:28.161397  357912 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
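
Because crictl reported no preloaded images, the preload tarball is copied into the guest and unpacked over /var with lz4-compressed tar. A compact sketch that wraps the same tar invocation shown in the log.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Same flags as the log: preserve security xattrs and decompress with lz4 into /var.
		cmd := exec.Command("sudo", "tar",
			"--xattrs", "--xattrs-include", "security.capability",
			"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintln(os.Stderr, "extracting preload:", err)
			os.Exit(1)
		}
	}
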
	I1205 21:41:26.542343  358357 main.go:141] libmachine: (old-k8s-version-601806) Waiting to get IP...
	I1205 21:41:26.543246  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:26.543692  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:26.543765  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:26.543663  359172 retry.go:31] will retry after 193.087452ms: waiting for machine to come up
	I1205 21:41:26.738243  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:26.738682  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:26.738713  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:26.738634  359172 retry.go:31] will retry after 347.304831ms: waiting for machine to come up
	I1205 21:41:27.088372  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:27.088982  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:27.089018  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:27.088880  359172 retry.go:31] will retry after 416.785806ms: waiting for machine to come up
	I1205 21:41:27.507765  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:27.508291  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:27.508320  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:27.508250  359172 retry.go:31] will retry after 407.585006ms: waiting for machine to come up
	I1205 21:41:27.918225  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:27.918900  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:27.918930  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:27.918844  359172 retry.go:31] will retry after 612.014901ms: waiting for machine to come up
	I1205 21:41:28.532179  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:28.532625  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:28.532658  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:28.532561  359172 retry.go:31] will retry after 784.813224ms: waiting for machine to come up
	I1205 21:41:29.318697  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:29.319199  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:29.319234  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:29.319136  359172 retry.go:31] will retry after 827.384433ms: waiting for machine to come up
	I1205 21:41:30.148284  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:30.148684  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:30.148711  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:30.148642  359172 retry.go:31] will retry after 1.314535235s: waiting for machine to come up
	I1205 21:41:30.406823  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:30.406896  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:30.321824  357912 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16037347s)
	I1205 21:41:30.321868  357912 crio.go:469] duration metric: took 2.160535841s to extract the tarball
	I1205 21:41:30.321879  357912 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 21:41:30.358990  357912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:30.401957  357912 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 21:41:30.401988  357912 cache_images.go:84] Images are preloaded, skipping loading
	I1205 21:41:30.402000  357912 kubeadm.go:934] updating node { 192.168.39.106 8444 v1.31.2 crio true true} ...
	I1205 21:41:30.402143  357912 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-751353 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-751353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
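For reference, the kubelet unit fragment above is what ends up in the 328-byte systemd drop-in written a few lines further down to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A minimal Go sketch of rendering and writing such a drop-in, with the Kubernetes version, hostname override and node IP taken from this log and everything else assumed; this is an illustration, not minikube's actual template code:

package main

import (
    "fmt"
    "os"
)

// Renders a kubelet systemd drop-in shaped like the fragment above and writes
// it to the path used later in this log. Needs root; values are placeholders
// taken from this log.
func main() {
    dropIn := fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%[1]s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%[2]s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%[3]s

[Install]
`, "v1.31.2", "default-k8s-diff-port-751353", "192.168.39.106")
    if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
        fmt.Println("write failed:", err)
    }
}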
	I1205 21:41:30.402242  357912 ssh_runner.go:195] Run: crio config
	I1205 21:41:30.452788  357912 cni.go:84] Creating CNI manager for ""
	I1205 21:41:30.452819  357912 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:30.452832  357912 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:41:30.452864  357912 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.106 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-751353 NodeName:default-k8s-diff-port-751353 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:41:30.453016  357912 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.106
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-751353"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.106"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.106"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
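The kubeadm.yaml generated above is consumed phase by phase during the restart (certs, kubeconfig, kubelet-start, control-plane, etcd), as the kubeadm invocations further down in this log show. A rough Go sketch of driving those same phases with os/exec, assuming the kubeadm binary and config paths from this log and root privileges on the node:

package main

import (
    "fmt"
    "os/exec"
)

// Runs the kubeadm init phases that appear later in this log, each against
// the generated /var/tmp/minikube/kubeadm.yaml. Error handling is
// deliberately minimal; this is a sketch, not minikube's implementation.
func main() {
    phases := [][]string{
        {"init", "phase", "certs", "all"},
        {"init", "phase", "kubeconfig", "all"},
        {"init", "phase", "kubelet-start"},
        {"init", "phase", "control-plane", "all"},
        {"init", "phase", "etcd", "local"},
    }
    for _, phase := range phases {
        args := append(phase, "--config", "/var/tmp/minikube/kubeadm.yaml")
        out, err := exec.Command("/var/lib/minikube/binaries/v1.31.2/kubeadm", args...).CombinedOutput()
        fmt.Printf("kubeadm %v:\n%s", phase, out)
        if err != nil {
            fmt.Println("phase failed:", err)
            return
        }
    }
}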
	I1205 21:41:30.453081  357912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:41:30.463027  357912 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:41:30.463098  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:41:30.472345  357912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 21:41:30.489050  357912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:41:30.505872  357912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1205 21:41:30.523157  357912 ssh_runner.go:195] Run: grep 192.168.39.106	control-plane.minikube.internal$ /etc/hosts
	I1205 21:41:30.527012  357912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:30.538965  357912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:30.668866  357912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:30.686150  357912 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353 for IP: 192.168.39.106
	I1205 21:41:30.686187  357912 certs.go:194] generating shared ca certs ...
	I1205 21:41:30.686218  357912 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:30.686416  357912 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:41:30.686483  357912 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:41:30.686499  357912 certs.go:256] generating profile certs ...
	I1205 21:41:30.686629  357912 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/client.key
	I1205 21:41:30.686701  357912 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/apiserver.key.ec661d8c
	I1205 21:41:30.686738  357912 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/proxy-client.key
	I1205 21:41:30.686861  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:41:30.686890  357912 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:41:30.686898  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:41:30.686921  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:41:30.686942  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:41:30.686979  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:41:30.687017  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:30.687858  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:41:30.732722  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:41:30.762557  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:41:30.797976  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:41:30.825854  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1205 21:41:30.863220  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 21:41:30.887018  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:41:30.913503  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 21:41:30.940557  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:41:30.965468  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:41:30.991147  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:41:31.016782  357912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:41:31.036286  357912 ssh_runner.go:195] Run: openssl version
	I1205 21:41:31.042388  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:41:31.053011  357912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:31.057796  357912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:31.057880  357912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:31.064075  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:41:31.076633  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:41:31.089138  357912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:41:31.093653  357912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:41:31.093733  357912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:41:31.099403  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:41:31.111902  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:41:31.122743  357912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:41:31.127551  357912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:41:31.127666  357912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:41:31.133373  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:41:31.143934  357912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:41:31.148739  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:41:31.154995  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:41:31.161288  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:41:31.167555  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:41:31.173476  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:41:31.179371  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 21:41:31.185238  357912 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-751353 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-751353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:41:31.185381  357912 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:41:31.185440  357912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:31.221359  357912 cri.go:89] found id: ""
	I1205 21:41:31.221448  357912 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:41:31.231975  357912 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:41:31.231997  357912 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:41:31.232043  357912 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:41:31.241662  357912 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:41:31.242685  357912 kubeconfig.go:125] found "default-k8s-diff-port-751353" server: "https://192.168.39.106:8444"
	I1205 21:41:31.244889  357912 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:41:31.254747  357912 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.106
	I1205 21:41:31.254798  357912 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:41:31.254815  357912 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:41:31.254884  357912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:31.291980  357912 cri.go:89] found id: ""
	I1205 21:41:31.292075  357912 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:41:31.312332  357912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:41:31.322240  357912 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:41:31.322267  357912 kubeadm.go:157] found existing configuration files:
	
	I1205 21:41:31.322323  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1205 21:41:31.331374  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:41:31.331462  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:41:31.340916  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1205 21:41:31.350121  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:41:31.350209  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:41:31.361302  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1205 21:41:31.372251  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:41:31.372316  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:41:31.383250  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1205 21:41:31.393771  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:41:31.393830  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:41:31.404949  357912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:41:31.416349  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:31.518522  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:32.687862  357912 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.169290848s)
	I1205 21:41:32.687902  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:32.918041  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:33.001916  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:33.088916  357912 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:41:33.089029  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:33.589452  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:34.089830  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:34.589399  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:34.606029  357912 api_server.go:72] duration metric: took 1.517086306s to wait for apiserver process to appear ...
	I1205 21:41:34.606071  357912 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:41:34.606100  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:31.465575  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:31.466129  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:31.466149  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:31.466051  359172 retry.go:31] will retry after 1.375463745s: waiting for machine to come up
	I1205 21:41:32.843149  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:32.843640  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:32.843672  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:32.843577  359172 retry.go:31] will retry after 1.414652744s: waiting for machine to come up
	I1205 21:41:34.259549  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:34.260076  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:34.260106  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:34.260026  359172 retry.go:31] will retry after 2.845213342s: waiting for machine to come up
	I1205 21:41:35.408016  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:35.408069  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:37.262251  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:41:37.262290  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:41:37.262311  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:37.319344  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:41:37.319389  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:41:37.606930  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:37.611927  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:41:37.611962  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:41:38.106614  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:38.111641  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:41:38.111677  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:41:38.606218  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:38.613131  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 200:
	ok
	I1205 21:41:38.628002  357912 api_server.go:141] control plane version: v1.31.2
	I1205 21:41:38.628040  357912 api_server.go:131] duration metric: took 4.021961685s to wait for apiserver health ...
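The healthz probing above tolerates 403 responses (the anonymous probe is rejected while the RBAC bootstrap roles are still being created) and 500 responses (post-start hooks not yet finished), retrying roughly every half second until a plain 200/ok comes back. A minimal Go sketch of that kind of wait loop; the URL is the one from this log, and InsecureSkipVerify is a simplification, since a real check would present the cluster CA and client credentials:

package main

import (
    "crypto/tls"
    "fmt"
    "net/http"
    "time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 or
// the deadline expires; non-200 responses are treated as "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
    client := &http.Client{
        Timeout:   5 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := client.Get(url)
        if err == nil {
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                return nil
            }
            fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
    if err := waitForHealthz("https://192.168.39.106:8444/healthz", 4*time.Minute); err != nil {
        fmt.Println(err)
    }
}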
	I1205 21:41:38.628050  357912 cni.go:84] Creating CNI manager for ""
	I1205 21:41:38.628057  357912 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:38.630126  357912 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:41:38.631655  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:41:38.645320  357912 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 21:41:38.668869  357912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:41:38.680453  357912 system_pods.go:59] 8 kube-system pods found
	I1205 21:41:38.680493  357912 system_pods.go:61] "coredns-7c65d6cfc9-mll8z" [fcea0826-1093-43ce-87d0-26fb19447609] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 21:41:38.680501  357912 system_pods.go:61] "etcd-default-k8s-diff-port-751353" [dbade41d-2f53-45d5-aeb6-9b7df2565ef6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 21:41:38.680509  357912 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751353" [b94221d1-ab02-4406-9f34-156813ddfd4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 21:41:38.680516  357912 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751353" [70669ec2-cddf-4ec9-a3d6-ee7cab8cd75c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 21:41:38.680521  357912 system_pods.go:61] "kube-proxy-b4ws4" [d2620959-e3e4-4575-af26-243207a83495] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 21:41:38.680526  357912 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751353" [4a519b64-0ddd-425c-a8d6-de52e3508f80] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 21:41:38.680537  357912 system_pods.go:61] "metrics-server-6867b74b74-xb867" [6ac4cc31-ed56-44b9-9a83-76296436bc34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:41:38.680541  357912 system_pods.go:61] "storage-provisioner" [aabf9cc9-c416-4db2-97b0-23533dd76c28] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 21:41:38.680549  357912 system_pods.go:74] duration metric: took 11.655012ms to wait for pod list to return data ...
	I1205 21:41:38.680557  357912 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:41:38.685260  357912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:41:38.685290  357912 node_conditions.go:123] node cpu capacity is 2
	I1205 21:41:38.685302  357912 node_conditions.go:105] duration metric: took 4.740612ms to run NodePressure ...
	I1205 21:41:38.685335  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:38.997715  357912 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 21:41:39.003388  357912 kubeadm.go:739] kubelet initialised
	I1205 21:41:39.003422  357912 kubeadm.go:740] duration metric: took 5.675839ms waiting for restarted kubelet to initialise ...
	I1205 21:41:39.003435  357912 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:41:39.008779  357912 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.015438  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.015469  357912 pod_ready.go:82] duration metric: took 6.659336ms for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.015480  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.015487  357912 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.022944  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.022979  357912 pod_ready.go:82] duration metric: took 7.480121ms for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.022992  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.023000  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.030021  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.030060  357912 pod_ready.go:82] duration metric: took 7.051363ms for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.030077  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.030087  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.074051  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.074103  357912 pod_ready.go:82] duration metric: took 44.006019ms for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.074130  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.074142  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.472623  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-proxy-b4ws4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.472654  357912 pod_ready.go:82] duration metric: took 398.499259ms for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.472665  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-proxy-b4ws4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.472673  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.873821  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.873863  357912 pod_ready.go:82] duration metric: took 401.179066ms for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.873887  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.873914  357912 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:40.272289  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:40.272322  357912 pod_ready.go:82] duration metric: took 398.392874ms for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:40.272338  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:40.272349  357912 pod_ready.go:39] duration metric: took 1.268896186s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
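Each pod_ready.go check above amounts to reading a pod's Ready condition (and, here, the hosting node's Ready status) and retrying until it flips to True or the 4m0s budget runs out. A minimal client-go sketch of such a readiness wait; the kubeconfig path and pod name are taken from this log and stand in for whatever pod is being watched:

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// Polls a single pod until its Ready condition is True, roughly mirroring the
// per-pod waits logged above. Requires k8s.io/client-go in go.mod.
func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20053-293485/kubeconfig")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    deadline := time.Now().Add(4 * time.Minute)
    for time.Now().Before(deadline) {
        pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-7c65d6cfc9-mll8z", metav1.GetOptions{})
        if err == nil {
            for _, cond := range pod.Status.Conditions {
                if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                    fmt.Println("pod is Ready")
                    return
                }
            }
        }
        time.Sleep(2 * time.Second)
    }
    fmt.Println("timed out waiting for pod to be Ready")
}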
	I1205 21:41:40.272381  357912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:41:40.284524  357912 ops.go:34] apiserver oom_adj: -16
	I1205 21:41:40.284549  357912 kubeadm.go:597] duration metric: took 9.052545962s to restartPrimaryControlPlane
	I1205 21:41:40.284576  357912 kubeadm.go:394] duration metric: took 9.09933298s to StartCluster
	I1205 21:41:40.284597  357912 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:40.284680  357912 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:41:40.286372  357912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:40.286676  357912 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.106 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:41:40.286766  357912 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:41:40.286905  357912 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-751353"
	I1205 21:41:40.286928  357912 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-751353"
	I1205 21:41:40.286933  357912 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-751353"
	I1205 21:41:40.286985  357912 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-751353"
	I1205 21:41:40.286986  357912 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-751353"
	I1205 21:41:40.287022  357912 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-751353"
	W1205 21:41:40.286939  357912 addons.go:243] addon storage-provisioner should already be in state true
	W1205 21:41:40.287039  357912 addons.go:243] addon metrics-server should already be in state true
	I1205 21:41:40.287110  357912 host.go:66] Checking if "default-k8s-diff-port-751353" exists ...
	I1205 21:41:40.286937  357912 config.go:182] Loaded profile config "default-k8s-diff-port-751353": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:41:40.287215  357912 host.go:66] Checking if "default-k8s-diff-port-751353" exists ...
	I1205 21:41:40.287507  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.287571  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.287640  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.287577  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.287688  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.287824  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.288418  357912 out.go:177] * Verifying Kubernetes components...
	I1205 21:41:40.289707  357912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:40.304423  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45233
	I1205 21:41:40.304453  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I1205 21:41:40.304433  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38023
	I1205 21:41:40.304933  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.305518  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.305712  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.305741  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.306151  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.306169  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.306548  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.306829  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.307143  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.307153  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.307800  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.307824  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.308518  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.308565  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.308987  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.309564  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.309596  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.311352  357912 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-751353"
	W1205 21:41:40.311374  357912 addons.go:243] addon default-storageclass should already be in state true
	I1205 21:41:40.311408  357912 host.go:66] Checking if "default-k8s-diff-port-751353" exists ...
	I1205 21:41:40.311880  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.311929  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.325059  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36109
	I1205 21:41:40.325663  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.326356  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.326400  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.326752  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.326942  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.327767  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I1205 21:41:40.328173  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.328657  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.328678  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.328768  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:40.328984  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.329370  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.329409  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.329811  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I1205 21:41:40.330230  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.330631  357912 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 21:41:40.330708  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.330726  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.331052  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.331216  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.332202  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 21:41:40.332226  357912 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 21:41:40.332260  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:40.333642  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:40.335436  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.335614  357912 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:37.107579  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:37.108121  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:37.108153  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:37.108064  359172 retry.go:31] will retry after 2.969209087s: waiting for machine to come up
	I1205 21:41:40.079008  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:40.079546  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:40.079631  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:40.079495  359172 retry.go:31] will retry after 4.062877726s: waiting for machine to come up
	I1205 21:41:40.335902  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:40.335936  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.336055  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:40.336244  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:40.336387  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:40.336516  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:40.337155  357912 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:41:40.337173  357912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 21:41:40.337195  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:40.339861  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.340258  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:40.340291  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.340556  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:40.340737  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:40.340888  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:40.341009  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:40.353260  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42177
	I1205 21:41:40.353780  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.354465  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.354495  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.354914  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.355181  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.357128  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:40.357445  357912 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 21:41:40.357466  357912 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 21:41:40.357487  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:40.360926  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.361410  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:40.361436  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.361753  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:40.361968  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:40.362143  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:40.362304  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:40.489718  357912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:40.506486  357912 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-751353" to be "Ready" ...
	I1205 21:41:40.575280  357912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:41:40.594938  357912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 21:41:40.709917  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 21:41:40.709953  357912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 21:41:40.766042  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 21:41:40.766076  357912 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 21:41:40.841338  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:41:40.841371  357912 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 21:41:40.890122  357912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:41:41.864084  357912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.269106426s)
	I1205 21:41:41.864153  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864168  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864080  357912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.288748728s)
	I1205 21:41:41.864273  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864294  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864544  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.864563  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.864592  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.864614  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.864614  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Closing plugin on server side
	I1205 21:41:41.864623  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864641  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864682  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864714  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864909  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.864929  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.865021  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Closing plugin on server side
	I1205 21:41:41.865050  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.865073  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.873134  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.873158  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.873488  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.873517  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.896304  357912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.006129117s)
	I1205 21:41:41.896383  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.896401  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.896726  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.896749  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.896760  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.896770  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.897064  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.897084  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.897097  357912 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-751353"
	I1205 21:41:41.899809  357912 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1205 21:41:40.409151  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:40.409197  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:41.901166  357912 addons.go:510] duration metric: took 1.61441521s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1205 21:41:42.512064  357912 node_ready.go:53] node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:45.011050  357912 node_ready.go:53] node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:44.147162  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.147843  358357 main.go:141] libmachine: (old-k8s-version-601806) Found IP for machine: 192.168.61.123
	I1205 21:41:44.147874  358357 main.go:141] libmachine: (old-k8s-version-601806) Reserving static IP address...
	I1205 21:41:44.147892  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has current primary IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.148399  358357 main.go:141] libmachine: (old-k8s-version-601806) Reserved static IP address: 192.168.61.123
	I1205 21:41:44.148443  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "old-k8s-version-601806", mac: "52:54:00:11:1e:c8", ip: "192.168.61.123"} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.148458  358357 main.go:141] libmachine: (old-k8s-version-601806) Waiting for SSH to be available...
	I1205 21:41:44.148487  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | skip adding static IP to network mk-old-k8s-version-601806 - found existing host DHCP lease matching {name: "old-k8s-version-601806", mac: "52:54:00:11:1e:c8", ip: "192.168.61.123"}
	I1205 21:41:44.148519  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | Getting to WaitForSSH function...
	I1205 21:41:44.151017  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.151372  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.151406  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.151544  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | Using SSH client type: external
	I1205 21:41:44.151575  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa (-rw-------)
	I1205 21:41:44.151611  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:41:44.151629  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | About to run SSH command:
	I1205 21:41:44.151656  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | exit 0
	I1205 21:41:44.282019  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | SSH cmd err, output: <nil>: 
	I1205 21:41:44.282419  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetConfigRaw
	I1205 21:41:44.283146  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:44.285924  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.286335  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.286365  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.286633  358357 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/config.json ...
	I1205 21:41:44.286844  358357 machine.go:93] provisionDockerMachine start ...
	I1205 21:41:44.286865  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:44.287119  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.289692  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.290060  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.290090  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.290192  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.290392  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.290567  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.290726  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.290904  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:44.291168  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:44.291183  358357 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:41:44.410444  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:41:44.410483  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:41:44.410769  358357 buildroot.go:166] provisioning hostname "old-k8s-version-601806"
	I1205 21:41:44.410800  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:41:44.410975  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.414019  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.414402  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.414437  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.414618  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.414822  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.415001  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.415139  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.415384  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:44.415620  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:44.415639  358357 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-601806 && echo "old-k8s-version-601806" | sudo tee /etc/hostname
	I1205 21:41:44.544783  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-601806
	
	I1205 21:41:44.544820  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.547980  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.548253  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.548284  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.548548  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.548806  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.549015  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.549199  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.549363  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:44.549596  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:44.549625  358357 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-601806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-601806/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-601806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:41:44.675051  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:41:44.675089  358357 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:41:44.675133  358357 buildroot.go:174] setting up certificates
	I1205 21:41:44.675147  358357 provision.go:84] configureAuth start
	I1205 21:41:44.675161  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:41:44.675484  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:44.678325  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.678651  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.678670  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.678845  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.681024  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.681380  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.681419  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.681555  358357 provision.go:143] copyHostCerts
	I1205 21:41:44.681614  358357 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:41:44.681635  358357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:41:44.681692  358357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:41:44.681807  358357 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:41:44.681818  358357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:41:44.681840  358357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:41:44.681895  358357 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:41:44.681923  358357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:41:44.681950  358357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:41:44.682008  358357 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-601806 san=[127.0.0.1 192.168.61.123 localhost minikube old-k8s-version-601806]
	I1205 21:41:44.920345  358357 provision.go:177] copyRemoteCerts
	I1205 21:41:44.920412  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:41:44.920445  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.923237  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.923573  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.923617  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.923858  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.924082  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.924266  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.924408  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.013123  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:41:45.037220  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 21:41:45.061460  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:41:45.086412  358357 provision.go:87] duration metric: took 411.247612ms to configureAuth
	I1205 21:41:45.086449  358357 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:41:45.086670  358357 config.go:182] Loaded profile config "old-k8s-version-601806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 21:41:45.086772  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.089593  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.090011  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.090044  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.090279  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.090515  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.090695  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.090845  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.091119  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:45.091338  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:45.091355  358357 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:41:45.320779  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:41:45.320809  358357 machine.go:96] duration metric: took 1.033951427s to provisionDockerMachine
	I1205 21:41:45.320822  358357 start.go:293] postStartSetup for "old-k8s-version-601806" (driver="kvm2")
	I1205 21:41:45.320833  358357 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:41:45.320864  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.321259  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:41:45.321295  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.324521  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.324898  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.324926  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.325061  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.325278  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.325449  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.325608  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.413576  358357 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:41:45.418099  358357 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:41:45.418129  358357 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:41:45.418192  358357 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:41:45.418313  358357 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:41:45.418436  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:41:45.428537  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:45.453505  358357 start.go:296] duration metric: took 132.665138ms for postStartSetup
	I1205 21:41:45.453578  358357 fix.go:56] duration metric: took 20.301569608s for fixHost
	I1205 21:41:45.453610  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.456671  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.457095  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.457119  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.457317  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.457534  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.457723  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.457851  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.458100  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:45.458291  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:45.458303  358357 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:41:45.574874  357296 start.go:364] duration metric: took 55.701965725s to acquireMachinesLock for "embed-certs-425614"
	I1205 21:41:45.574934  357296 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:41:45.574944  357296 fix.go:54] fixHost starting: 
	I1205 21:41:45.575470  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:45.575532  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:45.593184  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39281
	I1205 21:41:45.593628  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:45.594222  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:41:45.594249  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:45.594599  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:45.594797  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:41:45.594945  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:41:45.596532  357296 fix.go:112] recreateIfNeeded on embed-certs-425614: state=Stopped err=<nil>
	I1205 21:41:45.596560  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	W1205 21:41:45.596698  357296 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:41:45.598630  357296 out.go:177] * Restarting existing kvm2 VM for "embed-certs-425614" ...
	I1205 21:41:45.574677  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434905.556875765
	
	I1205 21:41:45.574707  358357 fix.go:216] guest clock: 1733434905.556875765
	I1205 21:41:45.574720  358357 fix.go:229] Guest: 2024-12-05 21:41:45.556875765 +0000 UTC Remote: 2024-12-05 21:41:45.453584649 +0000 UTC m=+209.931227837 (delta=103.291116ms)
	I1205 21:41:45.574744  358357 fix.go:200] guest clock delta is within tolerance: 103.291116ms
	I1205 21:41:45.574749  358357 start.go:83] releasing machines lock for "old-k8s-version-601806", held for 20.422787607s
	I1205 21:41:45.574777  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.575102  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:45.578097  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.578534  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.578565  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.578786  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.579457  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.579662  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.579786  358357 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:41:45.579845  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.579919  358357 ssh_runner.go:195] Run: cat /version.json
	I1205 21:41:45.579944  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.582811  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.582951  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.583117  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.583153  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.583388  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.583409  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.583436  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.583601  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.583609  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.583801  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.583868  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.583990  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.584026  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.584185  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.667101  358357 ssh_runner.go:195] Run: systemctl --version
	I1205 21:41:45.694059  358357 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:41:45.843409  358357 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:41:45.849628  358357 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:41:45.849714  358357 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:41:45.867490  358357 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:41:45.867526  358357 start.go:495] detecting cgroup driver to use...
	I1205 21:41:45.867613  358357 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:41:45.887817  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:41:45.902760  358357 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:41:45.902837  358357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:41:45.921492  358357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:41:45.938236  358357 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:41:46.094034  358357 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:41:46.313078  358357 docker.go:233] disabling docker service ...
	I1205 21:41:46.313159  358357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:41:46.330094  358357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:41:46.348887  358357 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:41:46.539033  358357 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:41:46.664752  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:41:46.681892  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:41:46.703802  358357 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 21:41:46.703907  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.716808  358357 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:41:46.716869  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.728088  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.739606  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.750998  358357 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:41:46.763097  358357 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:41:46.773657  358357 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:41:46.773720  358357 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:41:46.787789  358357 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:41:46.799018  358357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:46.920247  358357 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:41:47.024151  358357 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:41:47.024236  358357 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:41:47.029240  358357 start.go:563] Will wait 60s for crictl version
	I1205 21:41:47.029326  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:47.033665  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:41:47.072480  358357 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:41:47.072588  358357 ssh_runner.go:195] Run: crio --version
	I1205 21:41:47.110829  358357 ssh_runner.go:195] Run: crio --version
	I1205 21:41:47.141698  358357 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1205 21:41:45.600135  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Start
	I1205 21:41:45.600390  357296 main.go:141] libmachine: (embed-certs-425614) Ensuring networks are active...
	I1205 21:41:45.601186  357296 main.go:141] libmachine: (embed-certs-425614) Ensuring network default is active
	I1205 21:41:45.601636  357296 main.go:141] libmachine: (embed-certs-425614) Ensuring network mk-embed-certs-425614 is active
	I1205 21:41:45.602188  357296 main.go:141] libmachine: (embed-certs-425614) Getting domain xml...
	I1205 21:41:45.603057  357296 main.go:141] libmachine: (embed-certs-425614) Creating domain...
	I1205 21:41:47.045240  357296 main.go:141] libmachine: (embed-certs-425614) Waiting to get IP...
	I1205 21:41:47.046477  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.047047  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.047150  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.047040  359359 retry.go:31] will retry after 219.743522ms: waiting for machine to come up
	I1205 21:41:47.268762  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.269407  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.269442  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.269336  359359 retry.go:31] will retry after 242.318322ms: waiting for machine to come up
	I1205 21:41:45.410351  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:45.410420  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:45.616395  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": read tcp 192.168.50.1:48034->192.168.50.141:8443: read: connection reset by peer
	I1205 21:41:45.906800  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:45.907594  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": dial tcp 192.168.50.141:8443: connect: connection refused
	I1205 21:41:46.407096  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:47.011671  357912 node_ready.go:53] node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:48.011005  357912 node_ready.go:49] node "default-k8s-diff-port-751353" has status "Ready":"True"
	I1205 21:41:48.011040  357912 node_ready.go:38] duration metric: took 7.504506203s for node "default-k8s-diff-port-751353" to be "Ready" ...
	I1205 21:41:48.011060  357912 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:41:48.021950  357912 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:48.038141  357912 pod_ready.go:93] pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:48.038176  357912 pod_ready.go:82] duration metric: took 16.187757ms for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:48.038191  357912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:50.046001  357912 pod_ready.go:103] pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"False"
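Note: the node_ready/pod_ready lines above poll the API server until the node and each system-critical pod report the Ready condition as True. A minimal client-go sketch of that pod check follows; the kubeconfig path and pod name are examples only, not what the test harness actually wires in.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podIsReady reports whether the named pod has the Ready condition set to
// True, which is what the pod_ready.go lines above keep polling for.
func podIsReady(cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	ok, err := podIsReady(cs, "kube-system", "etcd-default-k8s-diff-port-751353")
	fmt.Println(ok, err)
}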
	I1205 21:41:47.143015  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:47.146059  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:47.146503  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:47.146536  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:47.146811  358357 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1205 21:41:47.151654  358357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:47.164839  358357 kubeadm.go:883] updating cluster {Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:41:47.165019  358357 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 21:41:47.165090  358357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:47.213546  358357 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 21:41:47.213640  358357 ssh_runner.go:195] Run: which lz4
	I1205 21:41:47.219695  358357 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:41:47.224752  358357 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:41:47.224801  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1205 21:41:48.787144  358357 crio.go:462] duration metric: took 1.567500675s to copy over tarball
	I1205 21:41:48.787253  358357 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
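Note: because no preloaded images were found in the runtime, the preload tarball is copied to the guest and unpacked into /var with tar -I lz4, preserving extended attributes. A thin Go wrapper around that exact tar invocation, with an illustrative path:

package main

import (
	"fmt"
	"os/exec"
)

// extractPreload unpacks an lz4-compressed image preload into /var, matching
// the tar command shown in the log above.
func extractPreload(tarball string) error {
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract %s: %v (%s)", tarball, err, out)
	}
	return nil
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}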
	I1205 21:41:47.514192  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.514819  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.514860  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.514767  359359 retry.go:31] will retry after 467.274164ms: waiting for machine to come up
	I1205 21:41:47.983367  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.983985  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.984015  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.983919  359359 retry.go:31] will retry after 577.298405ms: waiting for machine to come up
	I1205 21:41:48.562668  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:48.563230  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:48.563278  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:48.563175  359359 retry.go:31] will retry after 707.838313ms: waiting for machine to come up
	I1205 21:41:49.273409  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:49.273943  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:49.273977  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:49.273863  359359 retry.go:31] will retry after 908.711328ms: waiting for machine to come up
	I1205 21:41:50.183875  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:50.184278  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:50.184310  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:50.184225  359359 retry.go:31] will retry after 941.803441ms: waiting for machine to come up
	I1205 21:41:51.127915  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:51.128486  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:51.128549  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:51.128467  359359 retry.go:31] will retry after 1.289932898s: waiting for machine to come up
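Note: the retry.go lines above wait for the libvirt domain to obtain a DHCP lease, sleeping a little longer after each failed lookup. A sketch of that jittered, growing backoff; the lookup callback and the growth factor are assumptions for illustration.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP retries a lookup with a jittered, growing delay, similar to the
// retry.go lines above. lookup is a hypothetical callback; in minikube it
// would ask libvirt for the DHCP lease matching the domain's MAC address.
func waitForIP(lookup func() (string, error), attempts int) (string, error) {
	delay := 200 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay / 2)))
		time.Sleep(delay + jitter)
		delay = delay * 3 / 2 // wait a bit longer before the next attempt
	}
	return "", errors.New("machine never reported an IP address")
}

func main() {
	_, err := waitForIP(func() (string, error) { return "", errors.New("no lease yet") }, 3)
	fmt.Println(err)
}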
	I1205 21:41:51.407970  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:51.408037  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:52.046717  357912 pod_ready.go:103] pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"False"
	I1205 21:41:54.367409  357912 pod_ready.go:93] pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.367441  357912 pod_ready.go:82] duration metric: took 6.32924141s for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.367457  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.373495  357912 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.373546  357912 pod_ready.go:82] duration metric: took 6.066723ms for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.373565  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.380982  357912 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.381010  357912 pod_ready.go:82] duration metric: took 7.434049ms for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.381024  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.387297  357912 pod_ready.go:93] pod "kube-proxy-b4ws4" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.387321  357912 pod_ready.go:82] duration metric: took 6.290388ms for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.387331  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.392902  357912 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.392931  357912 pod_ready.go:82] duration metric: took 5.593155ms for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.392942  357912 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:51.832182  358357 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.044870872s)
	I1205 21:41:51.832229  358357 crio.go:469] duration metric: took 3.045045829s to extract the tarball
	I1205 21:41:51.832241  358357 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 21:41:51.876863  358357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:51.916280  358357 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 21:41:51.916312  358357 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 21:41:51.916448  358357 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:51.916490  358357 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1205 21:41:51.916520  358357 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:51.916416  358357 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:51.916539  358357 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1205 21:41:51.916422  358357 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:51.916534  358357 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:51.916415  358357 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:51.918641  358357 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:51.918657  358357 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:51.918673  358357 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:51.918675  358357 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:51.918648  358357 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:51.918699  358357 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 21:41:51.918648  358357 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1205 21:41:51.918649  358357 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.084598  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.085487  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.085575  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.089387  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.097316  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.097466  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.143119  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1205 21:41:52.188847  358357 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1205 21:41:52.188903  358357 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.188964  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.249950  358357 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1205 21:41:52.249988  358357 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1205 21:41:52.250006  358357 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.250026  358357 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.250065  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.250070  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.250110  358357 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1205 21:41:52.250142  358357 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.250181  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.264329  358357 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1205 21:41:52.264458  358357 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.264384  358357 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1205 21:41:52.264539  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.264575  358357 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.264634  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.276286  358357 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1205 21:41:52.276339  358357 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 21:41:52.276369  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.276378  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.276383  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.276499  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.276544  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.277043  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.277127  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.383827  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.385512  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.385513  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:41:52.404747  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.413164  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.413203  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.413257  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.502227  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.551456  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:41:52.551634  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.551659  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.596670  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.596746  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.596677  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.649281  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1205 21:41:52.726027  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:41:52.726093  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1205 21:41:52.726149  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1205 21:41:52.726173  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1205 21:41:52.726266  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1205 21:41:52.726300  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1205 21:41:52.759125  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 21:41:52.856925  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:53.004246  358357 cache_images.go:92] duration metric: took 1.087915709s to LoadCachedImages
	W1205 21:41:53.004349  358357 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
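Note: each "needs transfer" line above means the image ID reported by podman image inspect did not match the hash the cache expects, so the stale tag is removed with crictl and should be reloaded from the local cache (which fails here because the cached image files are missing on disk). A sketch of the presence check, with the expected ID taken from the pause:3.2 line above:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// imageID asks the container runtime for the stored ID of an image, like the
// "sudo podman image inspect --format {{.Id}}" calls above.
func imageID(image string) (string, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

// needsTransfer reports whether the image on the node differs from the ID the
// cache expects, which is the condition that triggers the "needs transfer"
// lines in the log.
func needsTransfer(image, expectedID string) bool {
	id, err := imageID(image)
	return err != nil || id != expectedID
}

func main() {
	fmt.Println(needsTransfer("registry.k8s.io/pause:3.2",
		"80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c"))
}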
	I1205 21:41:53.004364  358357 kubeadm.go:934] updating node { 192.168.61.123 8443 v1.20.0 crio true true} ...
	I1205 21:41:53.004516  358357 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-601806 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:41:53.004596  358357 ssh_runner.go:195] Run: crio config
	I1205 21:41:53.053135  358357 cni.go:84] Creating CNI manager for ""
	I1205 21:41:53.053159  358357 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:53.053174  358357 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:41:53.053208  358357 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-601806 NodeName:old-k8s-version-601806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 21:41:53.053385  358357 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-601806"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:41:53.053465  358357 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1205 21:41:53.064225  358357 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:41:53.064320  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:41:53.074565  358357 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 21:41:53.091812  358357 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:41:53.111455  358357 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
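Note: the kubeadm/kubelet/kube-proxy YAML printed above is rendered from the kubeadm options struct and written to /var/tmp/minikube/kubeadm.yaml.new. A deliberately tiny text/template sketch of that kind of rendering; the opts struct and template here cover only a few of the fields minikube actually emits.

package main

import (
	"os"
	"text/template"
)

// opts is a deliberately small stand-in for minikube's kubeadm options.
type opts struct {
	AdvertiseAddress  string
	NodeName          string
	PodSubnet         string
	KubernetesVersion string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: 8443
nodeRegistration:
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	_ = t.Execute(os.Stdout, opts{
		AdvertiseAddress:  "192.168.61.123",
		NodeName:          "old-k8s-version-601806",
		PodSubnet:         "10.244.0.0/16",
		KubernetesVersion: "v1.20.0",
	})
}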
	I1205 21:41:53.131057  358357 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I1205 21:41:53.135026  358357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
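Note: the bash one-liner above updates /etc/hosts idempotently: grep -v drops any existing line for the host, echo appends a fresh "ip<TAB>host" entry, and the staged file is copied back into place with sudo. The same logic in plain Go, writing the file directly for simplicity:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry mirrors the bash one-liner in the log: drop any existing
// line for the host, append a fresh "ip<TAB>host" line, and write the result
// back. Writing /etc/hosts directly is a simplification; minikube stages the
// file in /tmp and copies it into place with sudo.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // old entry for this host, replace it
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHostsEntry("/etc/hosts", "192.168.61.123", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}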
	I1205 21:41:53.148476  358357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:53.289114  358357 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:53.309855  358357 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806 for IP: 192.168.61.123
	I1205 21:41:53.309886  358357 certs.go:194] generating shared ca certs ...
	I1205 21:41:53.309923  358357 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:53.310122  358357 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:41:53.310176  358357 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:41:53.310202  358357 certs.go:256] generating profile certs ...
	I1205 21:41:53.310390  358357 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/client.key
	I1205 21:41:53.310485  358357 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.key.a6e43dea
	I1205 21:41:53.310568  358357 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.key
	I1205 21:41:53.310814  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:41:53.310866  358357 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:41:53.310880  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:41:53.310912  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:41:53.310960  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:41:53.311000  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:41:53.311072  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:53.312161  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:41:53.353059  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:41:53.386512  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:41:53.423583  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:41:53.463250  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 21:41:53.494884  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 21:41:53.529876  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:41:53.579695  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 21:41:53.606144  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:41:53.631256  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:41:53.656184  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:41:53.680842  358357 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:41:53.700705  358357 ssh_runner.go:195] Run: openssl version
	I1205 21:41:53.707800  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:41:53.719776  358357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:41:53.724558  358357 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:41:53.724630  358357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:41:53.731088  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:41:53.742620  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:41:53.754961  358357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:53.759594  358357 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:53.759669  358357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:53.765536  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:41:53.776756  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:41:53.789117  358357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:41:53.793629  358357 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:41:53.793707  358357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:41:53.799394  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
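Note: each CA certificate above is hashed with openssl x509 -hash -noout and then symlinked as /etc/ssl/certs/<hash>.0 (for example b5213941.0 for minikubeCA.pem), so OpenSSL-style trust stores can locate it by subject hash. A sketch of that pairing:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of a certificate and
// symlinks it as <hash>.0 in the trust directory, which is what the
// "openssl x509 -hash -noout" + "ln -fs" pairs above accomplish.
func linkBySubjectHash(certPath, trustDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("%s/%s.0", trustDir, hash)
	_ = os.Remove(link) // replace a stale link if one exists
	return os.Symlink(certPath, link)
}

func main() {
	err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
	fmt.Println(err)
}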
	I1205 21:41:53.810660  358357 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:41:53.815344  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:41:53.821418  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:41:53.827800  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:41:53.834376  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:41:53.840645  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:41:53.847470  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
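Note: the -checkend 86400 probes above ask whether each control-plane certificate expires within the next 24 hours (openssl exits non-zero if it does). The same check in Go with crypto/x509, using one of the paths above as an example:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// the question "openssl x509 -noout -checkend 86400" answers above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}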
	I1205 21:41:53.854401  358357 kubeadm.go:392] StartCluster: {Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:41:53.854504  358357 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:41:53.854569  358357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:53.893993  358357 cri.go:89] found id: ""
	I1205 21:41:53.894081  358357 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:41:53.904808  358357 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:41:53.904829  358357 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:41:53.904876  358357 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:41:53.915573  358357 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:41:53.916624  358357 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-601806" does not appear in /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:41:53.917310  358357 kubeconfig.go:62] /home/jenkins/minikube-integration/20053-293485/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-601806" cluster setting kubeconfig missing "old-k8s-version-601806" context setting]
	I1205 21:41:53.918211  358357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:53.978448  358357 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:41:53.989629  358357 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.123
	I1205 21:41:53.989674  358357 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:41:53.989707  358357 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:41:53.989791  358357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:54.027722  358357 cri.go:89] found id: ""
	I1205 21:41:54.027816  358357 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:41:54.045095  358357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:41:54.058119  358357 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:41:54.058145  358357 kubeadm.go:157] found existing configuration files:
	
	I1205 21:41:54.058211  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:41:54.070466  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:41:54.070563  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:41:54.081555  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:41:54.093332  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:41:54.093415  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:41:54.103877  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:41:54.114047  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:41:54.114117  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:41:54.126566  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:41:54.138673  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:41:54.138767  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:41:54.149449  358357 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:41:54.162818  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:54.294483  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:54.983905  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:55.218496  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:55.340478  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:55.440382  358357 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:41:55.440495  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:52.419705  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:52.420193  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:52.420230  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:52.420115  359359 retry.go:31] will retry after 1.684643705s: waiting for machine to come up
	I1205 21:41:54.106187  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:54.106714  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:54.106754  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:54.106660  359359 retry.go:31] will retry after 1.531754159s: waiting for machine to come up
	I1205 21:41:55.639991  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:55.640467  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:55.640503  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:55.640401  359359 retry.go:31] will retry after 2.722460669s: waiting for machine to come up
	I1205 21:41:56.409347  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:56.409397  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:56.399969  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:41:58.900439  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:41:55.941513  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:56.440634  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:56.941451  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:57.440602  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:57.940778  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:58.441396  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:58.941148  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:59.441320  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:59.941573  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:00.441005  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
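Note: after the kubeadm init phases, the harness polls "sudo pgrep -xnf kube-apiserver.*minikube.*" roughly every half second until the apiserver process appears. A sketch of that wait loop; the interval and timeout are assumptions for illustration.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process that
// mentions "minikube" shows up, mirroring the repeated pgrep lines above.
func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil // pgrep exits 0 once a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	fmt.Println(waitForAPIServerProcess(60 * time.Second))
}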
	I1205 21:41:58.366356  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:58.366849  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:58.366874  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:58.366805  359359 retry.go:31] will retry after 2.312099452s: waiting for machine to come up
	I1205 21:42:00.680417  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:00.680953  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:42:00.680977  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:42:00.680904  359359 retry.go:31] will retry after 3.145457312s: waiting for machine to come up
	I1205 21:42:01.410313  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:42:01.410382  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.204308  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:03.204353  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:03.204374  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.246513  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:03.246569  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:03.406787  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.411529  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:03.411571  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:03.907108  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.911621  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:03.911669  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:04.407111  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:04.416185  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:04.416225  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:04.906151  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:04.913432  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 200:
	ok
	I1205 21:42:04.923422  357831 api_server.go:141] control plane version: v1.31.2
	I1205 21:42:04.923466  357831 api_server.go:131] duration metric: took 40.017479306s to wait for apiserver health ...
	I1205 21:42:04.923479  357831 cni.go:84] Creating CNI manager for ""
	I1205 21:42:04.923488  357831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:42:04.925861  357831 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:42:01.399834  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:03.399888  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:00.941505  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:01.441014  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:01.940938  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:02.440702  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:02.940749  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:03.441519  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:03.941098  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:04.440754  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:04.941260  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:05.441179  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:03.830452  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.830997  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has current primary IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.831031  357296 main.go:141] libmachine: (embed-certs-425614) Found IP for machine: 192.168.72.8
	I1205 21:42:03.831046  357296 main.go:141] libmachine: (embed-certs-425614) Reserving static IP address...
	I1205 21:42:03.831505  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "embed-certs-425614", mac: "52:54:00:d8:bb:db", ip: "192.168.72.8"} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.831534  357296 main.go:141] libmachine: (embed-certs-425614) Reserved static IP address: 192.168.72.8
	I1205 21:42:03.831552  357296 main.go:141] libmachine: (embed-certs-425614) DBG | skip adding static IP to network mk-embed-certs-425614 - found existing host DHCP lease matching {name: "embed-certs-425614", mac: "52:54:00:d8:bb:db", ip: "192.168.72.8"}
	I1205 21:42:03.831566  357296 main.go:141] libmachine: (embed-certs-425614) Waiting for SSH to be available...
	I1205 21:42:03.831574  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Getting to WaitForSSH function...
	I1205 21:42:03.833969  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.834352  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.834388  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.834532  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Using SSH client type: external
	I1205 21:42:03.834550  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa (-rw-------)
	I1205 21:42:03.834569  357296 main.go:141] libmachine: (embed-certs-425614) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.8 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:42:03.834587  357296 main.go:141] libmachine: (embed-certs-425614) DBG | About to run SSH command:
	I1205 21:42:03.834598  357296 main.go:141] libmachine: (embed-certs-425614) DBG | exit 0
	I1205 21:42:03.962943  357296 main.go:141] libmachine: (embed-certs-425614) DBG | SSH cmd err, output: <nil>: 
	I1205 21:42:03.963457  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetConfigRaw
	I1205 21:42:03.964327  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:03.967583  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.968035  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.968069  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.968471  357296 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/config.json ...
	I1205 21:42:03.968788  357296 machine.go:93] provisionDockerMachine start ...
	I1205 21:42:03.968820  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:03.969139  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:03.972165  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.972515  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.972545  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.972636  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:03.972845  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:03.973079  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:03.973321  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:03.973541  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:03.973743  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:03.973756  357296 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:42:04.086658  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:42:04.086701  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:42:04.087004  357296 buildroot.go:166] provisioning hostname "embed-certs-425614"
	I1205 21:42:04.087040  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:42:04.087297  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.090622  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.091119  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.091157  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.091374  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.091647  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.091854  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.092065  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.092302  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:04.092559  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:04.092590  357296 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-425614 && echo "embed-certs-425614" | sudo tee /etc/hostname
	I1205 21:42:04.222630  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-425614
	
	I1205 21:42:04.222668  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.225969  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.226469  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.226507  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.226742  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.226966  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.227230  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.227436  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.227672  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:04.227862  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:04.227878  357296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-425614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-425614/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-425614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:42:04.351706  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:42:04.351775  357296 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:42:04.351853  357296 buildroot.go:174] setting up certificates
	I1205 21:42:04.351869  357296 provision.go:84] configureAuth start
	I1205 21:42:04.351894  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:42:04.352249  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:04.355753  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.356188  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.356232  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.356460  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.359365  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.359864  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.359911  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.360105  357296 provision.go:143] copyHostCerts
	I1205 21:42:04.360181  357296 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:42:04.360209  357296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:42:04.360287  357296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:42:04.360424  357296 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:42:04.360437  357296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:42:04.360470  357296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:42:04.360554  357296 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:42:04.360564  357296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:42:04.360592  357296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:42:04.360668  357296 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.embed-certs-425614 san=[127.0.0.1 192.168.72.8 embed-certs-425614 localhost minikube]
	I1205 21:42:04.632816  357296 provision.go:177] copyRemoteCerts
	I1205 21:42:04.632901  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:42:04.632942  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.636150  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.636618  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.636654  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.636828  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.637044  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.637271  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.637464  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:04.724883  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:42:04.754994  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 21:42:04.783996  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 21:42:04.810963  357296 provision.go:87] duration metric: took 459.073427ms to configureAuth
	I1205 21:42:04.811003  357296 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:42:04.811279  357296 config.go:182] Loaded profile config "embed-certs-425614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:42:04.811384  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.814420  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.814863  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.814996  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.815102  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.815346  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.815586  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.815767  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.815972  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:04.816238  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:04.816287  357296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:42:05.064456  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:42:05.064490  357296 machine.go:96] duration metric: took 1.095680989s to provisionDockerMachine
	I1205 21:42:05.064509  357296 start.go:293] postStartSetup for "embed-certs-425614" (driver="kvm2")
	I1205 21:42:05.064521  357296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:42:05.064560  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.064956  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:42:05.064997  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.068175  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.068618  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.068657  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.068994  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.069241  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.069449  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.069602  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:05.157732  357296 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:42:05.162706  357296 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:42:05.162752  357296 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:42:05.162845  357296 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:42:05.162920  357296 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:42:05.163016  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:42:05.179784  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:42:05.207166  357296 start.go:296] duration metric: took 142.636794ms for postStartSetup
	I1205 21:42:05.207223  357296 fix.go:56] duration metric: took 19.632279138s for fixHost
	I1205 21:42:05.207253  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.210923  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.211426  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.211463  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.211657  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.211896  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.212114  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.212282  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.212467  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:05.212723  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:05.212739  357296 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:42:05.327710  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434925.280377877
	
	I1205 21:42:05.327737  357296 fix.go:216] guest clock: 1733434925.280377877
	I1205 21:42:05.327749  357296 fix.go:229] Guest: 2024-12-05 21:42:05.280377877 +0000 UTC Remote: 2024-12-05 21:42:05.207229035 +0000 UTC m=+357.921750384 (delta=73.148842ms)
	I1205 21:42:05.327795  357296 fix.go:200] guest clock delta is within tolerance: 73.148842ms
	I1205 21:42:05.327803  357296 start.go:83] releasing machines lock for "embed-certs-425614", held for 19.752893913s
	I1205 21:42:05.327826  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.328184  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:05.331359  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.331686  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.331722  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.331953  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.332650  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.332870  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.332999  357296 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:42:05.333104  357296 ssh_runner.go:195] Run: cat /version.json
	I1205 21:42:05.333112  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.333137  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.336283  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.336576  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.336749  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.336784  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.336987  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.337074  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.337123  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.337206  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.337228  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.337457  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.337475  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.337669  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.337668  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:05.337806  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:05.443865  357296 ssh_runner.go:195] Run: systemctl --version
	I1205 21:42:05.450866  357296 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:42:05.596799  357296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:42:05.603700  357296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:42:05.603781  357296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:42:05.619488  357296 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:42:05.619521  357296 start.go:495] detecting cgroup driver to use...
	I1205 21:42:05.619622  357296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:42:05.639018  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:42:05.655878  357296 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:42:05.655942  357296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:42:05.671883  357296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:42:05.691645  357296 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:42:05.804200  357296 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:42:05.997573  357296 docker.go:233] disabling docker service ...
	I1205 21:42:05.997702  357296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:42:06.014153  357296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:42:06.031828  357296 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:42:06.179266  357296 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:42:06.318806  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:42:06.332681  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:42:06.353528  357296 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:42:06.353615  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.365381  357296 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:42:06.365472  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.377020  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.389325  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.402399  357296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:42:06.414106  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.425792  357296 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.445787  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.457203  357296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:42:06.467275  357296 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:42:06.467356  357296 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:42:06.481056  357296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:42:06.492188  357296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:42:06.634433  357296 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:42:06.727916  357296 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:42:06.728007  357296 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:42:06.732581  357296 start.go:563] Will wait 60s for crictl version
	I1205 21:42:06.732645  357296 ssh_runner.go:195] Run: which crictl
	I1205 21:42:06.736545  357296 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:42:06.775945  357296 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:42:06.776069  357296 ssh_runner.go:195] Run: crio --version
	I1205 21:42:06.808556  357296 ssh_runner.go:195] Run: crio --version
	I1205 21:42:06.844968  357296 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 21:42:06.846380  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:06.849873  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:06.850366  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:06.850410  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:06.850664  357296 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 21:42:06.855593  357296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:42:06.869323  357296 kubeadm.go:883] updating cluster {Name:embed-certs-425614 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-425614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:42:06.869513  357296 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:42:06.869598  357296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:42:06.906593  357296 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 21:42:06.906667  357296 ssh_runner.go:195] Run: which lz4
	I1205 21:42:06.910838  357296 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:42:06.915077  357296 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:42:06.915129  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 21:42:04.927426  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:42:04.941208  357831 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 21:42:04.968170  357831 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:42:04.998847  357831 system_pods.go:59] 8 kube-system pods found
	I1205 21:42:04.998907  357831 system_pods.go:61] "coredns-7c65d6cfc9-k89d7" [8a72b3cc-863a-4a51-8592-f090d7de58cb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 21:42:04.998920  357831 system_pods.go:61] "etcd-no-preload-500648" [cafdfe7b-d749-4f0b-9ce1-4045e0dba5e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 21:42:04.998933  357831 system_pods.go:61] "kube-apiserver-no-preload-500648" [882b20c9-56f1-41e7-80a2-7781b05f021f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 21:42:04.998942  357831 system_pods.go:61] "kube-controller-manager-no-preload-500648" [d8746bd6-a884-4497-be4a-f88b4776cc19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 21:42:04.998952  357831 system_pods.go:61] "kube-proxy-tbcmd" [ef507fa3-fe13-47b2-909e-15a4d0544716] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 21:42:04.998958  357831 system_pods.go:61] "kube-scheduler-no-preload-500648" [6713250e-00ac-48db-ad2f-39b1867c00f3] Running
	I1205 21:42:04.998968  357831 system_pods.go:61] "metrics-server-6867b74b74-7xm6l" [0d8a7353-2449-4143-962e-fc837e598f56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:42:04.998979  357831 system_pods.go:61] "storage-provisioner" [a0d29dee-08f6-43f8-9d02-6bda96fe0c85] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 21:42:04.998988  357831 system_pods.go:74] duration metric: took 30.786075ms to wait for pod list to return data ...
	I1205 21:42:04.999002  357831 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:42:05.005560  357831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:42:05.005611  357831 node_conditions.go:123] node cpu capacity is 2
	I1205 21:42:05.005630  357831 node_conditions.go:105] duration metric: took 6.621222ms to run NodePressure ...
	I1205 21:42:05.005659  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:05.417060  357831 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 21:42:05.423873  357831 kubeadm.go:739] kubelet initialised
	I1205 21:42:05.423903  357831 kubeadm.go:740] duration metric: took 6.807257ms waiting for restarted kubelet to initialise ...
	I1205 21:42:05.423914  357831 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:42:05.429965  357831 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:07.440042  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:05.400253  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:07.401405  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:09.901336  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:05.941258  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:06.440780  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:06.940790  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:07.441097  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:07.941334  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:08.440670  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:08.941230  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:09.441317  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:09.941664  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:10.440620  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:08.325757  357296 crio.go:462] duration metric: took 1.41497545s to copy over tarball
	I1205 21:42:08.325937  357296 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 21:42:10.566636  357296 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.240649211s)
	I1205 21:42:10.566679  357296 crio.go:469] duration metric: took 2.240881092s to extract the tarball
	I1205 21:42:10.566690  357296 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 21:42:10.604048  357296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:42:10.648218  357296 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 21:42:10.648245  357296 cache_images.go:84] Images are preloaded, skipping loading
	I1205 21:42:10.648254  357296 kubeadm.go:934] updating node { 192.168.72.8 8443 v1.31.2 crio true true} ...
	I1205 21:42:10.648380  357296 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-425614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-425614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:42:10.648472  357296 ssh_runner.go:195] Run: crio config
	I1205 21:42:10.694426  357296 cni.go:84] Creating CNI manager for ""
	I1205 21:42:10.694457  357296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:42:10.694470  357296 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:42:10.694494  357296 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.8 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-425614 NodeName:embed-certs-425614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.8 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:42:10.694626  357296 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.8
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-425614"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.8"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.8"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
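	For context on the kubeadm/kubelet/kube-proxy config logged above: a config like this can be rendered from Go with text/template. The sketch below is illustrative only; the template body and field names are assumptions for this report and are not minikube's actual bsutil templates.

	package main

	import (
		"os"
		"text/template"
	)

	// kubeadmParams holds the values substituted into the illustrative template below.
	type kubeadmParams struct {
		AdvertiseAddress  string
		BindPort          int
		NodeName          string
		PodSubnet         string
		ServiceSubnet     string
		KubernetesVersion string
	}

	// kubeadmTmpl is a reduced, hypothetical version of the config shown in the log.
	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		p := kubeadmParams{
			AdvertiseAddress:  "192.168.72.8",
			BindPort:          8443,
			NodeName:          "embed-certs-425614",
			PodSubnet:         "10.244.0.0/16",
			ServiceSubnet:     "10.96.0.0/12",
			KubernetesVersion: "v1.31.2",
		}
		t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}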
	I1205 21:42:10.694700  357296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:42:10.707043  357296 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:42:10.707116  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:42:10.717088  357296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1205 21:42:10.735095  357296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:42:10.753994  357296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I1205 21:42:10.771832  357296 ssh_runner.go:195] Run: grep 192.168.72.8	control-plane.minikube.internal$ /etc/hosts
	I1205 21:42:10.776949  357296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.8	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:42:10.789761  357296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:42:10.937235  357296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:42:10.959030  357296 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614 for IP: 192.168.72.8
	I1205 21:42:10.959073  357296 certs.go:194] generating shared ca certs ...
	I1205 21:42:10.959107  357296 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:42:10.959307  357296 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:42:10.959366  357296 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:42:10.959378  357296 certs.go:256] generating profile certs ...
	I1205 21:42:10.959508  357296 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/client.key
	I1205 21:42:10.959581  357296 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/apiserver.key.a8dcad40
	I1205 21:42:10.959631  357296 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/proxy-client.key
	I1205 21:42:10.959747  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:42:10.959807  357296 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:42:10.959822  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:42:10.959855  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:42:10.959889  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:42:10.959924  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:42:10.959977  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:42:10.960886  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:42:10.999249  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:42:11.035379  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:42:11.069796  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:42:11.103144  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1205 21:42:11.144531  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 21:42:11.183637  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:42:11.208780  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 21:42:11.237378  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:42:11.262182  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:42:11.287003  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:42:11.311375  357296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:42:11.330529  357296 ssh_runner.go:195] Run: openssl version
	I1205 21:42:11.336346  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:42:11.347306  357296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:42:11.352107  357296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:42:11.352179  357296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:42:11.357939  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:42:11.369013  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:42:11.380244  357296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:42:11.384671  357296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:42:11.384747  357296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:42:11.390330  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:42:11.402029  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:42:11.413047  357296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:42:11.417617  357296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:42:11.417707  357296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:42:11.423562  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:42:11.434978  357296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:42:11.439887  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:42:11.446653  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:42:11.453390  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:42:11.460104  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:42:11.466281  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:42:11.472205  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
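	The openssl "-checkend 86400" runs above verify that each control-plane certificate remains valid for at least the next 24 hours. A minimal Go equivalent is sketched below; the certificate path in main is a placeholder taken from the log, and this is not minikube's own cert-check code.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the certificate at path expires within duration d,
	// mirroring `openssl x509 -checkend <seconds>`.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Path copied from the log above; adjust for the certificate being checked.
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if soon {
			fmt.Println("certificate expires within 24h")
		}
	}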
	I1205 21:42:11.478395  357296 kubeadm.go:392] StartCluster: {Name:embed-certs-425614 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-425614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:42:11.478534  357296 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:42:11.478604  357296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:42:11.519447  357296 cri.go:89] found id: ""
	I1205 21:42:11.519540  357296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:42:11.530882  357296 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:42:11.530915  357296 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:42:11.530967  357296 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:42:11.541349  357296 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:42:11.542457  357296 kubeconfig.go:125] found "embed-certs-425614" server: "https://192.168.72.8:8443"
	I1205 21:42:11.544588  357296 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:42:11.555107  357296 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.8
	I1205 21:42:11.555149  357296 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:42:11.555164  357296 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:42:11.555214  357296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:42:11.592787  357296 cri.go:89] found id: ""
	I1205 21:42:11.592880  357296 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:42:11.609965  357296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:42:11.623705  357296 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:42:11.623730  357296 kubeadm.go:157] found existing configuration files:
	
	I1205 21:42:11.623784  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:42:11.634267  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:42:11.634344  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:42:11.645579  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:42:11.655845  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:42:11.655932  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:42:11.667367  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:42:11.677450  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:42:11.677541  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:42:11.688484  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:42:11.698581  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:42:11.698665  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:42:11.709332  357296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:42:11.724079  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:11.850526  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:09.436733  357831 pod_ready.go:93] pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:09.436771  357831 pod_ready.go:82] duration metric: took 4.006772842s for pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:09.436787  357831 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:09.442948  357831 pod_ready.go:93] pod "etcd-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:09.442975  357831 pod_ready.go:82] duration metric: took 6.180027ms for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:09.442985  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:11.454117  357831 pod_ready.go:103] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:12.400229  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:14.401251  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:10.940676  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:11.441446  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:11.941429  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:12.441431  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:12.940947  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:13.441378  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:13.940664  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.441436  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.941528  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:15.441617  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:12.676884  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:13.049350  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:13.104083  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:13.151758  357296 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:42:13.151871  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:13.653003  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.152424  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.241811  357296 api_server.go:72] duration metric: took 1.09005484s to wait for apiserver process to appear ...
	I1205 21:42:14.241841  357296 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:42:14.241865  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:14.242492  357296 api_server.go:269] stopped: https://192.168.72.8:8443/healthz: Get "https://192.168.72.8:8443/healthz": dial tcp 192.168.72.8:8443: connect: connection refused
	I1205 21:42:14.742031  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:16.675226  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:16.675262  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:16.675277  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:16.689093  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:16.689130  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:16.742350  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:16.780046  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:16.780094  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:17.242752  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:17.248221  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:17.248293  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:13.807623  357831 pod_ready.go:103] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:13.955657  357831 pod_ready.go:93] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:13.955696  357831 pod_ready.go:82] duration metric: took 4.512701293s for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:13.955710  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:15.964035  357831 pod_ready.go:103] pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:17.464364  357831 pod_ready.go:93] pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:17.464400  357831 pod_ready.go:82] duration metric: took 3.508681036s for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.464416  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tbcmd" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.471083  357831 pod_ready.go:93] pod "kube-proxy-tbcmd" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:17.471112  357831 pod_ready.go:82] duration metric: took 6.68764ms for pod "kube-proxy-tbcmd" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.471127  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.477759  357831 pod_ready.go:93] pod "kube-scheduler-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:17.477792  357831 pod_ready.go:82] duration metric: took 6.655537ms for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.477805  357831 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.742750  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:17.750907  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:17.750945  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:18.242675  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:18.247883  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:18.247913  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:18.742494  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:18.748060  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:18.748095  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:19.242753  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:19.247456  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:19.247493  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:19.742029  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:19.747799  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:19.747830  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:20.242351  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:20.248627  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 200:
	ok
	I1205 21:42:20.257222  357296 api_server.go:141] control plane version: v1.31.2
	I1205 21:42:20.257260  357296 api_server.go:131] duration metric: took 6.015411765s to wait for apiserver health ...
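	The healthz progression above (403 while anonymous access is rejected, 500 while post-start hooks such as rbac/bootstrap-roles are still pending, then 200) is the normal apiserver startup sequence. A minimal Go polling sketch is shown below; it is illustrative only, not minikube's api_server.go, and it skips TLS verification because it does not load the cluster CA.

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // apiserver reports healthy
				}
				// 403 (anonymous user) and 500 (post-start hooks pending) are expected
				// while the control plane is still coming up; keep polling.
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", url)
	}

	func main() {
		if err := waitForHealthz("https://192.168.72.8:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}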
	I1205 21:42:20.257273  357296 cni.go:84] Creating CNI manager for ""
	I1205 21:42:20.257281  357296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:42:20.259099  357296 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:42:16.899464  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:19.400536  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:15.940894  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:16.441373  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:16.940607  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:17.441640  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:17.941424  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:18.441485  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:18.941548  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:19.441297  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:19.940718  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:20.441175  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:20.260397  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:42:20.271889  357296 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
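	The 1-k8s.conflist copied above configures the bridge CNI plugin recommended for the kvm2 driver with cri-o. The sketch below writes an approximate bridge conflist; the JSON contents are an assumption for illustration and are not the exact file minikube generates.

	package main

	import "os"

	// bridgeConflist is an illustrative bridge CNI configuration using the pod CIDR
	// from the log (10.244.0.0/16); field values are approximations.
	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}`

	func main() {
		// Writing this path requires root on the target node; path matches the log above.
		if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}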
	I1205 21:42:20.291125  357296 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:42:20.300276  357296 system_pods.go:59] 8 kube-system pods found
	I1205 21:42:20.300328  357296 system_pods.go:61] "coredns-7c65d6cfc9-kjcf8" [7a73d409-50b8-4e9c-a84d-bb497c6f068c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 21:42:20.300337  357296 system_pods.go:61] "etcd-embed-certs-425614" [39067a54-9f4e-4ce5-b48f-0d442a332902] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 21:42:20.300346  357296 system_pods.go:61] "kube-apiserver-embed-certs-425614" [cc3f918c-a257-4135-a5dd-af78e60bbf90] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 21:42:20.300352  357296 system_pods.go:61] "kube-controller-manager-embed-certs-425614" [bbcf99e6-54f9-44f5-a484-26997a4e5941] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 21:42:20.300359  357296 system_pods.go:61] "kube-proxy-jflgx" [77b6325b-0db8-41de-8c7e-6111d155704d] Running
	I1205 21:42:20.300366  357296 system_pods.go:61] "kube-scheduler-embed-certs-425614" [0615aea3-8e2c-4329-b89f-02c7fe9f6f7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 21:42:20.300377  357296 system_pods.go:61] "metrics-server-6867b74b74-dggmv" [c53aecb9-98a5-481a-84f3-96fd18815e14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:42:20.300380  357296 system_pods.go:61] "storage-provisioner" [d43b05e9-7ab8-4326-93b4-177aeb5ba02e] Running
	I1205 21:42:20.300388  357296 system_pods.go:74] duration metric: took 9.233104ms to wait for pod list to return data ...
	I1205 21:42:20.300396  357296 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:42:20.304455  357296 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:42:20.304484  357296 node_conditions.go:123] node cpu capacity is 2
	I1205 21:42:20.304498  357296 node_conditions.go:105] duration metric: took 4.096074ms to run NodePressure ...
	I1205 21:42:20.304519  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:20.571968  357296 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 21:42:20.577704  357296 kubeadm.go:739] kubelet initialised
	I1205 21:42:20.577730  357296 kubeadm.go:740] duration metric: took 5.727858ms waiting for restarted kubelet to initialise ...
	I1205 21:42:20.577741  357296 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:42:20.583872  357296 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.589835  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.589866  357296 pod_ready.go:82] duration metric: took 5.957984ms for pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.589878  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.589886  357296 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.596004  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "etcd-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.596038  357296 pod_ready.go:82] duration metric: took 6.144722ms for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.596049  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "etcd-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.596056  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.601686  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.601720  357296 pod_ready.go:82] duration metric: took 5.653369ms for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.601734  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.601742  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.694482  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.694515  357296 pod_ready.go:82] duration metric: took 92.763219ms for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.694524  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.694531  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jflgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:21.094672  357296 pod_ready.go:93] pod "kube-proxy-jflgx" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:21.094703  357296 pod_ready.go:82] duration metric: took 400.158324ms for pod "kube-proxy-jflgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:21.094714  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
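Back in the embed-certs run, pod_ready.go then waits up to 4m0s per system-critical pod and skips a pod's wait whenever its node has not reported Ready (the WaitExtra messages above), before retrying on the scheduler and metrics-server pods. A rough client-go equivalent of the per-pod readiness check, assuming a kubeconfig path for illustration rather than minikube's own client setup:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Assumed kubeconfig location for this sketch.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll a single kube-system pod for up to 4 minutes, as the log does.
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
			"kube-scheduler-embed-certs-425614", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}
```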
	I1205 21:42:19.485441  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:21.984845  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:21.900464  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:24.399362  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:20.941042  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:21.440840  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:21.941291  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:22.441298  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:22.941140  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:23.441157  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:23.940711  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:24.441126  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:24.941194  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:25.441239  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:23.101967  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:25.103066  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:27.103106  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:23.985150  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:25.985406  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:26.399494  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:28.399742  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:25.940650  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:26.440892  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:26.940734  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:27.441439  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:27.941025  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:28.441662  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:28.941200  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:29.440850  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:29.941090  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:30.441496  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:29.106277  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.101137  357296 pod_ready.go:93] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:30.101170  357296 pod_ready.go:82] duration metric: took 9.00644797s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:30.101199  357296 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:32.107886  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:27.985689  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.484153  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:32.484800  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.399854  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:32.400508  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:34.901319  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.941631  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:31.441522  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:31.940961  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:32.441547  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:32.940644  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:33.440711  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:33.941591  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:34.441457  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:34.941255  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:35.441478  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:34.108645  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:36.608124  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:34.984686  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:36.984823  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:37.400319  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:39.900110  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:35.941404  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:36.441453  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:36.941276  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:37.440624  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:37.941248  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:38.440773  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:38.940852  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:39.440975  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:39.940613  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:40.441409  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:38.608300  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:40.608878  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:39.483667  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:41.483884  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:41.900531  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:43.900867  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:40.941065  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:41.440940  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:41.941340  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:42.441333  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:42.941444  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:43.440657  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:43.941351  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:44.441039  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:44.941628  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:45.440942  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:43.107571  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:45.107803  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:47.108118  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:43.484581  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:45.485934  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:46.400053  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:48.902975  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:45.941474  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:46.441502  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:46.941071  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:47.441501  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:47.941353  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:48.441574  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:48.940650  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:49.441259  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:49.941249  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:50.441304  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:49.608563  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:52.108228  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:47.992612  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:50.484515  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:52.484930  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:51.399905  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:53.400794  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:50.941158  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:51.440651  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:51.941062  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:52.441434  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:52.940665  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:53.441387  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:53.940784  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:54.441549  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:54.941564  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:55.441202  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:42:55.441294  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:42:55.475973  358357 cri.go:89] found id: ""
	I1205 21:42:55.476011  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.476023  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:42:55.476032  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:42:55.476106  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:42:55.511119  358357 cri.go:89] found id: ""
	I1205 21:42:55.511149  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.511158  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:42:55.511164  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:42:55.511238  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:42:55.544659  358357 cri.go:89] found id: ""
	I1205 21:42:55.544700  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.544716  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:42:55.544726  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:42:55.544803  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:42:54.608219  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:57.107753  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:54.986439  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:57.484521  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:55.900101  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:58.399595  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:55.579789  358357 cri.go:89] found id: ""
	I1205 21:42:55.579826  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.579836  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:42:55.579843  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:42:55.579912  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:42:55.615309  358357 cri.go:89] found id: ""
	I1205 21:42:55.615348  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.615363  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:42:55.615371  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:42:55.615444  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:42:55.649520  358357 cri.go:89] found id: ""
	I1205 21:42:55.649551  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.649562  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:42:55.649569  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:42:55.649647  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:42:55.688086  358357 cri.go:89] found id: ""
	I1205 21:42:55.688120  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.688132  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:42:55.688139  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:42:55.688207  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:42:55.722901  358357 cri.go:89] found id: ""
	I1205 21:42:55.722932  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.722943  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:42:55.722955  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:42:55.722968  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:42:55.775746  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:42:55.775792  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:42:55.790317  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:42:55.790370  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:42:55.916541  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:42:55.916593  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:42:55.916608  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:42:55.991284  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:42:55.991350  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
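Once the pgrep wait gives up, cri.go in the old-k8s-version run falls back to `crictl ps -a --quiet --name=<component>` for each control-plane component and finds no containers at all, then gathers kubelet, dmesg, CRI-O, and container-status logs; `kubectl describe nodes` fails because nothing is listening on localhost:8443. A hedged sketch of that per-component crictl query, again assuming local execution rather than the ssh_runner:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs returns the container IDs (one per line from --quiet) whose
// name matches the given component; an empty slice means no containers exist.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--name="+component).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(strings.TrimSpace(string(out))), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Printf("%s: crictl failed: %v\n", c, err)
			continue
		}
		// The log above reports `found id: ""` and "0 containers" for every
		// component, which is why the describe-nodes call is refused.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
```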
	I1205 21:42:58.534040  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:58.551747  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:42:58.551856  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:42:58.602423  358357 cri.go:89] found id: ""
	I1205 21:42:58.602465  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.602478  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:42:58.602493  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:42:58.602570  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:42:58.658410  358357 cri.go:89] found id: ""
	I1205 21:42:58.658442  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.658454  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:42:58.658462  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:42:58.658544  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:42:58.696967  358357 cri.go:89] found id: ""
	I1205 21:42:58.697005  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.697024  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:42:58.697032  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:42:58.697092  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:42:58.740924  358357 cri.go:89] found id: ""
	I1205 21:42:58.740958  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.740969  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:42:58.740977  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:42:58.741049  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:42:58.775613  358357 cri.go:89] found id: ""
	I1205 21:42:58.775656  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.775669  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:42:58.775677  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:42:58.775753  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:42:58.810565  358357 cri.go:89] found id: ""
	I1205 21:42:58.810606  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.810621  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:42:58.810630  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:42:58.810704  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:42:58.844616  358357 cri.go:89] found id: ""
	I1205 21:42:58.844649  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.844658  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:42:58.844664  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:42:58.844720  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:42:58.889234  358357 cri.go:89] found id: ""
	I1205 21:42:58.889270  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.889282  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:42:58.889297  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:42:58.889313  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:42:58.964712  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:42:58.964756  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:42:59.005004  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:42:59.005036  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:42:59.057585  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:42:59.057635  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:42:59.072115  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:42:59.072151  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:42:59.145425  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:42:59.108534  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:01.607610  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:59.485366  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:01.986049  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:00.400127  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:02.400257  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:04.899587  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:01.646046  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:01.659425  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:01.659517  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:01.695527  358357 cri.go:89] found id: ""
	I1205 21:43:01.695559  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.695568  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:01.695574  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:01.695636  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:01.731808  358357 cri.go:89] found id: ""
	I1205 21:43:01.731842  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.731854  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:01.731861  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:01.731937  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:01.765738  358357 cri.go:89] found id: ""
	I1205 21:43:01.765771  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.765789  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:01.765796  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:01.765859  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:01.801611  358357 cri.go:89] found id: ""
	I1205 21:43:01.801647  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.801657  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:01.801665  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:01.801732  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:01.839276  358357 cri.go:89] found id: ""
	I1205 21:43:01.839308  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.839317  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:01.839323  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:01.839385  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:01.875227  358357 cri.go:89] found id: ""
	I1205 21:43:01.875266  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.875279  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:01.875288  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:01.875350  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:01.913182  358357 cri.go:89] found id: ""
	I1205 21:43:01.913225  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.913238  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:01.913247  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:01.913312  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:01.952638  358357 cri.go:89] found id: ""
	I1205 21:43:01.952677  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.952701  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:01.952716  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:01.952734  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:01.998360  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:01.998401  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:02.049534  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:02.049588  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:02.064358  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:02.064389  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:02.136029  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:02.136060  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:02.136077  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:04.719271  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:04.735387  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:04.735490  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:04.769540  358357 cri.go:89] found id: ""
	I1205 21:43:04.769578  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.769590  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:04.769598  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:04.769679  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:04.803402  358357 cri.go:89] found id: ""
	I1205 21:43:04.803444  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.803460  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:04.803470  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:04.803538  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:04.839694  358357 cri.go:89] found id: ""
	I1205 21:43:04.839725  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.839739  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:04.839748  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:04.839820  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:04.874952  358357 cri.go:89] found id: ""
	I1205 21:43:04.874982  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.875001  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:04.875022  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:04.875086  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:04.910338  358357 cri.go:89] found id: ""
	I1205 21:43:04.910378  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.910390  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:04.910399  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:04.910464  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:04.946196  358357 cri.go:89] found id: ""
	I1205 21:43:04.946233  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.946245  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:04.946252  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:04.946319  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:04.982119  358357 cri.go:89] found id: ""
	I1205 21:43:04.982150  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.982164  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:04.982173  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:04.982245  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:05.018296  358357 cri.go:89] found id: ""
	I1205 21:43:05.018334  358357 logs.go:282] 0 containers: []
	W1205 21:43:05.018346  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:05.018359  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:05.018376  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:05.070674  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:05.070729  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:05.085822  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:05.085858  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:05.163359  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:05.163385  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:05.163400  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:05.243524  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:05.243581  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:03.608201  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:06.108243  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:03.992084  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:06.487041  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:06.900400  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:09.400212  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:07.785152  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:07.799248  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:07.799327  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:07.836150  358357 cri.go:89] found id: ""
	I1205 21:43:07.836204  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.836215  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:07.836222  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:07.836287  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:07.873025  358357 cri.go:89] found id: ""
	I1205 21:43:07.873059  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.873068  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:07.873074  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:07.873133  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:07.913228  358357 cri.go:89] found id: ""
	I1205 21:43:07.913257  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.913266  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:07.913272  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:07.913332  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:07.953284  358357 cri.go:89] found id: ""
	I1205 21:43:07.953316  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.953327  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:07.953337  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:07.953405  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:07.990261  358357 cri.go:89] found id: ""
	I1205 21:43:07.990295  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.990308  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:07.990317  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:07.990414  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:08.032002  358357 cri.go:89] found id: ""
	I1205 21:43:08.032029  358357 logs.go:282] 0 containers: []
	W1205 21:43:08.032037  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:08.032043  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:08.032095  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:08.066422  358357 cri.go:89] found id: ""
	I1205 21:43:08.066456  358357 logs.go:282] 0 containers: []
	W1205 21:43:08.066464  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:08.066471  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:08.066526  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:08.103696  358357 cri.go:89] found id: ""
	I1205 21:43:08.103732  358357 logs.go:282] 0 containers: []
	W1205 21:43:08.103745  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:08.103757  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:08.103793  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:08.157218  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:08.157264  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:08.172145  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:08.172191  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:08.247452  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:08.247479  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:08.247493  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:08.326928  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:08.326972  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:08.111002  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:10.608479  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:08.985124  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:10.985701  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:11.400591  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:13.898978  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:10.866350  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:10.880013  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:10.880084  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:10.914657  358357 cri.go:89] found id: ""
	I1205 21:43:10.914698  358357 logs.go:282] 0 containers: []
	W1205 21:43:10.914712  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:10.914721  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:10.914780  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:10.950154  358357 cri.go:89] found id: ""
	I1205 21:43:10.950187  358357 logs.go:282] 0 containers: []
	W1205 21:43:10.950196  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:10.950203  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:10.950267  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:10.985474  358357 cri.go:89] found id: ""
	I1205 21:43:10.985508  358357 logs.go:282] 0 containers: []
	W1205 21:43:10.985520  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:10.985528  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:10.985602  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:11.021324  358357 cri.go:89] found id: ""
	I1205 21:43:11.021352  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.021361  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:11.021367  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:11.021429  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:11.056112  358357 cri.go:89] found id: ""
	I1205 21:43:11.056140  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.056149  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:11.056155  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:11.056210  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:11.090696  358357 cri.go:89] found id: ""
	I1205 21:43:11.090729  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.090739  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:11.090746  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:11.090809  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:11.126706  358357 cri.go:89] found id: ""
	I1205 21:43:11.126741  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.126754  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:11.126762  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:11.126832  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:11.162759  358357 cri.go:89] found id: ""
	I1205 21:43:11.162790  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.162800  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:11.162812  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:11.162827  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:11.215941  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:11.215995  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:11.229338  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:11.229378  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:11.300339  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:11.300373  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:11.300389  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:11.378797  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:11.378852  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:13.919092  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:13.935332  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:13.935418  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:13.970759  358357 cri.go:89] found id: ""
	I1205 21:43:13.970790  358357 logs.go:282] 0 containers: []
	W1205 21:43:13.970802  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:13.970810  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:13.970879  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:14.017105  358357 cri.go:89] found id: ""
	I1205 21:43:14.017140  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.017152  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:14.017159  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:14.017228  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:14.056797  358357 cri.go:89] found id: ""
	I1205 21:43:14.056831  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.056843  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:14.056850  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:14.056922  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:14.090687  358357 cri.go:89] found id: ""
	I1205 21:43:14.090727  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.090740  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:14.090747  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:14.090808  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:14.128280  358357 cri.go:89] found id: ""
	I1205 21:43:14.128320  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.128333  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:14.128341  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:14.128410  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:14.167386  358357 cri.go:89] found id: ""
	I1205 21:43:14.167420  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.167428  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:14.167435  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:14.167498  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:14.203376  358357 cri.go:89] found id: ""
	I1205 21:43:14.203408  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.203419  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:14.203427  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:14.203495  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:14.238271  358357 cri.go:89] found id: ""
	I1205 21:43:14.238308  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.238319  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:14.238333  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:14.238353  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:14.290565  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:14.290609  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:14.305062  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:14.305106  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:14.375343  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:14.375375  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:14.375392  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:14.456771  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:14.456826  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:13.107746  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:15.607571  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:13.484545  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:15.485414  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:15.899518  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:17.900034  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:16.997441  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:17.011258  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:17.011344  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:17.045557  358357 cri.go:89] found id: ""
	I1205 21:43:17.045599  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.045613  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:17.045623  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:17.045689  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:17.080094  358357 cri.go:89] found id: ""
	I1205 21:43:17.080131  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.080144  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:17.080152  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:17.080228  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:17.113336  358357 cri.go:89] found id: ""
	I1205 21:43:17.113375  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.113387  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:17.113396  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:17.113461  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:17.147392  358357 cri.go:89] found id: ""
	I1205 21:43:17.147431  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.147443  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:17.147452  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:17.147521  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:17.182308  358357 cri.go:89] found id: ""
	I1205 21:43:17.182359  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.182370  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:17.182376  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:17.182443  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:17.216848  358357 cri.go:89] found id: ""
	I1205 21:43:17.216886  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.216917  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:17.216926  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:17.216999  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:17.251515  358357 cri.go:89] found id: ""
	I1205 21:43:17.251553  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.251565  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:17.251573  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:17.251645  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:17.284664  358357 cri.go:89] found id: ""
	I1205 21:43:17.284691  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.284700  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:17.284711  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:17.284723  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:17.335642  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:17.335685  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:17.349100  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:17.349133  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:17.427338  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:17.427362  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:17.427378  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:17.507314  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:17.507366  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:20.049650  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:20.063058  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:20.063152  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:20.096637  358357 cri.go:89] found id: ""
	I1205 21:43:20.096674  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.096687  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:20.096696  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:20.096761  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:20.134010  358357 cri.go:89] found id: ""
	I1205 21:43:20.134041  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.134054  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:20.134061  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:20.134128  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:20.173232  358357 cri.go:89] found id: ""
	I1205 21:43:20.173272  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.173292  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:20.173301  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:20.173374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:20.208411  358357 cri.go:89] found id: ""
	I1205 21:43:20.208441  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.208451  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:20.208457  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:20.208515  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:20.244682  358357 cri.go:89] found id: ""
	I1205 21:43:20.244715  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.244729  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:20.244737  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:20.244835  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:20.278659  358357 cri.go:89] found id: ""
	I1205 21:43:20.278692  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.278701  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:20.278708  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:20.278773  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:20.313894  358357 cri.go:89] found id: ""
	I1205 21:43:20.313963  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.313978  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:20.313986  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:20.314049  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:20.351924  358357 cri.go:89] found id: ""
	I1205 21:43:20.351957  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.351966  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:20.351976  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:20.351992  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:20.365712  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:20.365752  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:20.448062  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:20.448096  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:20.448115  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:20.530550  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:20.530593  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:17.611740  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:20.107637  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:22.108801  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:17.985246  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:19.985378  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:22.484721  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:20.400560  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:22.400956  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:24.899642  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:20.573612  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:20.573644  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:23.128630  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:23.141915  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:23.141991  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:23.177986  358357 cri.go:89] found id: ""
	I1205 21:43:23.178024  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.178033  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:23.178040  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:23.178104  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:23.211957  358357 cri.go:89] found id: ""
	I1205 21:43:23.211995  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.212005  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:23.212016  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:23.212075  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:23.247747  358357 cri.go:89] found id: ""
	I1205 21:43:23.247775  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.247783  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:23.247789  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:23.247847  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:23.282556  358357 cri.go:89] found id: ""
	I1205 21:43:23.282602  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.282616  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:23.282624  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:23.282689  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:23.317629  358357 cri.go:89] found id: ""
	I1205 21:43:23.317661  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.317670  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:23.317676  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:23.317749  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:23.352085  358357 cri.go:89] found id: ""
	I1205 21:43:23.352114  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.352123  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:23.352130  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:23.352190  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:23.391452  358357 cri.go:89] found id: ""
	I1205 21:43:23.391483  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.391495  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:23.391503  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:23.391587  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:23.427325  358357 cri.go:89] found id: ""
	I1205 21:43:23.427361  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.427370  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:23.427380  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:23.427395  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:23.502923  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:23.502954  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:23.502970  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:23.588869  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:23.588918  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:23.626986  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:23.627029  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:23.677290  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:23.677343  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:24.607867  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.609049  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:24.484755  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.486039  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.899834  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:29.400266  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.191893  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:26.206289  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:26.206376  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:26.244696  358357 cri.go:89] found id: ""
	I1205 21:43:26.244726  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.244739  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:26.244748  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:26.244818  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:26.277481  358357 cri.go:89] found id: ""
	I1205 21:43:26.277509  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.277519  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:26.277526  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:26.277602  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:26.312648  358357 cri.go:89] found id: ""
	I1205 21:43:26.312771  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.312807  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:26.312819  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:26.312897  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:26.348986  358357 cri.go:89] found id: ""
	I1205 21:43:26.349017  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.349026  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:26.349034  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:26.349111  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:26.382552  358357 cri.go:89] found id: ""
	I1205 21:43:26.382582  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.382591  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:26.382597  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:26.382667  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:26.419741  358357 cri.go:89] found id: ""
	I1205 21:43:26.419780  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.419791  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:26.419798  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:26.419860  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:26.458604  358357 cri.go:89] found id: ""
	I1205 21:43:26.458639  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.458649  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:26.458656  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:26.458716  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:26.492547  358357 cri.go:89] found id: ""
	I1205 21:43:26.492575  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.492589  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:26.492600  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:26.492614  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:26.543734  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:26.543784  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:26.557495  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:26.557529  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:26.632104  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:26.632135  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:26.632155  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:26.711876  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:26.711929  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:29.251703  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:29.265023  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:29.265108  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:29.301837  358357 cri.go:89] found id: ""
	I1205 21:43:29.301875  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.301910  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:29.301922  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:29.301994  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:29.335968  358357 cri.go:89] found id: ""
	I1205 21:43:29.336001  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.336015  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:29.336024  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:29.336090  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:29.370471  358357 cri.go:89] found id: ""
	I1205 21:43:29.370500  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.370512  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:29.370521  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:29.370585  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:29.406408  358357 cri.go:89] found id: ""
	I1205 21:43:29.406443  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.406456  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:29.406464  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:29.406537  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:29.442657  358357 cri.go:89] found id: ""
	I1205 21:43:29.442689  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.442700  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:29.442708  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:29.442776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:29.485257  358357 cri.go:89] found id: ""
	I1205 21:43:29.485291  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.485302  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:29.485311  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:29.485374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:29.520186  358357 cri.go:89] found id: ""
	I1205 21:43:29.520218  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.520229  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:29.520238  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:29.520312  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:29.555875  358357 cri.go:89] found id: ""
	I1205 21:43:29.555908  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.555920  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:29.555931  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:29.555949  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:29.569277  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:29.569312  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:29.643777  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:29.643810  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:29.643828  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:29.721856  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:29.721932  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:29.763402  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:29.763437  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:29.108987  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:31.608186  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:28.486609  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:30.985559  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:31.899471  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:34.399084  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:32.316122  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:32.329958  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:32.330122  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:32.362518  358357 cri.go:89] found id: ""
	I1205 21:43:32.362562  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.362575  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:32.362585  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:32.362655  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:32.396558  358357 cri.go:89] found id: ""
	I1205 21:43:32.396650  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.396668  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:32.396683  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:32.396759  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:32.430931  358357 cri.go:89] found id: ""
	I1205 21:43:32.430958  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.430966  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:32.430972  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:32.431025  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:32.468557  358357 cri.go:89] found id: ""
	I1205 21:43:32.468597  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.468607  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:32.468613  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:32.468698  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:32.503548  358357 cri.go:89] found id: ""
	I1205 21:43:32.503586  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.503599  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:32.503608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:32.503680  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:32.538516  358357 cri.go:89] found id: ""
	I1205 21:43:32.538559  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.538573  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:32.538582  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:32.538658  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:32.570768  358357 cri.go:89] found id: ""
	I1205 21:43:32.570804  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.570817  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:32.570886  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:32.570963  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:32.604812  358357 cri.go:89] found id: ""
	I1205 21:43:32.604851  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.604864  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:32.604876  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:32.604899  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:32.667787  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:32.667831  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:32.681437  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:32.681472  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:32.761208  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:32.761235  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:32.761249  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:32.844838  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:32.844882  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:35.386488  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:35.401884  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:35.401987  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:35.437976  358357 cri.go:89] found id: ""
	I1205 21:43:35.438007  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.438017  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:35.438023  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:35.438089  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:35.478157  358357 cri.go:89] found id: ""
	I1205 21:43:35.478202  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.478214  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:35.478222  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:35.478292  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:35.516671  358357 cri.go:89] found id: ""
	I1205 21:43:35.516717  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.516731  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:35.516805  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:35.516897  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:35.551255  358357 cri.go:89] found id: ""
	I1205 21:43:35.551284  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.551295  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:35.551302  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:35.551357  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:34.108153  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:36.108668  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:32.986075  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:35.484135  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:37.485074  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:36.399714  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:38.900550  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:35.588294  358357 cri.go:89] found id: ""
	I1205 21:43:35.588325  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.588334  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:35.588341  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:35.588405  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:35.622659  358357 cri.go:89] found id: ""
	I1205 21:43:35.622691  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.622700  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:35.622707  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:35.622774  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:35.656864  358357 cri.go:89] found id: ""
	I1205 21:43:35.656893  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.656901  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:35.656908  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:35.656961  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:35.697507  358357 cri.go:89] found id: ""
	I1205 21:43:35.697554  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.697567  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:35.697579  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:35.697599  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:35.745717  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:35.745758  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:35.759004  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:35.759036  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:35.828958  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:35.828992  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:35.829010  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:35.905023  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:35.905063  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:38.445492  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:38.459922  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:38.460006  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:38.495791  358357 cri.go:89] found id: ""
	I1205 21:43:38.495829  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.495840  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:38.495849  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:38.495918  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:38.530056  358357 cri.go:89] found id: ""
	I1205 21:43:38.530088  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.530097  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:38.530104  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:38.530177  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:38.566865  358357 cri.go:89] found id: ""
	I1205 21:43:38.566896  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.566905  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:38.566912  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:38.566983  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:38.600870  358357 cri.go:89] found id: ""
	I1205 21:43:38.600905  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.600918  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:38.600926  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:38.600995  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:38.639270  358357 cri.go:89] found id: ""
	I1205 21:43:38.639308  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.639317  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:38.639324  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:38.639395  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:38.678671  358357 cri.go:89] found id: ""
	I1205 21:43:38.678720  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.678736  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:38.678745  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:38.678812  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:38.715126  358357 cri.go:89] found id: ""
	I1205 21:43:38.715160  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.715169  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:38.715176  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:38.715236  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:38.750621  358357 cri.go:89] found id: ""
	I1205 21:43:38.750660  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.750674  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:38.750688  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:38.750706  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:38.801336  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:38.801386  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:38.817206  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:38.817243  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:38.899496  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:38.899526  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:38.899542  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:38.987043  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:38.987096  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:38.608744  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.107606  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:39.486171  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.984199  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.400104  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:43.898622  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.535073  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:41.550469  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:41.550543  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:41.591727  358357 cri.go:89] found id: ""
	I1205 21:43:41.591768  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.591781  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:41.591790  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:41.591861  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:41.628657  358357 cri.go:89] found id: ""
	I1205 21:43:41.628691  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.628703  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:41.628711  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:41.628782  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:41.674165  358357 cri.go:89] found id: ""
	I1205 21:43:41.674210  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.674224  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:41.674238  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:41.674318  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:41.713785  358357 cri.go:89] found id: ""
	I1205 21:43:41.713836  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.713856  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:41.713866  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:41.713959  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:41.752119  358357 cri.go:89] found id: ""
	I1205 21:43:41.752152  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.752162  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:41.752169  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:41.752224  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:41.787379  358357 cri.go:89] found id: ""
	I1205 21:43:41.787414  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.787427  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:41.787439  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:41.787517  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:41.827473  358357 cri.go:89] found id: ""
	I1205 21:43:41.827505  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.827516  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:41.827523  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:41.827580  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:41.864685  358357 cri.go:89] found id: ""
	I1205 21:43:41.864724  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.864737  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:41.864750  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:41.864767  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:41.919751  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:41.919797  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:41.933494  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:41.933527  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:42.007384  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:42.007478  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:42.007516  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:42.085929  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:42.085974  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:44.625416  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:44.640399  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:44.640466  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:44.676232  358357 cri.go:89] found id: ""
	I1205 21:43:44.676279  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.676292  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:44.676302  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:44.676386  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:44.714304  358357 cri.go:89] found id: ""
	I1205 21:43:44.714345  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.714358  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:44.714368  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:44.714438  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:44.748091  358357 cri.go:89] found id: ""
	I1205 21:43:44.748130  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.748141  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:44.748149  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:44.748225  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:44.789620  358357 cri.go:89] found id: ""
	I1205 21:43:44.789712  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.789737  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:44.789746  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:44.789808  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:44.829941  358357 cri.go:89] found id: ""
	I1205 21:43:44.829987  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.829999  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:44.830008  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:44.830080  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:44.876378  358357 cri.go:89] found id: ""
	I1205 21:43:44.876412  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.876424  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:44.876433  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:44.876503  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:44.913556  358357 cri.go:89] found id: ""
	I1205 21:43:44.913590  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.913602  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:44.913610  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:44.913676  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:44.947592  358357 cri.go:89] found id: ""
	I1205 21:43:44.947625  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.947634  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:44.947643  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:44.947658  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:44.960447  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:44.960478  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:45.035679  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:45.035716  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:45.035731  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:45.115015  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:45.115055  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:45.152866  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:45.152901  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:43.108800  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:45.109600  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:44.483302  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:46.484569  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:45.899283  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:47.900475  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
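	(The pod_ready lines interleaved here come from the other test clusters polling their metrics-server pods, which never report Ready. A hedged way to run the same check by hand; the context name is illustrative and the k8s-app=metrics-server selector is assumed from the standard metrics-server manifest, not taken from this log:
	    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server \
	      -o jsonpath='{range .items[*]}{.metadata.name}{" Ready="}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
	)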
	I1205 21:43:47.703949  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:47.717705  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:47.717775  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:47.753877  358357 cri.go:89] found id: ""
	I1205 21:43:47.753920  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.753933  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:47.753946  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:47.754006  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:47.790673  358357 cri.go:89] found id: ""
	I1205 21:43:47.790707  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.790718  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:47.790725  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:47.790784  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:47.829957  358357 cri.go:89] found id: ""
	I1205 21:43:47.829999  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.830013  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:47.830021  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:47.830094  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:47.869182  358357 cri.go:89] found id: ""
	I1205 21:43:47.869221  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.869235  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:47.869251  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:47.869337  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:47.906549  358357 cri.go:89] found id: ""
	I1205 21:43:47.906582  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.906592  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:47.906598  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:47.906674  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:47.944594  358357 cri.go:89] found id: ""
	I1205 21:43:47.944622  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.944631  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:47.944637  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:47.944699  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:47.981461  358357 cri.go:89] found id: ""
	I1205 21:43:47.981499  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.981512  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:47.981520  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:47.981593  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:48.016561  358357 cri.go:89] found id: ""
	I1205 21:43:48.016597  358357 logs.go:282] 0 containers: []
	W1205 21:43:48.016607  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:48.016617  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:48.016631  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:48.097690  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:48.097740  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:48.140272  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:48.140318  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:48.194365  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:48.194415  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:48.208715  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:48.208750  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:48.283159  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:47.607945  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.108918  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:48.984798  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.986257  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.399207  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:52.899857  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:54.899976  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.784026  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:50.812440  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:50.812524  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:50.866971  358357 cri.go:89] found id: ""
	I1205 21:43:50.867009  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.867022  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:50.867030  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:50.867100  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:50.910640  358357 cri.go:89] found id: ""
	I1205 21:43:50.910675  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.910686  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:50.910692  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:50.910767  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:50.944766  358357 cri.go:89] found id: ""
	I1205 21:43:50.944795  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.944803  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:50.944811  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:50.944880  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:50.978126  358357 cri.go:89] found id: ""
	I1205 21:43:50.978167  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.978178  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:50.978185  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:50.978250  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:51.015639  358357 cri.go:89] found id: ""
	I1205 21:43:51.015682  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.015693  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:51.015700  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:51.015776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:51.050114  358357 cri.go:89] found id: ""
	I1205 21:43:51.050156  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.050166  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:51.050180  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:51.050244  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:51.088492  358357 cri.go:89] found id: ""
	I1205 21:43:51.088523  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.088533  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:51.088540  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:51.088599  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:51.125732  358357 cri.go:89] found id: ""
	I1205 21:43:51.125768  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.125778  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:51.125789  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:51.125803  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:51.178278  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:51.178325  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:51.192954  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:51.192990  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:51.263378  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:51.263403  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:51.263416  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:51.341416  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:51.341463  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:53.882599  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:53.895846  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:53.895961  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:53.929422  358357 cri.go:89] found id: ""
	I1205 21:43:53.929465  358357 logs.go:282] 0 containers: []
	W1205 21:43:53.929480  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:53.929490  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:53.929568  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:53.965935  358357 cri.go:89] found id: ""
	I1205 21:43:53.965976  358357 logs.go:282] 0 containers: []
	W1205 21:43:53.965990  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:53.966001  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:53.966075  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:54.011360  358357 cri.go:89] found id: ""
	I1205 21:43:54.011394  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.011406  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:54.011412  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:54.011483  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:54.049333  358357 cri.go:89] found id: ""
	I1205 21:43:54.049368  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.049377  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:54.049385  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:54.049445  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:54.087228  358357 cri.go:89] found id: ""
	I1205 21:43:54.087266  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.087279  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:54.087287  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:54.087348  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:54.122795  358357 cri.go:89] found id: ""
	I1205 21:43:54.122832  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.122845  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:54.122853  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:54.122914  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:54.157622  358357 cri.go:89] found id: ""
	I1205 21:43:54.157657  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.157666  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:54.157672  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:54.157734  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:54.195574  358357 cri.go:89] found id: ""
	I1205 21:43:54.195610  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.195624  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:54.195638  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:54.195659  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:54.235353  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:54.235403  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:54.292275  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:54.292338  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:54.306808  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:54.306842  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:54.380414  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:54.380440  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:54.380455  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:52.608190  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:54.609219  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:57.109413  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:53.484775  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:55.985011  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:57.402445  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:59.900093  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:56.956848  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:56.969840  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:56.969954  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:57.004299  358357 cri.go:89] found id: ""
	I1205 21:43:57.004405  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.004426  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:57.004434  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:57.004510  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:57.039150  358357 cri.go:89] found id: ""
	I1205 21:43:57.039176  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.039185  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:57.039192  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:57.039245  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:57.075259  358357 cri.go:89] found id: ""
	I1205 21:43:57.075299  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.075313  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:57.075331  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:57.075407  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:57.111445  358357 cri.go:89] found id: ""
	I1205 21:43:57.111474  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.111492  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:57.111500  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:57.111580  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:57.152495  358357 cri.go:89] found id: ""
	I1205 21:43:57.152527  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.152536  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:57.152548  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:57.152606  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:57.188070  358357 cri.go:89] found id: ""
	I1205 21:43:57.188106  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.188119  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:57.188126  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:57.188198  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:57.222213  358357 cri.go:89] found id: ""
	I1205 21:43:57.222245  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.222260  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:57.222268  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:57.222354  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:57.254072  358357 cri.go:89] found id: ""
	I1205 21:43:57.254101  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.254110  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:57.254120  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:57.254136  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:57.307411  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:57.307456  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:57.323095  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:57.323130  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:57.400894  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:57.400928  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:57.400951  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:57.479628  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:57.479670  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:00.018936  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:00.032067  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:00.032149  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:00.065807  358357 cri.go:89] found id: ""
	I1205 21:44:00.065835  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.065844  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:00.065851  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:00.065931  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:00.100810  358357 cri.go:89] found id: ""
	I1205 21:44:00.100839  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.100847  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:00.100854  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:00.100920  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:00.136341  358357 cri.go:89] found id: ""
	I1205 21:44:00.136375  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.136388  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:00.136396  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:00.136454  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:00.173170  358357 cri.go:89] found id: ""
	I1205 21:44:00.173206  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.173227  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:00.173235  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:00.173332  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:00.208319  358357 cri.go:89] found id: ""
	I1205 21:44:00.208351  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.208363  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:00.208371  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:00.208438  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:00.250416  358357 cri.go:89] found id: ""
	I1205 21:44:00.250449  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.250463  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:00.250474  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:00.250546  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:00.285170  358357 cri.go:89] found id: ""
	I1205 21:44:00.285200  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.285212  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:00.285221  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:00.285290  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:00.320837  358357 cri.go:89] found id: ""
	I1205 21:44:00.320870  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.320879  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:00.320889  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:00.320901  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:00.334341  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:00.334375  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:00.400547  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:00.400575  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:00.400592  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:00.476133  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:00.476181  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:00.514760  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:00.514795  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:59.606994  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:01.608870  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:58.484178  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:00.484913  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:02.399767  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:04.900007  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:03.067793  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:03.081940  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:03.082023  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:03.118846  358357 cri.go:89] found id: ""
	I1205 21:44:03.118886  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.118897  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:03.118905  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:03.118962  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:03.156092  358357 cri.go:89] found id: ""
	I1205 21:44:03.156128  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.156140  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:03.156148  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:03.156219  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:03.189783  358357 cri.go:89] found id: ""
	I1205 21:44:03.189824  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.189837  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:03.189845  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:03.189913  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:03.225034  358357 cri.go:89] found id: ""
	I1205 21:44:03.225069  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.225081  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:03.225095  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:03.225177  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:03.258959  358357 cri.go:89] found id: ""
	I1205 21:44:03.258991  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.259003  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:03.259011  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:03.259075  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:03.292871  358357 cri.go:89] found id: ""
	I1205 21:44:03.292907  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.292920  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:03.292927  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:03.292983  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:03.327659  358357 cri.go:89] found id: ""
	I1205 21:44:03.327707  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.327730  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:03.327738  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:03.327810  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:03.369576  358357 cri.go:89] found id: ""
	I1205 21:44:03.369614  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.369627  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:03.369641  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:03.369656  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:03.424527  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:03.424580  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:03.438199  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:03.438231  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:03.509107  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
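	(Every component listing in these passes returns an empty id list, i.e. CRI-O reports no containers at all, not just a crashed apiserver. A quick sanity sketch for that situation, assuming the systemd-managed kubelet and CRI-O units shown in the logs:
	    systemctl is-active kubelet crio          # are the services even running?
	    sudo crictl pods                          # pod sandboxes known to CRI-O
	    sudo crictl ps -a | head                  # all containers, any state
	    sudo crictl images | grep kube-apiserver || echo "apiserver image not present"
	)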
	I1205 21:44:03.509139  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:03.509158  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:03.595637  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:03.595717  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:04.108126  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:06.109347  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:02.984401  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:04.987542  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:07.484630  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:06.900439  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:09.400464  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:06.135947  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:06.149530  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:06.149602  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:06.185659  358357 cri.go:89] found id: ""
	I1205 21:44:06.185692  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.185702  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:06.185709  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:06.185775  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:06.223238  358357 cri.go:89] found id: ""
	I1205 21:44:06.223281  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.223291  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:06.223298  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:06.223357  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:06.261842  358357 cri.go:89] found id: ""
	I1205 21:44:06.261884  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.261911  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:06.261920  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:06.261996  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:06.304416  358357 cri.go:89] found id: ""
	I1205 21:44:06.304455  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.304466  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:06.304475  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:06.304554  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:06.339676  358357 cri.go:89] found id: ""
	I1205 21:44:06.339711  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.339723  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:06.339732  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:06.339785  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:06.375594  358357 cri.go:89] found id: ""
	I1205 21:44:06.375630  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.375640  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:06.375647  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:06.375722  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:06.410953  358357 cri.go:89] found id: ""
	I1205 21:44:06.410986  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.410996  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:06.411002  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:06.411069  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:06.445559  358357 cri.go:89] found id: ""
	I1205 21:44:06.445590  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.445603  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:06.445617  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:06.445634  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:06.497474  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:06.497534  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:06.512032  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:06.512065  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:06.582809  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:06.582845  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:06.582862  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:06.663652  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:06.663696  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:09.204305  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:09.217648  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:09.217738  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:09.255398  358357 cri.go:89] found id: ""
	I1205 21:44:09.255441  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.255454  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:09.255463  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:09.255533  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:09.290268  358357 cri.go:89] found id: ""
	I1205 21:44:09.290296  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.290310  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:09.290316  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:09.290384  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:09.324546  358357 cri.go:89] found id: ""
	I1205 21:44:09.324586  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.324599  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:09.324608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:09.324684  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:09.358619  358357 cri.go:89] found id: ""
	I1205 21:44:09.358665  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.358677  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:09.358686  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:09.358757  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:09.395697  358357 cri.go:89] found id: ""
	I1205 21:44:09.395736  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.395749  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:09.395758  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:09.395838  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:09.437064  358357 cri.go:89] found id: ""
	I1205 21:44:09.437099  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.437108  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:09.437115  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:09.437172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:09.472330  358357 cri.go:89] found id: ""
	I1205 21:44:09.472368  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.472380  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:09.472388  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:09.472460  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:09.507468  358357 cri.go:89] found id: ""
	I1205 21:44:09.507510  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.507524  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:09.507538  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:09.507555  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:09.583640  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:09.583683  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:09.625830  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:09.625876  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:09.681668  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:09.681720  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:09.695305  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:09.695346  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:09.770136  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:08.608008  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:10.608715  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:09.485975  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:11.983682  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:11.899933  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:14.399690  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:12.270576  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:12.287283  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:12.287367  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:12.320855  358357 cri.go:89] found id: ""
	I1205 21:44:12.320890  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.320902  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:12.320911  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:12.320981  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:12.354550  358357 cri.go:89] found id: ""
	I1205 21:44:12.354595  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.354608  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:12.354617  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:12.354685  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:12.388487  358357 cri.go:89] found id: ""
	I1205 21:44:12.388519  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.388532  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:12.388542  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:12.388600  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:12.424338  358357 cri.go:89] found id: ""
	I1205 21:44:12.424366  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.424375  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:12.424382  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:12.424448  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:12.465997  358357 cri.go:89] found id: ""
	I1205 21:44:12.466028  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.466038  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:12.466044  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:12.466111  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:12.503567  358357 cri.go:89] found id: ""
	I1205 21:44:12.503602  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.503616  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:12.503625  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:12.503700  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:12.538669  358357 cri.go:89] found id: ""
	I1205 21:44:12.538696  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.538705  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:12.538711  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:12.538763  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:12.576375  358357 cri.go:89] found id: ""
	I1205 21:44:12.576416  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.576429  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:12.576442  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:12.576458  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:12.625471  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:12.625512  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:12.639689  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:12.639729  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:12.710873  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:12.710896  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:12.710936  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:12.789800  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:12.789841  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:15.331451  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:15.344354  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:15.344441  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:15.378596  358357 cri.go:89] found id: ""
	I1205 21:44:15.378631  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.378640  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:15.378647  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:15.378718  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:15.418342  358357 cri.go:89] found id: ""
	I1205 21:44:15.418373  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.418386  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:15.418394  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:15.418461  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:15.454130  358357 cri.go:89] found id: ""
	I1205 21:44:15.454167  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.454179  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:15.454187  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:15.454269  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:15.490777  358357 cri.go:89] found id: ""
	I1205 21:44:15.490813  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.490824  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:15.490831  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:15.490887  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:15.523706  358357 cri.go:89] found id: ""
	I1205 21:44:15.523747  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.523760  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:15.523768  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:15.523839  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:15.559019  358357 cri.go:89] found id: ""
	I1205 21:44:15.559049  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.559058  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:15.559065  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:15.559121  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:13.107960  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:15.607620  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:13.984413  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:15.984615  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:16.401714  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:18.900883  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:15.592611  358357 cri.go:89] found id: ""
	I1205 21:44:15.592640  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.592649  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:15.592655  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:15.592707  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:15.628295  358357 cri.go:89] found id: ""
	I1205 21:44:15.628333  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.628344  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:15.628354  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:15.628366  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:15.711123  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:15.711174  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:15.757486  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:15.757519  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:15.805750  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:15.805797  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:15.820685  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:15.820722  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:15.887073  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:18.388126  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:18.403082  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:18.403165  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:18.436195  358357 cri.go:89] found id: ""
	I1205 21:44:18.436230  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.436243  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:18.436255  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:18.436346  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:18.471756  358357 cri.go:89] found id: ""
	I1205 21:44:18.471788  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.471797  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:18.471804  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:18.471863  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:18.510693  358357 cri.go:89] found id: ""
	I1205 21:44:18.510741  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.510754  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:18.510763  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:18.510831  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:18.551976  358357 cri.go:89] found id: ""
	I1205 21:44:18.552014  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.552027  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:18.552036  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:18.552105  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:18.587679  358357 cri.go:89] found id: ""
	I1205 21:44:18.587716  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.587729  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:18.587738  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:18.587810  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:18.631487  358357 cri.go:89] found id: ""
	I1205 21:44:18.631519  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.631529  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:18.631547  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:18.631620  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:18.663618  358357 cri.go:89] found id: ""
	I1205 21:44:18.663646  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.663656  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:18.663665  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:18.663725  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:18.697864  358357 cri.go:89] found id: ""
	I1205 21:44:18.697894  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.697929  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:18.697943  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:18.697960  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:18.710777  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:18.710808  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:18.784195  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:18.784222  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:18.784241  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:18.863023  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:18.863071  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:18.903228  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:18.903267  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:18.106883  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:20.107752  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:22.110346  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:18.484897  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:20.983954  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:21.399201  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:23.400564  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:21.454547  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:21.468048  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:21.468131  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:21.501472  358357 cri.go:89] found id: ""
	I1205 21:44:21.501503  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.501512  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:21.501518  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:21.501576  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:21.536522  358357 cri.go:89] found id: ""
	I1205 21:44:21.536564  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.536579  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:21.536589  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:21.536653  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:21.570924  358357 cri.go:89] found id: ""
	I1205 21:44:21.570955  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.570965  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:21.570971  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:21.571039  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:21.607649  358357 cri.go:89] found id: ""
	I1205 21:44:21.607678  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.607688  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:21.607697  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:21.607766  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:21.647025  358357 cri.go:89] found id: ""
	I1205 21:44:21.647052  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.647061  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:21.647067  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:21.647118  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:21.684418  358357 cri.go:89] found id: ""
	I1205 21:44:21.684460  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.684472  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:21.684481  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:21.684554  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:21.722093  358357 cri.go:89] found id: ""
	I1205 21:44:21.722129  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.722141  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:21.722149  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:21.722208  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:21.755757  358357 cri.go:89] found id: ""
	I1205 21:44:21.755794  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.755807  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:21.755821  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:21.755839  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:21.809049  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:21.809110  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:21.823336  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:21.823371  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:21.894389  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:21.894412  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:21.894428  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:21.980288  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:21.980336  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:24.522528  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:24.535496  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:24.535587  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:24.570301  358357 cri.go:89] found id: ""
	I1205 21:44:24.570354  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.570369  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:24.570379  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:24.570452  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:24.606310  358357 cri.go:89] found id: ""
	I1205 21:44:24.606340  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.606351  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:24.606358  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:24.606427  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:24.644078  358357 cri.go:89] found id: ""
	I1205 21:44:24.644183  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.644198  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:24.644208  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:24.644293  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:24.679685  358357 cri.go:89] found id: ""
	I1205 21:44:24.679719  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.679729  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:24.679736  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:24.679817  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:24.717070  358357 cri.go:89] found id: ""
	I1205 21:44:24.717180  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.717216  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:24.717236  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:24.717309  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:24.757345  358357 cri.go:89] found id: ""
	I1205 21:44:24.757380  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.757393  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:24.757401  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:24.757480  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:24.790795  358357 cri.go:89] found id: ""
	I1205 21:44:24.790823  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.790835  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:24.790850  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:24.790911  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:24.827238  358357 cri.go:89] found id: ""
	I1205 21:44:24.827276  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.827290  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:24.827302  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:24.827318  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:24.876812  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:24.876861  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:24.916558  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:24.916604  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:24.990733  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:24.990764  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:24.990785  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:25.065792  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:25.065852  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:24.608796  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:27.107897  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:22.984109  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:24.984259  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:26.985689  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:25.899361  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:27.900251  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:29.900465  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:27.608859  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:27.622449  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:27.622516  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:27.655675  358357 cri.go:89] found id: ""
	I1205 21:44:27.655704  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.655713  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:27.655718  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:27.655785  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:27.689751  358357 cri.go:89] found id: ""
	I1205 21:44:27.689781  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.689789  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:27.689795  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:27.689870  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:27.726811  358357 cri.go:89] found id: ""
	I1205 21:44:27.726842  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.726856  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:27.726865  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:27.726930  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:27.759600  358357 cri.go:89] found id: ""
	I1205 21:44:27.759631  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.759653  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:27.759660  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:27.759716  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:27.791700  358357 cri.go:89] found id: ""
	I1205 21:44:27.791738  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.791751  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:27.791763  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:27.791828  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:27.827998  358357 cri.go:89] found id: ""
	I1205 21:44:27.828031  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.828039  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:27.828045  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:27.828102  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:27.861452  358357 cri.go:89] found id: ""
	I1205 21:44:27.861481  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.861490  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:27.861496  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:27.861560  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:27.896469  358357 cri.go:89] found id: ""
	I1205 21:44:27.896519  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.896532  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:27.896545  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:27.896560  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:27.935274  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:27.935312  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:27.986078  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:27.986116  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:28.000432  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:28.000463  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:28.074500  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:28.074530  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:28.074549  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:29.107971  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:31.108444  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:29.483791  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:31.484249  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:32.399397  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:34.400078  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:30.660117  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:30.672827  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:30.672907  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:30.711952  358357 cri.go:89] found id: ""
	I1205 21:44:30.711983  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.711993  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:30.711999  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:30.712051  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:30.747513  358357 cri.go:89] found id: ""
	I1205 21:44:30.747548  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.747558  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:30.747567  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:30.747627  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:30.782830  358357 cri.go:89] found id: ""
	I1205 21:44:30.782867  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.782878  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:30.782887  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:30.782980  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:30.820054  358357 cri.go:89] found id: ""
	I1205 21:44:30.820098  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.820111  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:30.820123  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:30.820198  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:30.857325  358357 cri.go:89] found id: ""
	I1205 21:44:30.857362  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.857373  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:30.857382  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:30.857453  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:30.893105  358357 cri.go:89] found id: ""
	I1205 21:44:30.893227  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.893267  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:30.893281  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:30.893356  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:30.932764  358357 cri.go:89] found id: ""
	I1205 21:44:30.932802  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.932815  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:30.932823  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:30.932885  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:30.968962  358357 cri.go:89] found id: ""
	I1205 21:44:30.968999  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.969011  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:30.969023  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:30.969037  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:31.022152  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:31.022198  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:31.035418  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:31.035453  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:31.100989  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:31.101017  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:31.101030  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:31.182034  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:31.182079  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:33.725770  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:33.740956  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:33.741040  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:33.779158  358357 cri.go:89] found id: ""
	I1205 21:44:33.779198  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.779210  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:33.779218  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:33.779280  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:33.814600  358357 cri.go:89] found id: ""
	I1205 21:44:33.814628  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.814641  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:33.814649  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:33.814710  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:33.850220  358357 cri.go:89] found id: ""
	I1205 21:44:33.850255  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.850267  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:33.850276  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:33.850334  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:33.883737  358357 cri.go:89] found id: ""
	I1205 21:44:33.883765  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.883774  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:33.883781  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:33.883837  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:33.915007  358357 cri.go:89] found id: ""
	I1205 21:44:33.915046  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.915059  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:33.915068  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:33.915140  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:33.949038  358357 cri.go:89] found id: ""
	I1205 21:44:33.949077  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.949093  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:33.949102  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:33.949172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:33.982396  358357 cri.go:89] found id: ""
	I1205 21:44:33.982425  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.982437  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:33.982444  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:33.982521  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:34.020834  358357 cri.go:89] found id: ""
	I1205 21:44:34.020870  358357 logs.go:282] 0 containers: []
	W1205 21:44:34.020882  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:34.020894  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:34.020911  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:34.103184  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:34.103238  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:34.147047  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:34.147091  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:34.196893  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:34.196942  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:34.211694  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:34.211730  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:34.282543  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:33.607930  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:36.108359  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:33.484472  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:35.484512  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:36.400821  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:38.899618  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:36.783278  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:36.798192  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:36.798266  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:36.832685  358357 cri.go:89] found id: ""
	I1205 21:44:36.832723  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.832736  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:36.832743  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:36.832814  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:36.868040  358357 cri.go:89] found id: ""
	I1205 21:44:36.868074  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.868085  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:36.868092  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:36.868156  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:36.901145  358357 cri.go:89] found id: ""
	I1205 21:44:36.901177  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.901186  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:36.901192  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:36.901248  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:36.935061  358357 cri.go:89] found id: ""
	I1205 21:44:36.935097  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.935107  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:36.935114  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:36.935183  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:36.984729  358357 cri.go:89] found id: ""
	I1205 21:44:36.984761  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.984773  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:36.984782  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:36.984854  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:37.024644  358357 cri.go:89] found id: ""
	I1205 21:44:37.024684  358357 logs.go:282] 0 containers: []
	W1205 21:44:37.024696  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:37.024706  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:37.024781  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:37.074238  358357 cri.go:89] found id: ""
	I1205 21:44:37.074275  358357 logs.go:282] 0 containers: []
	W1205 21:44:37.074287  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:37.074295  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:37.074356  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:37.142410  358357 cri.go:89] found id: ""
	I1205 21:44:37.142444  358357 logs.go:282] 0 containers: []
	W1205 21:44:37.142457  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:37.142469  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:37.142488  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:37.192977  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:37.193018  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:37.206357  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:37.206393  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:37.272336  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:37.272372  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:37.272390  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:37.350655  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:37.350718  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:39.897421  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:39.911734  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:39.911806  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:39.950380  358357 cri.go:89] found id: ""
	I1205 21:44:39.950418  358357 logs.go:282] 0 containers: []
	W1205 21:44:39.950432  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:39.950441  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:39.950511  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:39.987259  358357 cri.go:89] found id: ""
	I1205 21:44:39.987292  358357 logs.go:282] 0 containers: []
	W1205 21:44:39.987302  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:39.987308  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:39.987363  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:40.021052  358357 cri.go:89] found id: ""
	I1205 21:44:40.021081  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.021090  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:40.021096  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:40.021167  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:40.057837  358357 cri.go:89] found id: ""
	I1205 21:44:40.057878  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.057919  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:40.057930  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:40.058004  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:40.094797  358357 cri.go:89] found id: ""
	I1205 21:44:40.094837  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.094853  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:40.094863  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:40.094932  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:40.130356  358357 cri.go:89] found id: ""
	I1205 21:44:40.130389  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.130398  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:40.130412  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:40.130467  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:40.164352  358357 cri.go:89] found id: ""
	I1205 21:44:40.164379  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.164389  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:40.164394  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:40.164452  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:40.197337  358357 cri.go:89] found id: ""
	I1205 21:44:40.197379  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.197397  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:40.197408  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:40.197422  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:40.210014  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:40.210051  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:40.280666  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:40.280691  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:40.280706  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:40.356849  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:40.356896  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:40.395202  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:40.395237  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:38.108650  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:40.607598  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:37.983908  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:39.986080  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:42.484571  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:40.900460  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:43.400889  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:42.950686  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:42.964078  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:42.964156  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:42.999252  358357 cri.go:89] found id: ""
	I1205 21:44:42.999286  358357 logs.go:282] 0 containers: []
	W1205 21:44:42.999299  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:42.999307  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:42.999374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:43.035393  358357 cri.go:89] found id: ""
	I1205 21:44:43.035430  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.035444  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:43.035451  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:43.035505  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:43.070649  358357 cri.go:89] found id: ""
	I1205 21:44:43.070681  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.070693  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:43.070703  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:43.070776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:43.103054  358357 cri.go:89] found id: ""
	I1205 21:44:43.103089  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.103101  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:43.103110  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:43.103175  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:43.138607  358357 cri.go:89] found id: ""
	I1205 21:44:43.138640  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.138653  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:43.138661  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:43.138733  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:43.172188  358357 cri.go:89] found id: ""
	I1205 21:44:43.172220  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.172234  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:43.172241  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:43.172313  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:43.204838  358357 cri.go:89] found id: ""
	I1205 21:44:43.204872  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.204882  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:43.204891  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:43.204960  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:43.239985  358357 cri.go:89] found id: ""
	I1205 21:44:43.240011  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.240020  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:43.240031  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:43.240052  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:43.291033  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:43.291088  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:43.305100  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:43.305152  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:43.378988  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:43.379020  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:43.379054  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:43.466548  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:43.466602  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:42.607901  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:44.608143  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:47.108131  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:44.984806  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:47.484110  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:45.899359  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:47.901854  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:46.007785  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:46.021496  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:46.021592  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:46.059259  358357 cri.go:89] found id: ""
	I1205 21:44:46.059296  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.059313  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:46.059321  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:46.059378  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:46.095304  358357 cri.go:89] found id: ""
	I1205 21:44:46.095336  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.095345  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:46.095351  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:46.095417  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:46.136792  358357 cri.go:89] found id: ""
	I1205 21:44:46.136822  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.136831  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:46.136837  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:46.136891  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:46.169696  358357 cri.go:89] found id: ""
	I1205 21:44:46.169726  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.169735  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:46.169742  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:46.169810  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:46.205481  358357 cri.go:89] found id: ""
	I1205 21:44:46.205513  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.205524  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:46.205531  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:46.205586  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:46.241112  358357 cri.go:89] found id: ""
	I1205 21:44:46.241157  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.241166  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:46.241173  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:46.241233  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:46.277129  358357 cri.go:89] found id: ""
	I1205 21:44:46.277159  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.277168  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:46.277174  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:46.277236  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:46.311196  358357 cri.go:89] found id: ""
	I1205 21:44:46.311238  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.311250  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:46.311275  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:46.311302  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:46.362581  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:46.362621  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:46.375887  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:46.375924  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:46.444563  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:46.444588  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:46.444605  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:46.525811  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:46.525857  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:49.065883  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:49.079482  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:49.079586  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:49.113676  358357 cri.go:89] found id: ""
	I1205 21:44:49.113706  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.113716  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:49.113722  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:49.113792  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:49.147653  358357 cri.go:89] found id: ""
	I1205 21:44:49.147686  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.147696  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:49.147702  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:49.147766  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:49.180934  358357 cri.go:89] found id: ""
	I1205 21:44:49.180981  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.180996  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:49.181004  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:49.181064  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:49.214837  358357 cri.go:89] found id: ""
	I1205 21:44:49.214874  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.214883  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:49.214891  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:49.214960  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:49.249332  358357 cri.go:89] found id: ""
	I1205 21:44:49.249369  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.249380  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:49.249387  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:49.249451  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:49.284072  358357 cri.go:89] found id: ""
	I1205 21:44:49.284101  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.284109  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:49.284116  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:49.284169  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:49.323559  358357 cri.go:89] found id: ""
	I1205 21:44:49.323597  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.323607  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:49.323614  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:49.323675  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:49.361219  358357 cri.go:89] found id: ""
	I1205 21:44:49.361253  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.361263  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:49.361275  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:49.361291  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:49.413099  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:49.413141  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:49.426610  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:49.426648  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:49.498740  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:49.498765  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:49.498794  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:49.578451  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:49.578495  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:49.608461  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:52.108005  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:49.484743  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:51.984842  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:50.401244  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:52.899546  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:54.899788  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:52.117874  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:52.131510  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:52.131601  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:52.169491  358357 cri.go:89] found id: ""
	I1205 21:44:52.169522  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.169535  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:52.169542  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:52.169617  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:52.202511  358357 cri.go:89] found id: ""
	I1205 21:44:52.202540  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.202556  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:52.202562  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:52.202630  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:52.239649  358357 cri.go:89] found id: ""
	I1205 21:44:52.239687  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.239699  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:52.239707  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:52.239771  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:52.274330  358357 cri.go:89] found id: ""
	I1205 21:44:52.274368  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.274380  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:52.274388  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:52.274452  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:52.310165  358357 cri.go:89] found id: ""
	I1205 21:44:52.310195  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.310207  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:52.310214  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:52.310284  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:52.344246  358357 cri.go:89] found id: ""
	I1205 21:44:52.344278  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.344293  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:52.344302  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:52.344375  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:52.379475  358357 cri.go:89] found id: ""
	I1205 21:44:52.379508  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.379521  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:52.379529  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:52.379606  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:52.419952  358357 cri.go:89] found id: ""
	I1205 21:44:52.419981  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.419990  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:52.420002  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:52.420014  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:52.471608  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:52.471659  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:52.486003  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:52.486036  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:52.560751  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:52.560786  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:52.560804  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:52.641284  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:52.641340  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:55.183102  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:55.197406  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:55.197502  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:55.231335  358357 cri.go:89] found id: ""
	I1205 21:44:55.231365  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.231373  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:55.231381  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:55.231440  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:55.267877  358357 cri.go:89] found id: ""
	I1205 21:44:55.267907  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.267916  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:55.267923  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:55.267978  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:55.302400  358357 cri.go:89] found id: ""
	I1205 21:44:55.302428  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.302437  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:55.302443  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:55.302496  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:55.337878  358357 cri.go:89] found id: ""
	I1205 21:44:55.337932  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.337946  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:55.337954  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:55.338008  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:55.371877  358357 cri.go:89] found id: ""
	I1205 21:44:55.371920  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.371931  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:55.371941  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:55.372020  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:55.406914  358357 cri.go:89] found id: ""
	I1205 21:44:55.406947  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.406961  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:55.406970  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:55.407043  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:55.439910  358357 cri.go:89] found id: ""
	I1205 21:44:55.439940  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.439949  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:55.439955  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:55.440011  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:55.476886  358357 cri.go:89] found id: ""
	I1205 21:44:55.476916  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.476925  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:55.476936  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:55.476949  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:55.531376  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:55.531422  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:55.545011  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:55.545050  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:44:54.108283  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:56.609653  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:53.985156  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:56.484908  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:57.400823  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:59.904973  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	W1205 21:44:55.620082  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:55.620122  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:55.620139  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:55.708465  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:55.708512  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:58.256289  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:58.269484  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:58.269560  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:58.303846  358357 cri.go:89] found id: ""
	I1205 21:44:58.303884  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.303897  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:58.303906  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:58.303978  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:58.343160  358357 cri.go:89] found id: ""
	I1205 21:44:58.343190  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.343199  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:58.343205  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:58.343269  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:58.379207  358357 cri.go:89] found id: ""
	I1205 21:44:58.379240  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.379252  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:58.379261  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:58.379323  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:58.415939  358357 cri.go:89] found id: ""
	I1205 21:44:58.415971  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.415981  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:58.415988  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:58.416046  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:58.450799  358357 cri.go:89] found id: ""
	I1205 21:44:58.450837  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.450848  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:58.450857  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:58.450927  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:58.487557  358357 cri.go:89] found id: ""
	I1205 21:44:58.487594  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.487602  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:58.487608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:58.487659  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:58.523932  358357 cri.go:89] found id: ""
	I1205 21:44:58.523960  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.523969  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:58.523976  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:58.524041  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:58.559140  358357 cri.go:89] found id: ""
	I1205 21:44:58.559169  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.559179  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:58.559193  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:58.559209  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:58.643471  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:58.643520  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:58.683077  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:58.683118  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:58.736396  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:58.736441  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:58.751080  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:58.751115  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:58.824208  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:59.108134  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:01.608008  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:58.984778  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:01.486140  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:02.400031  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:04.400426  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:01.324977  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:01.338088  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:01.338169  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:01.375859  358357 cri.go:89] found id: ""
	I1205 21:45:01.375913  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.375927  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:01.375936  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:01.376012  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:01.411327  358357 cri.go:89] found id: ""
	I1205 21:45:01.411367  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.411377  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:01.411384  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:01.411441  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:01.446560  358357 cri.go:89] found id: ""
	I1205 21:45:01.446599  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.446612  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:01.446620  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:01.446687  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:01.480650  358357 cri.go:89] found id: ""
	I1205 21:45:01.480688  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.480702  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:01.480711  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:01.480788  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:01.515546  358357 cri.go:89] found id: ""
	I1205 21:45:01.515596  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.515609  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:01.515615  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:01.515680  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:01.550395  358357 cri.go:89] found id: ""
	I1205 21:45:01.550435  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.550449  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:01.550457  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:01.550619  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:01.588327  358357 cri.go:89] found id: ""
	I1205 21:45:01.588362  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.588375  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:01.588385  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:01.588456  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:01.622881  358357 cri.go:89] found id: ""
	I1205 21:45:01.622922  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.622934  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:01.622948  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:01.622965  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:01.673702  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:01.673752  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:01.689462  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:01.689504  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:01.758509  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:01.758536  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:01.758550  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:01.839238  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:01.839294  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:04.380325  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:04.393102  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:04.393192  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:04.428295  358357 cri.go:89] found id: ""
	I1205 21:45:04.428327  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.428339  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:04.428348  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:04.428455  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:04.463190  358357 cri.go:89] found id: ""
	I1205 21:45:04.463226  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.463238  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:04.463246  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:04.463316  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:04.496966  358357 cri.go:89] found id: ""
	I1205 21:45:04.497010  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.497022  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:04.497030  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:04.497097  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:04.531907  358357 cri.go:89] found id: ""
	I1205 21:45:04.531938  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.531950  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:04.531958  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:04.532031  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:04.565760  358357 cri.go:89] found id: ""
	I1205 21:45:04.565793  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.565806  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:04.565815  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:04.565885  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:04.599720  358357 cri.go:89] found id: ""
	I1205 21:45:04.599756  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.599768  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:04.599774  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:04.599829  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:04.635208  358357 cri.go:89] found id: ""
	I1205 21:45:04.635241  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.635250  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:04.635257  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:04.635320  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:04.670121  358357 cri.go:89] found id: ""
	I1205 21:45:04.670153  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.670162  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:04.670171  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:04.670183  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:04.708596  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:04.708641  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:04.765866  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:04.765919  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:04.780740  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:04.780772  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:04.856357  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:04.856386  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:04.856406  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:03.608315  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:06.107838  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:03.983888  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:05.990166  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:06.900029  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:08.900926  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:07.437028  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:07.450097  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:07.450168  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:07.485877  358357 cri.go:89] found id: ""
	I1205 21:45:07.485921  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.485934  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:07.485943  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:07.486007  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:07.520629  358357 cri.go:89] found id: ""
	I1205 21:45:07.520658  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.520666  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:07.520673  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:07.520732  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:07.555445  358357 cri.go:89] found id: ""
	I1205 21:45:07.555476  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.555487  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:07.555493  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:07.555560  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:07.594479  358357 cri.go:89] found id: ""
	I1205 21:45:07.594513  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.594526  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:07.594533  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:07.594594  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:07.629467  358357 cri.go:89] found id: ""
	I1205 21:45:07.629498  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.629509  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:07.629516  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:07.629572  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:07.666166  358357 cri.go:89] found id: ""
	I1205 21:45:07.666204  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.666218  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:07.666227  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:07.666303  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:07.700440  358357 cri.go:89] found id: ""
	I1205 21:45:07.700472  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.700481  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:07.700490  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:07.700557  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:07.735094  358357 cri.go:89] found id: ""
	I1205 21:45:07.735130  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.735152  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:07.735166  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:07.735184  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:07.788339  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:07.788386  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:07.802847  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:07.802879  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:07.873731  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:07.873755  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:07.873771  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:07.953369  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:07.953411  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:10.492613  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:10.506259  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:10.506374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:10.540075  358357 cri.go:89] found id: ""
	I1205 21:45:10.540111  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.540120  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:10.540127  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:10.540216  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:08.108464  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:10.611075  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:08.483571  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:10.485086  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:11.399948  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:13.400364  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:10.577943  358357 cri.go:89] found id: ""
	I1205 21:45:10.577978  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.577991  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:10.577998  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:10.578073  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:10.614217  358357 cri.go:89] found id: ""
	I1205 21:45:10.614255  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.614268  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:10.614276  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:10.614346  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:10.649669  358357 cri.go:89] found id: ""
	I1205 21:45:10.649739  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.649751  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:10.649760  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:10.649830  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:10.687171  358357 cri.go:89] found id: ""
	I1205 21:45:10.687202  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.687211  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:10.687217  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:10.687307  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:10.722815  358357 cri.go:89] found id: ""
	I1205 21:45:10.722848  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.722858  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:10.722865  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:10.722934  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:10.759711  358357 cri.go:89] found id: ""
	I1205 21:45:10.759753  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.759767  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:10.759777  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:10.759849  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:10.797955  358357 cri.go:89] found id: ""
	I1205 21:45:10.797991  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.798004  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:10.798017  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:10.798034  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:10.851920  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:10.851971  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:10.867691  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:10.867728  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:10.953866  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:10.953891  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:10.953928  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:11.033945  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:11.033990  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:13.574051  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:13.587371  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:13.587454  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:13.623492  358357 cri.go:89] found id: ""
	I1205 21:45:13.623524  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.623540  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:13.623546  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:13.623603  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:13.659547  358357 cri.go:89] found id: ""
	I1205 21:45:13.659588  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.659602  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:13.659610  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:13.659671  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:13.694113  358357 cri.go:89] found id: ""
	I1205 21:45:13.694153  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.694166  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:13.694173  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:13.694233  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:13.729551  358357 cri.go:89] found id: ""
	I1205 21:45:13.729591  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.729604  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:13.729613  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:13.729684  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:13.763006  358357 cri.go:89] found id: ""
	I1205 21:45:13.763049  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.763062  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:13.763071  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:13.763134  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:13.802231  358357 cri.go:89] found id: ""
	I1205 21:45:13.802277  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.802292  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:13.802302  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:13.802384  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:13.840193  358357 cri.go:89] found id: ""
	I1205 21:45:13.840225  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.840240  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:13.840249  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:13.840335  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:13.872625  358357 cri.go:89] found id: ""
	I1205 21:45:13.872653  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.872663  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:13.872673  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:13.872687  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:13.922983  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:13.923028  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:13.936484  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:13.936517  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:14.008295  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:14.008319  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:14.008334  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:14.095036  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:14.095091  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:13.110174  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:15.608405  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:12.986058  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:15.483570  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:17.484738  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:15.899141  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:17.899862  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:19.900993  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:16.637164  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:16.653070  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:16.653153  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:16.687386  358357 cri.go:89] found id: ""
	I1205 21:45:16.687441  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.687456  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:16.687466  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:16.687545  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:16.722204  358357 cri.go:89] found id: ""
	I1205 21:45:16.722235  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.722244  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:16.722250  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:16.722323  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:16.757594  358357 cri.go:89] found id: ""
	I1205 21:45:16.757622  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.757631  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:16.757637  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:16.757691  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:16.790401  358357 cri.go:89] found id: ""
	I1205 21:45:16.790433  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.790442  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:16.790449  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:16.790502  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:16.827569  358357 cri.go:89] found id: ""
	I1205 21:45:16.827602  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.827615  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:16.827624  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:16.827701  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:16.860920  358357 cri.go:89] found id: ""
	I1205 21:45:16.860949  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.860965  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:16.860974  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:16.861038  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:16.895008  358357 cri.go:89] found id: ""
	I1205 21:45:16.895051  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.895063  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:16.895072  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:16.895151  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:16.931916  358357 cri.go:89] found id: ""
	I1205 21:45:16.931951  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.931963  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:16.931975  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:16.931987  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:17.016108  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:17.016156  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:17.055353  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:17.055390  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:17.105859  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:17.105921  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:17.121357  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:17.121394  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:17.192584  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:19.693409  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:19.706431  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:19.706498  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:19.741212  358357 cri.go:89] found id: ""
	I1205 21:45:19.741249  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.741258  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:19.741268  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:19.741335  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:19.775906  358357 cri.go:89] found id: ""
	I1205 21:45:19.775945  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.775954  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:19.775960  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:19.776031  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:19.810789  358357 cri.go:89] found id: ""
	I1205 21:45:19.810822  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.810831  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:19.810839  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:19.810897  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:19.847669  358357 cri.go:89] found id: ""
	I1205 21:45:19.847701  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.847710  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:19.847717  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:19.847776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:19.881700  358357 cri.go:89] found id: ""
	I1205 21:45:19.881739  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.881752  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:19.881761  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:19.881838  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:19.919085  358357 cri.go:89] found id: ""
	I1205 21:45:19.919125  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.919140  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:19.919148  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:19.919226  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:19.955024  358357 cri.go:89] found id: ""
	I1205 21:45:19.955064  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.955078  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:19.955086  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:19.955153  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:19.991482  358357 cri.go:89] found id: ""
	I1205 21:45:19.991511  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.991519  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:19.991530  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:19.991543  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:20.041980  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:20.042030  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:20.055580  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:20.055612  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:20.127194  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:20.127225  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:20.127242  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:20.207750  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:20.207797  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:18.108143  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:20.108435  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:22.109088  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:19.985203  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:21.986674  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:22.399189  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:24.400311  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:22.749233  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:22.763720  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:22.763796  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:22.798779  358357 cri.go:89] found id: ""
	I1205 21:45:22.798810  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.798820  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:22.798826  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:22.798906  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:22.837894  358357 cri.go:89] found id: ""
	I1205 21:45:22.837949  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.837964  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:22.837972  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:22.838026  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:22.872671  358357 cri.go:89] found id: ""
	I1205 21:45:22.872701  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.872713  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:22.872720  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:22.872785  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:22.906877  358357 cri.go:89] found id: ""
	I1205 21:45:22.906919  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.906929  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:22.906936  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:22.906988  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:22.941445  358357 cri.go:89] found id: ""
	I1205 21:45:22.941475  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.941486  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:22.941494  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:22.941565  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:22.976633  358357 cri.go:89] found id: ""
	I1205 21:45:22.976671  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.976685  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:22.976694  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:22.976773  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:23.017034  358357 cri.go:89] found id: ""
	I1205 21:45:23.017077  358357 logs.go:282] 0 containers: []
	W1205 21:45:23.017090  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:23.017096  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:23.017153  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:23.065098  358357 cri.go:89] found id: ""
	I1205 21:45:23.065136  358357 logs.go:282] 0 containers: []
	W1205 21:45:23.065149  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:23.065164  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:23.065180  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:23.145053  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:23.145104  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:23.159522  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:23.159557  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:23.228841  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:23.228865  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:23.228885  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:23.313351  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:23.313397  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:24.110151  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:26.607420  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:23.992037  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:26.484076  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:26.400904  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:28.899210  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:25.852034  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:25.865843  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:25.865944  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:25.899186  358357 cri.go:89] found id: ""
	I1205 21:45:25.899212  358357 logs.go:282] 0 containers: []
	W1205 21:45:25.899222  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:25.899231  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:25.899298  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:25.938242  358357 cri.go:89] found id: ""
	I1205 21:45:25.938274  358357 logs.go:282] 0 containers: []
	W1205 21:45:25.938286  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:25.938299  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:25.938371  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:25.972322  358357 cri.go:89] found id: ""
	I1205 21:45:25.972355  358357 logs.go:282] 0 containers: []
	W1205 21:45:25.972368  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:25.972376  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:25.972446  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:26.010638  358357 cri.go:89] found id: ""
	I1205 21:45:26.010667  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.010678  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:26.010686  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:26.010754  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:26.045415  358357 cri.go:89] found id: ""
	I1205 21:45:26.045450  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.045459  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:26.045466  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:26.045548  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:26.084635  358357 cri.go:89] found id: ""
	I1205 21:45:26.084673  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.084687  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:26.084696  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:26.084767  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:26.117417  358357 cri.go:89] found id: ""
	I1205 21:45:26.117455  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.117467  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:26.117475  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:26.117539  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:26.151857  358357 cri.go:89] found id: ""
	I1205 21:45:26.151893  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.151905  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:26.151918  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:26.151936  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:26.238876  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:26.238926  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:26.280970  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:26.281006  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:26.336027  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:26.336083  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:26.350619  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:26.350654  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:26.418836  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:28.919046  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:28.933916  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:28.934002  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:28.971698  358357 cri.go:89] found id: ""
	I1205 21:45:28.971728  358357 logs.go:282] 0 containers: []
	W1205 21:45:28.971737  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:28.971744  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:28.971807  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:29.007385  358357 cri.go:89] found id: ""
	I1205 21:45:29.007423  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.007435  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:29.007443  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:29.007509  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:29.041087  358357 cri.go:89] found id: ""
	I1205 21:45:29.041130  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.041143  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:29.041151  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:29.041222  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:29.076926  358357 cri.go:89] found id: ""
	I1205 21:45:29.076965  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.076977  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:29.076986  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:29.077064  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:29.116376  358357 cri.go:89] found id: ""
	I1205 21:45:29.116419  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.116433  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:29.116443  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:29.116523  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:29.152495  358357 cri.go:89] found id: ""
	I1205 21:45:29.152530  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.152543  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:29.152552  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:29.152639  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:29.187647  358357 cri.go:89] found id: ""
	I1205 21:45:29.187681  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.187695  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:29.187704  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:29.187775  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:29.220410  358357 cri.go:89] found id: ""
	I1205 21:45:29.220452  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.220469  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:29.220484  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:29.220513  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:29.287156  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:29.287184  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:29.287200  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:29.365592  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:29.365644  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:29.407876  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:29.407917  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:29.462241  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:29.462294  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:28.607611  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:30.608683  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:28.484925  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:30.485979  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:30.899449  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:32.900189  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:34.900501  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:31.976691  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:31.991087  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:31.991172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:32.025743  358357 cri.go:89] found id: ""
	I1205 21:45:32.025781  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.025793  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:32.025801  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:32.025870  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:32.061790  358357 cri.go:89] found id: ""
	I1205 21:45:32.061828  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.061838  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:32.061844  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:32.061929  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:32.095437  358357 cri.go:89] found id: ""
	I1205 21:45:32.095474  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.095486  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:32.095493  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:32.095553  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:32.132203  358357 cri.go:89] found id: ""
	I1205 21:45:32.132242  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.132255  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:32.132264  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:32.132325  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:32.168529  358357 cri.go:89] found id: ""
	I1205 21:45:32.168566  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.168582  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:32.168590  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:32.168661  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:32.204816  358357 cri.go:89] found id: ""
	I1205 21:45:32.204851  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.204860  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:32.204885  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:32.204949  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:32.241661  358357 cri.go:89] found id: ""
	I1205 21:45:32.241696  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.241706  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:32.241712  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:32.241768  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:32.275458  358357 cri.go:89] found id: ""
	I1205 21:45:32.275491  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.275500  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:32.275511  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:32.275524  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:32.329044  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:32.329098  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:32.343399  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:32.343432  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:32.420102  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:32.420135  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:32.420152  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:32.503061  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:32.503109  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:35.042457  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:35.056486  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:35.056564  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:35.091571  358357 cri.go:89] found id: ""
	I1205 21:45:35.091603  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.091613  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:35.091619  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:35.091686  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:35.130172  358357 cri.go:89] found id: ""
	I1205 21:45:35.130213  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.130225  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:35.130233  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:35.130303  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:35.165723  358357 cri.go:89] found id: ""
	I1205 21:45:35.165754  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.165763  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:35.165770  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:35.165836  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:35.203599  358357 cri.go:89] found id: ""
	I1205 21:45:35.203632  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.203646  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:35.203658  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:35.203721  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:35.237881  358357 cri.go:89] found id: ""
	I1205 21:45:35.237926  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.237938  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:35.237946  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:35.238015  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:35.276506  358357 cri.go:89] found id: ""
	I1205 21:45:35.276543  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.276555  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:35.276563  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:35.276632  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:35.309600  358357 cri.go:89] found id: ""
	I1205 21:45:35.309632  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.309644  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:35.309652  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:35.309723  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:35.343062  358357 cri.go:89] found id: ""
	I1205 21:45:35.343097  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.343110  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:35.343124  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:35.343146  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:35.398686  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:35.398724  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:35.412910  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:35.412945  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:35.479542  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:35.479570  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:35.479587  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:35.556709  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:35.556754  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:33.107324  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:35.108931  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:32.988514  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:35.485301  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:37.399616  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:39.400552  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:38.095347  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:38.110086  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:38.110161  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:38.149114  358357 cri.go:89] found id: ""
	I1205 21:45:38.149149  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.149162  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:38.149172  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:38.149250  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:38.184110  358357 cri.go:89] found id: ""
	I1205 21:45:38.184141  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.184151  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:38.184157  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:38.184213  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:38.219569  358357 cri.go:89] found id: ""
	I1205 21:45:38.219608  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.219620  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:38.219628  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:38.219703  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:38.253096  358357 cri.go:89] found id: ""
	I1205 21:45:38.253133  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.253158  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:38.253167  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:38.253259  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:38.291558  358357 cri.go:89] found id: ""
	I1205 21:45:38.291591  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.291601  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:38.291608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:38.291689  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:38.328236  358357 cri.go:89] found id: ""
	I1205 21:45:38.328269  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.328281  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:38.328288  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:38.328353  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:38.363263  358357 cri.go:89] found id: ""
	I1205 21:45:38.363295  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.363305  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:38.363311  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:38.363371  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:38.396544  358357 cri.go:89] found id: ""
	I1205 21:45:38.396577  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.396587  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:38.396598  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:38.396611  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:38.438187  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:38.438226  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:38.492047  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:38.492086  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:38.505080  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:38.505123  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:38.574293  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:38.574320  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:38.574343  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:37.608407  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:39.609266  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:42.107313  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:37.984499  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:40.484539  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:41.898538  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:43.900097  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:41.155780  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:41.170875  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:41.170959  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:41.206755  358357 cri.go:89] found id: ""
	I1205 21:45:41.206793  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.206807  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:41.206824  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:41.206882  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:41.251021  358357 cri.go:89] found id: ""
	I1205 21:45:41.251060  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.251074  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:41.251082  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:41.251144  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:41.286805  358357 cri.go:89] found id: ""
	I1205 21:45:41.286836  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.286845  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:41.286852  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:41.286910  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:41.319489  358357 cri.go:89] found id: ""
	I1205 21:45:41.319526  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.319540  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:41.319549  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:41.319620  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:41.352769  358357 cri.go:89] found id: ""
	I1205 21:45:41.352807  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.352817  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:41.352823  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:41.352883  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:41.386830  358357 cri.go:89] found id: ""
	I1205 21:45:41.386869  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.386881  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:41.386889  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:41.386961  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:41.424824  358357 cri.go:89] found id: ""
	I1205 21:45:41.424866  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.424882  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:41.424892  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:41.424957  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:41.460273  358357 cri.go:89] found id: ""
	I1205 21:45:41.460307  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.460316  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:41.460327  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:41.460341  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:41.539890  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:41.539951  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:41.579521  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:41.579570  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:41.630867  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:41.630917  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:41.644854  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:41.644892  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:41.719202  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:44.219965  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:44.234714  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:44.234824  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:44.269879  358357 cri.go:89] found id: ""
	I1205 21:45:44.269931  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.269945  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:44.269954  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:44.270023  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:44.302994  358357 cri.go:89] found id: ""
	I1205 21:45:44.303034  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.303047  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:44.303056  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:44.303126  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:44.337575  358357 cri.go:89] found id: ""
	I1205 21:45:44.337604  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.337613  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:44.337620  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:44.337674  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:44.374554  358357 cri.go:89] found id: ""
	I1205 21:45:44.374591  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.374600  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:44.374605  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:44.374671  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:44.409965  358357 cri.go:89] found id: ""
	I1205 21:45:44.410001  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.410013  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:44.410021  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:44.410090  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:44.446583  358357 cri.go:89] found id: ""
	I1205 21:45:44.446620  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.446633  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:44.446641  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:44.446705  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:44.481187  358357 cri.go:89] found id: ""
	I1205 21:45:44.481223  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.481239  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:44.481248  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:44.481315  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:44.515729  358357 cri.go:89] found id: ""
	I1205 21:45:44.515761  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.515770  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:44.515781  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:44.515799  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:44.567266  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:44.567314  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:44.581186  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:44.581219  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:44.655377  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:44.655404  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:44.655420  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:44.741789  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:44.741835  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:44.108015  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:46.109878  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:42.987144  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:45.484635  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:45.900943  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:48.399795  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:47.283721  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:47.296771  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:47.296839  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:47.330892  358357 cri.go:89] found id: ""
	I1205 21:45:47.330927  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.330941  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:47.330949  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:47.331015  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:47.362771  358357 cri.go:89] found id: ""
	I1205 21:45:47.362805  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.362818  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:47.362826  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:47.362898  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:47.397052  358357 cri.go:89] found id: ""
	I1205 21:45:47.397082  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.397092  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:47.397100  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:47.397172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:47.430155  358357 cri.go:89] found id: ""
	I1205 21:45:47.430184  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.430193  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:47.430199  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:47.430255  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:47.465183  358357 cri.go:89] found id: ""
	I1205 21:45:47.465230  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.465244  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:47.465252  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:47.465327  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:47.505432  358357 cri.go:89] found id: ""
	I1205 21:45:47.505467  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.505479  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:47.505487  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:47.505583  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:47.538813  358357 cri.go:89] found id: ""
	I1205 21:45:47.538841  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.538851  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:47.538859  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:47.538913  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:47.577554  358357 cri.go:89] found id: ""
	I1205 21:45:47.577589  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.577598  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:47.577610  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:47.577623  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:47.633652  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:47.633700  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:47.648242  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:47.648291  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:47.723335  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:47.723369  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:47.723387  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:47.806404  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:47.806454  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:50.348134  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:50.361273  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:50.361367  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:50.393942  358357 cri.go:89] found id: ""
	I1205 21:45:50.393972  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.393980  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:50.393986  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:50.394054  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:50.430835  358357 cri.go:89] found id: ""
	I1205 21:45:50.430873  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.430884  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:50.430892  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:50.430963  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:50.465245  358357 cri.go:89] found id: ""
	I1205 21:45:50.465303  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.465316  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:50.465326  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:50.465397  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:50.498370  358357 cri.go:89] found id: ""
	I1205 21:45:50.498396  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.498406  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:50.498414  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:50.498480  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:50.530194  358357 cri.go:89] found id: ""
	I1205 21:45:50.530233  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.530247  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:50.530262  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:50.530383  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:48.607163  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:50.608353  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:47.984724  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:50.483783  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:52.484838  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:50.400860  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:52.898957  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:54.399893  357912 pod_ready.go:82] duration metric: took 4m0.00693537s for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	E1205 21:45:54.399922  357912 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 21:45:54.399931  357912 pod_ready.go:39] duration metric: took 4m6.388856223s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:45:54.399958  357912 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:45:54.399994  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:54.400045  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:54.436650  357912 cri.go:89] found id: "079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:54.436679  357912 cri.go:89] found id: ""
	I1205 21:45:54.436690  357912 logs.go:282] 1 containers: [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828]
	I1205 21:45:54.436751  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.440795  357912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:54.440866  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:54.475714  357912 cri.go:89] found id: "035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:54.475739  357912 cri.go:89] found id: ""
	I1205 21:45:54.475749  357912 logs.go:282] 1 containers: [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7]
	I1205 21:45:54.475879  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.480165  357912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:54.480255  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:54.516427  357912 cri.go:89] found id: "d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:54.516459  357912 cri.go:89] found id: ""
	I1205 21:45:54.516468  357912 logs.go:282] 1 containers: [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6]
	I1205 21:45:54.516529  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.520486  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:54.520548  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:54.555687  357912 cri.go:89] found id: "c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:54.555719  357912 cri.go:89] found id: ""
	I1205 21:45:54.555727  357912 logs.go:282] 1 containers: [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5]
	I1205 21:45:54.555789  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.559827  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:54.559916  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:54.596640  357912 cri.go:89] found id: "963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:54.596665  357912 cri.go:89] found id: ""
	I1205 21:45:54.596675  357912 logs.go:282] 1 containers: [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d]
	I1205 21:45:54.596753  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.601144  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:54.601229  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:54.639374  357912 cri.go:89] found id: "807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:54.639408  357912 cri.go:89] found id: ""
	I1205 21:45:54.639419  357912 logs.go:282] 1 containers: [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a]
	I1205 21:45:54.639495  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.643665  357912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:54.643754  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:54.678252  357912 cri.go:89] found id: ""
	I1205 21:45:54.678286  357912 logs.go:282] 0 containers: []
	W1205 21:45:54.678297  357912 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:54.678306  357912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 21:45:54.678373  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 21:45:54.711874  357912 cri.go:89] found id: "7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:54.711908  357912 cri.go:89] found id: "37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:54.711915  357912 cri.go:89] found id: ""
	I1205 21:45:54.711925  357912 logs.go:282] 2 containers: [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa]
	I1205 21:45:54.711994  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.716164  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.720244  357912 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:54.720274  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:55.258307  357912 logs.go:123] Gathering logs for container status ...
	I1205 21:45:55.258372  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:55.300132  357912 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:55.300198  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:55.315703  357912 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:55.315745  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:45:50.567181  358357 cri.go:89] found id: ""
	I1205 21:45:50.567216  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.567229  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:50.567237  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:50.567329  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:50.600345  358357 cri.go:89] found id: ""
	I1205 21:45:50.600376  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.600385  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:50.600392  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:50.600446  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:50.635072  358357 cri.go:89] found id: ""
	I1205 21:45:50.635108  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.635121  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:50.635133  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:50.635146  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:50.702977  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:50.703001  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:50.703020  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:50.785033  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:50.785077  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:50.825173  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:50.825214  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:50.876664  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:50.876723  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:53.391161  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:53.405635  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:53.405713  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:53.440319  358357 cri.go:89] found id: ""
	I1205 21:45:53.440358  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.440371  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:53.440380  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:53.440446  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:53.480169  358357 cri.go:89] found id: ""
	I1205 21:45:53.480195  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.480204  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:53.480210  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:53.480355  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:53.515202  358357 cri.go:89] found id: ""
	I1205 21:45:53.515233  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.515315  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:53.515332  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:53.515401  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:53.552351  358357 cri.go:89] found id: ""
	I1205 21:45:53.552388  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.552402  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:53.552411  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:53.552481  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:53.590669  358357 cri.go:89] found id: ""
	I1205 21:45:53.590705  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.590717  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:53.590726  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:53.590791  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:53.627977  358357 cri.go:89] found id: ""
	I1205 21:45:53.628015  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.628029  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:53.628037  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:53.628112  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:53.662711  358357 cri.go:89] found id: ""
	I1205 21:45:53.662745  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.662761  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:53.662769  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:53.662839  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:53.696925  358357 cri.go:89] found id: ""
	I1205 21:45:53.696965  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.696976  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:53.696988  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:53.697012  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:53.750924  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:53.750970  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:53.763965  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:53.763997  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:53.832335  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:53.832361  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:53.832377  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:53.915961  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:53.916011  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:53.107436  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:55.107826  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:57.108330  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:56.456367  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:56.469503  358357 kubeadm.go:597] duration metric: took 4m2.564660353s to restartPrimaryControlPlane
	W1205 21:45:56.469630  358357 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 21:45:56.469672  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:45:56.934079  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:45:56.948092  358357 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:45:56.958166  358357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:45:56.967591  358357 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:45:56.967613  358357 kubeadm.go:157] found existing configuration files:
	
	I1205 21:45:56.967660  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:45:56.977085  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:45:56.977152  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:45:56.987395  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:45:56.996675  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:45:56.996764  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:45:57.010323  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:45:57.020441  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:45:57.020514  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:45:57.032114  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:45:57.042012  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:45:57.042095  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:45:57.051763  358357 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:45:57.126716  358357 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 21:45:57.126840  358357 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:45:57.265491  358357 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:45:57.265694  358357 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:45:57.265856  358357 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 21:45:57.450377  358357 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:45:54.486224  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:56.984442  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:57.452240  358357 out.go:235]   - Generating certificates and keys ...
	I1205 21:45:57.452361  358357 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:45:57.452458  358357 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:45:57.452625  358357 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:45:57.452712  358357 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:45:57.452824  358357 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:45:57.452913  358357 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:45:57.453084  358357 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:45:57.453179  358357 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:45:57.453276  358357 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:45:57.453343  358357 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:45:57.453377  358357 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:45:57.453430  358357 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:45:57.872211  358357 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:45:58.085006  358357 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:45:58.165194  358357 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:45:58.323597  358357 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:45:58.338715  358357 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:45:58.340504  358357 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:45:58.340604  358357 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:45:58.479241  358357 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:45:55.429307  357912 logs.go:123] Gathering logs for etcd [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7] ...
	I1205 21:45:55.429346  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:55.476044  357912 logs.go:123] Gathering logs for kube-proxy [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d] ...
	I1205 21:45:55.476085  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:55.512956  357912 logs.go:123] Gathering logs for kube-controller-manager [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a] ...
	I1205 21:45:55.513004  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:55.570534  357912 logs.go:123] Gathering logs for storage-provisioner [37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa] ...
	I1205 21:45:55.570583  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:55.608099  357912 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:55.608141  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:55.677021  357912 logs.go:123] Gathering logs for kube-apiserver [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828] ...
	I1205 21:45:55.677069  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:55.727298  357912 logs.go:123] Gathering logs for coredns [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6] ...
	I1205 21:45:55.727347  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:55.764637  357912 logs.go:123] Gathering logs for kube-scheduler [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5] ...
	I1205 21:45:55.764675  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:55.803471  357912 logs.go:123] Gathering logs for storage-provisioner [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4] ...
	I1205 21:45:55.803513  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:58.347406  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:58.362574  357912 api_server.go:72] duration metric: took 4m18.075855986s to wait for apiserver process to appear ...
	I1205 21:45:58.362609  357912 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:45:58.362658  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:58.362724  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:58.407526  357912 cri.go:89] found id: "079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:58.407559  357912 cri.go:89] found id: ""
	I1205 21:45:58.407571  357912 logs.go:282] 1 containers: [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828]
	I1205 21:45:58.407642  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.412133  357912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:58.412221  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:58.454243  357912 cri.go:89] found id: "035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:58.454280  357912 cri.go:89] found id: ""
	I1205 21:45:58.454292  357912 logs.go:282] 1 containers: [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7]
	I1205 21:45:58.454381  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.458950  357912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:58.459038  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:58.502502  357912 cri.go:89] found id: "d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:58.502527  357912 cri.go:89] found id: ""
	I1205 21:45:58.502535  357912 logs.go:282] 1 containers: [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6]
	I1205 21:45:58.502595  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.506926  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:58.507012  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:58.548550  357912 cri.go:89] found id: "c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:58.548587  357912 cri.go:89] found id: ""
	I1205 21:45:58.548600  357912 logs.go:282] 1 containers: [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5]
	I1205 21:45:58.548670  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.553797  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:58.553886  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:58.595353  357912 cri.go:89] found id: "963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:58.595389  357912 cri.go:89] found id: ""
	I1205 21:45:58.595401  357912 logs.go:282] 1 containers: [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d]
	I1205 21:45:58.595471  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.599759  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:58.599856  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:58.645942  357912 cri.go:89] found id: "807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:58.645979  357912 cri.go:89] found id: ""
	I1205 21:45:58.645991  357912 logs.go:282] 1 containers: [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a]
	I1205 21:45:58.646059  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.650416  357912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:58.650502  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:58.688459  357912 cri.go:89] found id: ""
	I1205 21:45:58.688491  357912 logs.go:282] 0 containers: []
	W1205 21:45:58.688504  357912 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:58.688520  357912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 21:45:58.688593  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 21:45:58.723421  357912 cri.go:89] found id: "7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:58.723454  357912 cri.go:89] found id: "37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:58.723461  357912 cri.go:89] found id: ""
	I1205 21:45:58.723471  357912 logs.go:282] 2 containers: [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa]
	I1205 21:45:58.723539  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.728441  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.732583  357912 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:58.732610  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:45:58.843724  357912 logs.go:123] Gathering logs for kube-apiserver [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828] ...
	I1205 21:45:58.843765  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:58.887836  357912 logs.go:123] Gathering logs for etcd [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7] ...
	I1205 21:45:58.887879  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:58.932909  357912 logs.go:123] Gathering logs for storage-provisioner [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4] ...
	I1205 21:45:58.932951  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:58.967559  357912 logs.go:123] Gathering logs for container status ...
	I1205 21:45:58.967613  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:59.006895  357912 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:59.006939  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:59.446512  357912 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:59.446573  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:59.518754  357912 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:59.518807  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:59.533621  357912 logs.go:123] Gathering logs for coredns [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6] ...
	I1205 21:45:59.533656  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:59.569589  357912 logs.go:123] Gathering logs for kube-scheduler [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5] ...
	I1205 21:45:59.569630  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:59.606973  357912 logs.go:123] Gathering logs for kube-proxy [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d] ...
	I1205 21:45:59.607028  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:59.651826  357912 logs.go:123] Gathering logs for kube-controller-manager [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a] ...
	I1205 21:45:59.651862  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:59.712309  357912 logs.go:123] Gathering logs for storage-provisioner [37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa] ...
	I1205 21:45:59.712353  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:58.480831  358357 out.go:235]   - Booting up control plane ...
	I1205 21:45:58.480991  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:45:58.495549  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:45:58.497073  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:45:58.498469  358357 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:45:58.501265  358357 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 21:45:59.112080  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:01.608016  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:58.985164  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:01.485724  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:02.247604  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:46:02.253579  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 200:
	ok
	I1205 21:46:02.254645  357912 api_server.go:141] control plane version: v1.31.2
	I1205 21:46:02.254674  357912 api_server.go:131] duration metric: took 3.892057076s to wait for apiserver health ...
	I1205 21:46:02.254685  357912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:46:02.254718  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:46:02.254784  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:46:02.292102  357912 cri.go:89] found id: "079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:46:02.292133  357912 cri.go:89] found id: ""
	I1205 21:46:02.292143  357912 logs.go:282] 1 containers: [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828]
	I1205 21:46:02.292210  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.297421  357912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:46:02.297522  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:46:02.333140  357912 cri.go:89] found id: "035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:46:02.333172  357912 cri.go:89] found id: ""
	I1205 21:46:02.333184  357912 logs.go:282] 1 containers: [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7]
	I1205 21:46:02.333258  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.337789  357912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:46:02.337870  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:46:02.374302  357912 cri.go:89] found id: "d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:46:02.374332  357912 cri.go:89] found id: ""
	I1205 21:46:02.374344  357912 logs.go:282] 1 containers: [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6]
	I1205 21:46:02.374411  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.378635  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:46:02.378704  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:46:02.415899  357912 cri.go:89] found id: "c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:46:02.415932  357912 cri.go:89] found id: ""
	I1205 21:46:02.415944  357912 logs.go:282] 1 containers: [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5]
	I1205 21:46:02.416010  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.421097  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:46:02.421180  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:46:02.457483  357912 cri.go:89] found id: "963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:46:02.457514  357912 cri.go:89] found id: ""
	I1205 21:46:02.457534  357912 logs.go:282] 1 containers: [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d]
	I1205 21:46:02.457606  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.462215  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:46:02.462307  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:46:02.499576  357912 cri.go:89] found id: "807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:46:02.499603  357912 cri.go:89] found id: ""
	I1205 21:46:02.499612  357912 logs.go:282] 1 containers: [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a]
	I1205 21:46:02.499681  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.504262  357912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:46:02.504341  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:46:02.539612  357912 cri.go:89] found id: ""
	I1205 21:46:02.539649  357912 logs.go:282] 0 containers: []
	W1205 21:46:02.539661  357912 logs.go:284] No container was found matching "kindnet"
	I1205 21:46:02.539668  357912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 21:46:02.539740  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 21:46:02.576436  357912 cri.go:89] found id: "7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:46:02.576464  357912 cri.go:89] found id: "37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:46:02.576468  357912 cri.go:89] found id: ""
	I1205 21:46:02.576477  357912 logs.go:282] 2 containers: [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa]
	I1205 21:46:02.576546  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.580650  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.584677  357912 logs.go:123] Gathering logs for kube-controller-manager [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a] ...
	I1205 21:46:02.584717  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:46:02.638712  357912 logs.go:123] Gathering logs for storage-provisioner [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4] ...
	I1205 21:46:02.638753  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:46:02.677464  357912 logs.go:123] Gathering logs for storage-provisioner [37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa] ...
	I1205 21:46:02.677501  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:46:02.718014  357912 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:46:02.718049  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:46:02.828314  357912 logs.go:123] Gathering logs for kube-apiserver [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828] ...
	I1205 21:46:02.828360  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:46:02.881584  357912 logs.go:123] Gathering logs for etcd [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7] ...
	I1205 21:46:02.881629  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:46:02.928082  357912 logs.go:123] Gathering logs for coredns [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6] ...
	I1205 21:46:02.928120  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:46:02.963962  357912 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:46:02.963997  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:46:03.347451  357912 logs.go:123] Gathering logs for container status ...
	I1205 21:46:03.347501  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:46:03.389942  357912 logs.go:123] Gathering logs for kubelet ...
	I1205 21:46:03.389991  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:46:03.459121  357912 logs.go:123] Gathering logs for dmesg ...
	I1205 21:46:03.459168  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:46:03.480556  357912 logs.go:123] Gathering logs for kube-scheduler [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5] ...
	I1205 21:46:03.480592  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:46:03.519661  357912 logs.go:123] Gathering logs for kube-proxy [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d] ...
	I1205 21:46:03.519699  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:46:06.063263  357912 system_pods.go:59] 8 kube-system pods found
	I1205 21:46:06.063309  357912 system_pods.go:61] "coredns-7c65d6cfc9-mll8z" [fcea0826-1093-43ce-87d0-26fb19447609] Running
	I1205 21:46:06.063317  357912 system_pods.go:61] "etcd-default-k8s-diff-port-751353" [dbade41d-2f53-45d5-aeb6-9b7df2565ef6] Running
	I1205 21:46:06.063327  357912 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751353" [b94221d1-ab02-4406-9f34-156813ddfd4b] Running
	I1205 21:46:06.063334  357912 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751353" [70669ec2-cddf-4ec9-a3d6-ee7cab8cd75c] Running
	I1205 21:46:06.063338  357912 system_pods.go:61] "kube-proxy-b4ws4" [d2620959-e3e4-4575-af26-243207a83495] Running
	I1205 21:46:06.063344  357912 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751353" [4a519b64-0ddd-425c-a8d6-de52e3508f80] Running
	I1205 21:46:06.063352  357912 system_pods.go:61] "metrics-server-6867b74b74-xb867" [6ac4cc31-ed56-44b9-9a83-76296436bc34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:46:06.063358  357912 system_pods.go:61] "storage-provisioner" [aabf9cc9-c416-4db2-97b0-23533dd76c28] Running
	I1205 21:46:06.063369  357912 system_pods.go:74] duration metric: took 3.808675994s to wait for pod list to return data ...
	I1205 21:46:06.063380  357912 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:46:06.066095  357912 default_sa.go:45] found service account: "default"
	I1205 21:46:06.066120  357912 default_sa.go:55] duration metric: took 2.733262ms for default service account to be created ...
	I1205 21:46:06.066128  357912 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:46:06.070476  357912 system_pods.go:86] 8 kube-system pods found
	I1205 21:46:06.070503  357912 system_pods.go:89] "coredns-7c65d6cfc9-mll8z" [fcea0826-1093-43ce-87d0-26fb19447609] Running
	I1205 21:46:06.070509  357912 system_pods.go:89] "etcd-default-k8s-diff-port-751353" [dbade41d-2f53-45d5-aeb6-9b7df2565ef6] Running
	I1205 21:46:06.070513  357912 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-751353" [b94221d1-ab02-4406-9f34-156813ddfd4b] Running
	I1205 21:46:06.070516  357912 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-751353" [70669ec2-cddf-4ec9-a3d6-ee7cab8cd75c] Running
	I1205 21:46:06.070520  357912 system_pods.go:89] "kube-proxy-b4ws4" [d2620959-e3e4-4575-af26-243207a83495] Running
	I1205 21:46:06.070523  357912 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-751353" [4a519b64-0ddd-425c-a8d6-de52e3508f80] Running
	I1205 21:46:06.070531  357912 system_pods.go:89] "metrics-server-6867b74b74-xb867" [6ac4cc31-ed56-44b9-9a83-76296436bc34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:46:06.070536  357912 system_pods.go:89] "storage-provisioner" [aabf9cc9-c416-4db2-97b0-23533dd76c28] Running
	I1205 21:46:06.070544  357912 system_pods.go:126] duration metric: took 4.410448ms to wait for k8s-apps to be running ...
	I1205 21:46:06.070553  357912 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:46:06.070614  357912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:46:06.085740  357912 system_svc.go:56] duration metric: took 15.17952ms WaitForService to wait for kubelet
	I1205 21:46:06.085771  357912 kubeadm.go:582] duration metric: took 4m25.799061755s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:46:06.085796  357912 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:46:06.088851  357912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:46:06.088873  357912 node_conditions.go:123] node cpu capacity is 2
	I1205 21:46:06.088887  357912 node_conditions.go:105] duration metric: took 3.087287ms to run NodePressure ...
	I1205 21:46:06.088900  357912 start.go:241] waiting for startup goroutines ...
	I1205 21:46:06.088906  357912 start.go:246] waiting for cluster config update ...
	I1205 21:46:06.088919  357912 start.go:255] writing updated cluster config ...
	I1205 21:46:06.089253  357912 ssh_runner.go:195] Run: rm -f paused
	I1205 21:46:06.141619  357912 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:46:06.143538  357912 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-751353" cluster and "default" namespace by default
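
	The run above ends with kubectl pointed at the freshly started cluster. A couple of read-only checks of the following kind confirm what the log already reported (node Ready, kube-system pods up); the context name is taken from the "Done!" line, the rest is a generic sketch:

	    kubectl --context default-k8s-diff-port-751353 get nodes -o wide
	    kubectl --context default-k8s-diff-port-751353 -n kube-system get pods
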
	I1205 21:46:04.108628  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:06.108805  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:03.987070  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:06.484360  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:08.608534  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:11.107516  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:08.485291  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:10.984391  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:13.108040  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:15.607861  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:13.484442  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:15.484501  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:17.478619  357831 pod_ready.go:82] duration metric: took 4m0.00079651s for pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace to be "Ready" ...
	E1205 21:46:17.478648  357831 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 21:46:17.478669  357831 pod_ready.go:39] duration metric: took 4m12.054745084s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:46:17.478700  357831 kubeadm.go:597] duration metric: took 4m55.174067413s to restartPrimaryControlPlane
	W1205 21:46:17.478757  357831 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 21:46:17.478794  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:46:17.608486  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:20.107816  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:22.108413  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:24.608157  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:27.109329  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:29.608127  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:30.101360  357296 pod_ready.go:82] duration metric: took 4m0.000121506s for pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace to be "Ready" ...
	E1205 21:46:30.101395  357296 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 21:46:30.101417  357296 pod_ready.go:39] duration metric: took 4m9.523665884s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:46:30.101449  357296 kubeadm.go:597] duration metric: took 4m18.570527556s to restartPrimaryControlPlane
	W1205 21:46:30.101510  357296 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 21:46:30.101539  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:46:38.501720  358357 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 21:46:38.502250  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:46:38.502440  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
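
	The [kubelet-check] lines above are kubeadm probing the kubelet's local healthz endpoint and getting connection refused. The same probe, plus the usual follow-ups, can be run by hand roughly as below (a sketch; the systemd unit is assumed to be named kubelet):

	    curl -sSL http://localhost:10248/healthz
	    sudo systemctl status kubelet --no-pager
	    sudo journalctl -u kubelet -n 100 --no-pager
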
	I1205 21:46:43.619373  357831 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.140547336s)
	I1205 21:46:43.619459  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:46:43.641806  357831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:46:43.655964  357831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:46:43.669647  357831 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:46:43.669670  357831 kubeadm.go:157] found existing configuration files:
	
	I1205 21:46:43.669718  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:46:43.681685  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:46:43.681774  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:46:43.700247  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:46:43.718376  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:46:43.718464  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:46:43.736153  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:46:43.746027  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:46:43.746101  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:46:43.756294  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:46:43.765644  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:46:43.765723  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
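
	The config check above boils down to a loop of the following shape: for each kubeconfig, keep it only if it references the expected control-plane endpoint, otherwise remove it so kubeadm regenerates it. This is a sketch of the logged steps, not minikube's actual code:

	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	        sudo rm -f "/etc/kubernetes/$f"
	      fi
	    done
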
	I1205 21:46:43.776011  357831 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:46:43.821666  357831 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 21:46:43.821773  357831 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:46:43.915091  357831 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:46:43.915226  357831 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:46:43.915356  357831 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 21:46:43.923305  357831 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:46:43.924984  357831 out.go:235]   - Generating certificates and keys ...
	I1205 21:46:43.925071  357831 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:46:43.925133  357831 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:46:43.925211  357831 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:46:43.925298  357831 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:46:43.925410  357831 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:46:43.925490  357831 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:46:43.925585  357831 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:46:43.925687  357831 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:46:43.925806  357831 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:46:43.925915  357831 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:46:43.925978  357831 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:46:43.926051  357831 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:46:44.035421  357831 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:46:44.451260  357831 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 21:46:44.816773  357831 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:46:44.923048  357831 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:46:45.045983  357831 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:46:45.046651  357831 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:46:45.049375  357831 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:46:43.502826  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:46:43.503045  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:46:45.051123  357831 out.go:235]   - Booting up control plane ...
	I1205 21:46:45.051270  357831 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:46:45.051407  357831 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:46:45.051498  357831 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:46:45.069011  357831 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:46:45.075630  357831 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:46:45.075703  357831 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:46:45.207048  357831 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 21:46:45.207215  357831 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 21:46:46.208858  357831 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001818315s
	I1205 21:46:46.208985  357831 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 21:46:50.711424  357831 kubeadm.go:310] [api-check] The API server is healthy after 4.502481614s
	I1205 21:46:50.725080  357831 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 21:46:50.745839  357831 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 21:46:50.774902  357831 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 21:46:50.775169  357831 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-500648 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 21:46:50.795250  357831 kubeadm.go:310] [bootstrap-token] Using token: o2vi7b.yhkmrcpvplzqpha9
	I1205 21:46:50.796742  357831 out.go:235]   - Configuring RBAC rules ...
	I1205 21:46:50.796960  357831 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 21:46:50.804445  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 21:46:50.818218  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 21:46:50.823638  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 21:46:50.827946  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 21:46:50.832291  357831 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 21:46:51.119777  357831 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 21:46:51.563750  357831 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 21:46:52.124884  357831 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 21:46:52.124922  357831 kubeadm.go:310] 
	I1205 21:46:52.125000  357831 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 21:46:52.125010  357831 kubeadm.go:310] 
	I1205 21:46:52.125089  357831 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 21:46:52.125099  357831 kubeadm.go:310] 
	I1205 21:46:52.125132  357831 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 21:46:52.125208  357831 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 21:46:52.125321  357831 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 21:46:52.125343  357831 kubeadm.go:310] 
	I1205 21:46:52.125447  357831 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 21:46:52.125475  357831 kubeadm.go:310] 
	I1205 21:46:52.125547  357831 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 21:46:52.125559  357831 kubeadm.go:310] 
	I1205 21:46:52.125641  357831 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 21:46:52.125734  357831 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 21:46:52.125806  357831 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 21:46:52.125814  357831 kubeadm.go:310] 
	I1205 21:46:52.125887  357831 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 21:46:52.126025  357831 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 21:46:52.126039  357831 kubeadm.go:310] 
	I1205 21:46:52.126132  357831 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token o2vi7b.yhkmrcpvplzqpha9 \
	I1205 21:46:52.126230  357831 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 \
	I1205 21:46:52.126254  357831 kubeadm.go:310] 	--control-plane 
	I1205 21:46:52.126269  357831 kubeadm.go:310] 
	I1205 21:46:52.126406  357831 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 21:46:52.126437  357831 kubeadm.go:310] 
	I1205 21:46:52.126524  357831 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token o2vi7b.yhkmrcpvplzqpha9 \
	I1205 21:46:52.126615  357831 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 
	I1205 21:46:52.127299  357831 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
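
	The sha256:866fe1ad... value in the join commands above is the hash of the cluster CA public key. It can be recomputed on the node with the standard openssl pipeline; the certificate path below assumes kubeadm's certificateDir reported earlier in this run ("/var/lib/minikube/certs"):

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
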
	I1205 21:46:52.127360  357831 cni.go:84] Creating CNI manager for ""
	I1205 21:46:52.127380  357831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:46:52.130084  357831 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:46:52.131504  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:46:52.142489  357831 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
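
	The 496-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration. As a rough illustration only, a minimal bridge-plus-portmap conflist looks like the following; the subnet, names and plugin fields here are assumptions, not the exact payload minikube writes:

	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.3.1",
	      "name": "k8s-pod-network",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isGateway": true,
	          "ipMasq": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF
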
	I1205 21:46:52.165689  357831 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:46:52.165813  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:52.165817  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-500648 minikube.k8s.io/updated_at=2024_12_05T21_46_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=no-preload-500648 minikube.k8s.io/primary=true
	I1205 21:46:52.194084  357831 ops.go:34] apiserver oom_adj: -16
	I1205 21:46:52.342692  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:52.843802  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:53.503222  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:46:53.503418  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:46:53.342932  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:53.843712  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:54.343785  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:54.843090  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:55.342889  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:55.843250  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:56.343676  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:56.452001  357831 kubeadm.go:1113] duration metric: took 4.286277257s to wait for elevateKubeSystemPrivileges
	I1205 21:46:56.452048  357831 kubeadm.go:394] duration metric: took 5m34.195010212s to StartCluster
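
	The repeated "kubectl get sa default" runs above are a poll loop waiting for the default ServiceAccount to exist before kube-system privileges are elevated; the same wait looks roughly like this (the 0.5s interval is an assumption):

	    until sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default \
	          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done
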
	I1205 21:46:56.452076  357831 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:46:56.452204  357831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:46:56.454793  357831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:46:56.455206  357831 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:46:56.455333  357831 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:46:56.455476  357831 addons.go:69] Setting storage-provisioner=true in profile "no-preload-500648"
	I1205 21:46:56.455480  357831 config.go:182] Loaded profile config "no-preload-500648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:46:56.455502  357831 addons.go:234] Setting addon storage-provisioner=true in "no-preload-500648"
	W1205 21:46:56.455514  357831 addons.go:243] addon storage-provisioner should already be in state true
	I1205 21:46:56.455528  357831 addons.go:69] Setting default-storageclass=true in profile "no-preload-500648"
	I1205 21:46:56.455559  357831 host.go:66] Checking if "no-preload-500648" exists ...
	I1205 21:46:56.455544  357831 addons.go:69] Setting metrics-server=true in profile "no-preload-500648"
	I1205 21:46:56.455585  357831 addons.go:234] Setting addon metrics-server=true in "no-preload-500648"
	W1205 21:46:56.455599  357831 addons.go:243] addon metrics-server should already be in state true
	I1205 21:46:56.455646  357831 host.go:66] Checking if "no-preload-500648" exists ...
	I1205 21:46:56.455564  357831 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-500648"
	I1205 21:46:56.456041  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.456085  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.456090  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.456129  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.456139  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.456201  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.456945  357831 out.go:177] * Verifying Kubernetes components...
	I1205 21:46:56.462035  357831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:46:56.474102  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35145
	I1205 21:46:56.474771  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.475414  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.475442  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.475459  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36489
	I1205 21:46:56.475974  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.476137  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.476569  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.476612  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.476693  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.476706  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.477058  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.477252  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.477388  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36469
	I1205 21:46:56.477924  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.478472  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.478498  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.478910  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.479488  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.479537  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.481716  357831 addons.go:234] Setting addon default-storageclass=true in "no-preload-500648"
	W1205 21:46:56.481735  357831 addons.go:243] addon default-storageclass should already be in state true
	I1205 21:46:56.481768  357831 host.go:66] Checking if "no-preload-500648" exists ...
	I1205 21:46:56.482186  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.482241  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.497613  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
	I1205 21:46:56.499026  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.500026  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.500053  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.501992  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.502774  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.503014  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37339
	I1205 21:46:56.503560  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.504199  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.504220  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.504720  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.504930  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.506107  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:46:56.506961  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:46:56.508481  357831 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:46:56.509688  357831 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 21:46:56.428849  357296 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.327265456s)
	I1205 21:46:56.428959  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:46:56.445569  357296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:46:56.458431  357296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:46:56.478171  357296 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:46:56.478202  357296 kubeadm.go:157] found existing configuration files:
	
	I1205 21:46:56.478252  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:46:56.492246  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:46:56.492317  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:46:56.511252  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:46:56.529865  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:46:56.529993  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:46:56.542465  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:46:56.554125  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:46:56.554201  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:46:56.564805  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:46:56.574418  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:46:56.574509  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:46:56.587684  357296 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:46:56.643896  357296 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 21:46:56.643994  357296 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:46:56.758721  357296 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:46:56.758878  357296 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:46:56.759002  357296 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 21:46:56.770017  357296 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:46:56.771897  357296 out.go:235]   - Generating certificates and keys ...
	I1205 21:46:56.772014  357296 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:46:56.772097  357296 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:46:56.772211  357296 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:46:56.772312  357296 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:46:56.772411  357296 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:46:56.772485  357296 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:46:56.772569  357296 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:46:56.772701  357296 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:46:56.772839  357296 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:46:56.772978  357296 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:46:56.773044  357296 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:46:56.773122  357296 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:46:57.097605  357296 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:46:57.252307  357296 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 21:46:56.510816  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41335
	I1205 21:46:56.511503  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.511959  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.511975  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.512788  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.513412  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.513449  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.514695  357831 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:46:56.514710  357831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 21:46:56.514728  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:46:56.515562  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 21:46:56.515580  357831 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 21:46:56.515606  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:46:56.519790  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.520365  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.521033  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:46:56.521059  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.521366  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:46:56.521709  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:46:56.522251  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:46:56.522340  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:46:56.522357  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.522563  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:46:56.523091  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:46:56.523374  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:46:56.523546  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:46:56.523751  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:46:56.535368  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I1205 21:46:56.535890  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.536613  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.536640  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.537046  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.537264  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.539328  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:46:56.539566  357831 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 21:46:56.539582  357831 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 21:46:56.539601  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:46:56.543910  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.544687  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:46:56.544721  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.544779  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:46:56.544991  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:46:56.545101  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:46:56.545227  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:46:56.703959  357831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:46:56.727549  357831 node_ready.go:35] waiting up to 6m0s for node "no-preload-500648" to be "Ready" ...
	I1205 21:46:56.782087  357831 node_ready.go:49] node "no-preload-500648" has status "Ready":"True"
	I1205 21:46:56.782124  357831 node_ready.go:38] duration metric: took 54.531096ms for node "no-preload-500648" to be "Ready" ...
	I1205 21:46:56.782138  357831 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
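
	The label selectors listed in the pod_ready line above can also be waited on directly with kubectl; a sketch with the same 6m budget, selectors copied from the log:

	    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m
	    kubectl -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=6m
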
	I1205 21:46:56.826592  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 21:46:56.826630  357831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 21:46:56.828646  357831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 21:46:56.829857  357831 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace to be "Ready" ...
	I1205 21:46:56.866720  357831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:46:56.903318  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 21:46:56.903355  357831 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 21:46:57.007535  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:46:57.007573  357831 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 21:46:57.100723  357831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:46:57.134239  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.134279  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.134710  357831 main.go:141] libmachine: (no-preload-500648) DBG | Closing plugin on server side
	I1205 21:46:57.134711  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.134770  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.134785  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.134793  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.135032  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.135053  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.146695  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.146730  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.147103  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.147154  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.625311  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.625353  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.625696  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.625755  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.625793  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.625805  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.625698  357831 main.go:141] libmachine: (no-preload-500648) DBG | Closing plugin on server side
	I1205 21:46:57.626115  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.626144  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.907526  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.907557  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.907895  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.907911  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.907920  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.907927  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.908170  357831 main.go:141] libmachine: (no-preload-500648) DBG | Closing plugin on server side
	I1205 21:46:57.908202  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.908235  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.908260  357831 addons.go:475] Verifying addon metrics-server=true in "no-preload-500648"
	I1205 21:46:57.909815  357831 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 21:46:57.605825  357296 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:46:57.683035  357296 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:46:57.977494  357296 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:46:57.977852  357296 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:46:57.980442  357296 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:46:57.982293  357296 out.go:235]   - Booting up control plane ...
	I1205 21:46:57.982435  357296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:46:57.982555  357296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:46:57.982745  357296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:46:58.002995  357296 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:46:58.009140  357296 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:46:58.009256  357296 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:46:58.138869  357296 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 21:46:58.139045  357296 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 21:46:58.639981  357296 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.388842ms
	I1205 21:46:58.640142  357296 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 21:46:57.911073  357831 addons.go:510] duration metric: took 1.455746374s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
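
	With the addon manifests applied and "enable addons" reported done above, metrics-server readiness can be checked with plain kubectl; a sketch (the k8s-app=metrics-server label is assumed from the upstream manifests, and kubectl top only returns data once the server is actually serving metrics):

	    kubectl -n kube-system get deploy metrics-server
	    kubectl -n kube-system get pods -l k8s-app=metrics-server
	    kubectl top nodes
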
	I1205 21:46:58.838170  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:00.337951  357831 pod_ready.go:93] pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:00.337987  357831 pod_ready.go:82] duration metric: took 3.508095495s for pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:00.338002  357831 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:02.345422  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:03.641918  357296 kubeadm.go:310] [api-check] The API server is healthy after 5.001977261s
	I1205 21:47:03.660781  357296 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 21:47:03.675811  357296 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 21:47:03.729810  357296 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 21:47:03.730021  357296 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-425614 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 21:47:03.746963  357296 kubeadm.go:310] [bootstrap-token] Using token: b8c9g8.26tr6ftn8ovs2kwi
	I1205 21:47:03.748213  357296 out.go:235]   - Configuring RBAC rules ...
	I1205 21:47:03.748373  357296 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 21:47:03.755934  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 21:47:03.770479  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 21:47:03.775661  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 21:47:03.783490  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 21:47:03.789562  357296 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 21:47:04.049714  357296 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 21:47:04.486306  357296 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 21:47:05.053561  357296 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 21:47:05.053590  357296 kubeadm.go:310] 
	I1205 21:47:05.053708  357296 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 21:47:05.053738  357296 kubeadm.go:310] 
	I1205 21:47:05.053846  357296 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 21:47:05.053868  357296 kubeadm.go:310] 
	I1205 21:47:05.053915  357296 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 21:47:05.053997  357296 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 21:47:05.054068  357296 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 21:47:05.054078  357296 kubeadm.go:310] 
	I1205 21:47:05.054160  357296 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 21:47:05.054170  357296 kubeadm.go:310] 
	I1205 21:47:05.054239  357296 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 21:47:05.054248  357296 kubeadm.go:310] 
	I1205 21:47:05.054338  357296 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 21:47:05.054449  357296 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 21:47:05.054543  357296 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 21:47:05.054553  357296 kubeadm.go:310] 
	I1205 21:47:05.054660  357296 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 21:47:05.054796  357296 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 21:47:05.054822  357296 kubeadm.go:310] 
	I1205 21:47:05.054933  357296 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token b8c9g8.26tr6ftn8ovs2kwi \
	I1205 21:47:05.055054  357296 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 \
	I1205 21:47:05.055090  357296 kubeadm.go:310] 	--control-plane 
	I1205 21:47:05.055098  357296 kubeadm.go:310] 
	I1205 21:47:05.055194  357296 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 21:47:05.055206  357296 kubeadm.go:310] 
	I1205 21:47:05.055314  357296 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token b8c9g8.26tr6ftn8ovs2kwi \
	I1205 21:47:05.055451  357296 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 
	I1205 21:47:05.056406  357296 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:47:05.056455  357296 cni.go:84] Creating CNI manager for ""
	I1205 21:47:05.056466  357296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:47:05.058934  357296 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:47:05.060223  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:47:05.072177  357296 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 21:47:05.094496  357296 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:47:05.094587  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:05.094625  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-425614 minikube.k8s.io/updated_at=2024_12_05T21_47_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=embed-certs-425614 minikube.k8s.io/primary=true
	I1205 21:47:05.305636  357296 ops.go:34] apiserver oom_adj: -16
	I1205 21:47:05.305777  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:05.806175  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:06.306904  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:06.806069  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:07.306356  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:04.849777  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:07.345961  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:07.847289  357831 pod_ready.go:93] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.847323  357831 pod_ready.go:82] duration metric: took 7.509312906s for pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.847334  357831 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.853980  357831 pod_ready.go:93] pod "etcd-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.854016  357831 pod_ready.go:82] duration metric: took 6.672926ms for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.854030  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.861465  357831 pod_ready.go:93] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.861502  357831 pod_ready.go:82] duration metric: took 7.461726ms for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.861517  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.867007  357831 pod_ready.go:93] pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.867035  357831 pod_ready.go:82] duration metric: took 5.509386ms for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.867048  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-98xqk" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.872882  357831 pod_ready.go:93] pod "kube-proxy-98xqk" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.872917  357831 pod_ready.go:82] duration metric: took 5.859646ms for pod "kube-proxy-98xqk" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.872932  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:08.243619  357831 pod_ready.go:93] pod "kube-scheduler-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:08.243654  357831 pod_ready.go:82] duration metric: took 370.71203ms for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:08.243666  357831 pod_ready.go:39] duration metric: took 11.461510993s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
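	The per-pod readiness loop above can be approximated by hand with kubectl; this is only a sketch (label selectors copied from the log line above, context name assumed to match the profile, and not the harness's actual implementation):

	  kubectl --context no-preload-500648 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=360s
	  kubectl --context no-preload-500648 -n kube-system wait --for=condition=Ready pod -l component=kube-apiserver --timeout=360s
	  # repeat for component=etcd, component=kube-controller-manager, k8s-app=kube-proxy, component=kube-scheduler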
	I1205 21:47:08.243744  357831 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:47:08.243826  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:47:08.260473  357831 api_server.go:72] duration metric: took 11.805209892s to wait for apiserver process to appear ...
	I1205 21:47:08.260511  357831 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:47:08.260538  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:47:08.264975  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 200:
	ok
	I1205 21:47:08.266178  357831 api_server.go:141] control plane version: v1.31.2
	I1205 21:47:08.266206  357831 api_server.go:131] duration metric: took 5.687994ms to wait for apiserver health ...
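	The healthz probe logged at api_server.go:253/279 is a plain HTTPS GET; it can be reproduced from the CI host (endpoint copied from the log, -k assumed because the cluster CA is not in the host trust store):

	  curl -k https://192.168.50.141:8443/healthz
	  # a healthy apiserver answers with the bare string: ok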
	I1205 21:47:08.266214  357831 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:47:08.446775  357831 system_pods.go:59] 9 kube-system pods found
	I1205 21:47:08.446811  357831 system_pods.go:61] "coredns-7c65d6cfc9-6gw87" [5551f12d-28e2-4abc-aa12-df5e94a50df9] Running
	I1205 21:47:08.446817  357831 system_pods.go:61] "coredns-7c65d6cfc9-tmd2t" [e3e98611-66c3-4647-8870-bff5ff6ec596] Running
	I1205 21:47:08.446821  357831 system_pods.go:61] "etcd-no-preload-500648" [74521d40-5021-4ced-b38c-526c57f76ef1] Running
	I1205 21:47:08.446824  357831 system_pods.go:61] "kube-apiserver-no-preload-500648" [c145b867-1112-495e-bbe4-a95582f41190] Running
	I1205 21:47:08.446828  357831 system_pods.go:61] "kube-controller-manager-no-preload-500648" [534c1c28-2a5c-411d-8d26-1636d92ed794] Running
	I1205 21:47:08.446831  357831 system_pods.go:61] "kube-proxy-98xqk" [4b383ba3-46c2-45df-9035-270593e44817] Running
	I1205 21:47:08.446834  357831 system_pods.go:61] "kube-scheduler-no-preload-500648" [7d088cd2-8ba3-4b3b-ab99-233ff13e2710] Running
	I1205 21:47:08.446841  357831 system_pods.go:61] "metrics-server-6867b74b74-ftmzl" [c541d531-1622-4528-af4c-f6147f47e8f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:08.446881  357831 system_pods.go:61] "storage-provisioner" [62bd3876-3f92-4cc1-9e07-860628e8a746] Running
	I1205 21:47:08.446887  357831 system_pods.go:74] duration metric: took 180.667886ms to wait for pod list to return data ...
	I1205 21:47:08.446895  357831 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:47:08.643352  357831 default_sa.go:45] found service account: "default"
	I1205 21:47:08.643389  357831 default_sa.go:55] duration metric: took 196.485646ms for default service account to be created ...
	I1205 21:47:08.643405  357831 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:47:08.847094  357831 system_pods.go:86] 9 kube-system pods found
	I1205 21:47:08.847129  357831 system_pods.go:89] "coredns-7c65d6cfc9-6gw87" [5551f12d-28e2-4abc-aa12-df5e94a50df9] Running
	I1205 21:47:08.847136  357831 system_pods.go:89] "coredns-7c65d6cfc9-tmd2t" [e3e98611-66c3-4647-8870-bff5ff6ec596] Running
	I1205 21:47:08.847140  357831 system_pods.go:89] "etcd-no-preload-500648" [74521d40-5021-4ced-b38c-526c57f76ef1] Running
	I1205 21:47:08.847144  357831 system_pods.go:89] "kube-apiserver-no-preload-500648" [c145b867-1112-495e-bbe4-a95582f41190] Running
	I1205 21:47:08.847147  357831 system_pods.go:89] "kube-controller-manager-no-preload-500648" [534c1c28-2a5c-411d-8d26-1636d92ed794] Running
	I1205 21:47:08.847150  357831 system_pods.go:89] "kube-proxy-98xqk" [4b383ba3-46c2-45df-9035-270593e44817] Running
	I1205 21:47:08.847153  357831 system_pods.go:89] "kube-scheduler-no-preload-500648" [7d088cd2-8ba3-4b3b-ab99-233ff13e2710] Running
	I1205 21:47:08.847162  357831 system_pods.go:89] "metrics-server-6867b74b74-ftmzl" [c541d531-1622-4528-af4c-f6147f47e8f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:08.847168  357831 system_pods.go:89] "storage-provisioner" [62bd3876-3f92-4cc1-9e07-860628e8a746] Running
	I1205 21:47:08.847181  357831 system_pods.go:126] duration metric: took 203.767291ms to wait for k8s-apps to be running ...
	I1205 21:47:08.847195  357831 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:47:08.847250  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:47:08.862597  357831 system_svc.go:56] duration metric: took 15.382518ms WaitForService to wait for kubelet
	I1205 21:47:08.862633  357831 kubeadm.go:582] duration metric: took 12.407380073s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:47:08.862656  357831 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:47:09.043731  357831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:47:09.043757  357831 node_conditions.go:123] node cpu capacity is 2
	I1205 21:47:09.043771  357831 node_conditions.go:105] duration metric: took 181.109771ms to run NodePressure ...
	I1205 21:47:09.043784  357831 start.go:241] waiting for startup goroutines ...
	I1205 21:47:09.043791  357831 start.go:246] waiting for cluster config update ...
	I1205 21:47:09.043800  357831 start.go:255] writing updated cluster config ...
	I1205 21:47:09.044059  357831 ssh_runner.go:195] Run: rm -f paused
	I1205 21:47:09.097126  357831 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:47:09.098929  357831 out.go:177] * Done! kubectl is now configured to use "no-preload-500648" cluster and "default" namespace by default
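	Once a profile reports "Done!", the kubeconfig context carries the profile name; a minimal sanity check from the CI host (context name taken from the line above, a sketch rather than anything the test runs):

	  kubectl --context no-preload-500648 get nodes
	  kubectl --context no-preload-500648 -n kube-system get pods
	  # the node should be Ready and the kube-system pods listed earlier should be Running (metrics-server still Pending)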
	I1205 21:47:07.806545  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:08.306666  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:08.806027  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:09.306632  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:09.463654  357296 kubeadm.go:1113] duration metric: took 4.369155567s to wait for elevateKubeSystemPrivileges
	I1205 21:47:09.463693  357296 kubeadm.go:394] duration metric: took 4m57.985307568s to StartCluster
	I1205 21:47:09.463727  357296 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:47:09.463823  357296 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:47:09.465989  357296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:47:09.466324  357296 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:47:09.466538  357296 config.go:182] Loaded profile config "embed-certs-425614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:47:09.466462  357296 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:47:09.466593  357296 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-425614"
	I1205 21:47:09.466605  357296 addons.go:69] Setting default-storageclass=true in profile "embed-certs-425614"
	I1205 21:47:09.466623  357296 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-425614"
	I1205 21:47:09.466625  357296 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-425614"
	W1205 21:47:09.466632  357296 addons.go:243] addon storage-provisioner should already be in state true
	I1205 21:47:09.466670  357296 host.go:66] Checking if "embed-certs-425614" exists ...
	I1205 21:47:09.466598  357296 addons.go:69] Setting metrics-server=true in profile "embed-certs-425614"
	I1205 21:47:09.466700  357296 addons.go:234] Setting addon metrics-server=true in "embed-certs-425614"
	W1205 21:47:09.466713  357296 addons.go:243] addon metrics-server should already be in state true
	I1205 21:47:09.466754  357296 host.go:66] Checking if "embed-certs-425614" exists ...
	I1205 21:47:09.467117  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.467136  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.467168  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.467169  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.467193  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.467287  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.468249  357296 out.go:177] * Verifying Kubernetes components...
	I1205 21:47:09.471163  357296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:47:09.485298  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36079
	I1205 21:47:09.485497  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39781
	I1205 21:47:09.485948  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.486029  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.486534  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.486563  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.486657  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.486685  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.486742  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
	I1205 21:47:09.486978  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.487032  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.487232  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.487236  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.487624  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.487674  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.487789  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.487833  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.488214  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.488851  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.488896  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.491055  357296 addons.go:234] Setting addon default-storageclass=true in "embed-certs-425614"
	W1205 21:47:09.491080  357296 addons.go:243] addon default-storageclass should already be in state true
	I1205 21:47:09.491112  357296 host.go:66] Checking if "embed-certs-425614" exists ...
	I1205 21:47:09.491489  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.491536  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.505783  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42923
	I1205 21:47:09.506685  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.507389  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.507418  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.507849  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.508072  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.509039  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44837
	I1205 21:47:09.509662  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.510051  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:47:09.510539  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.510554  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.510945  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.511175  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.512088  357296 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 21:47:09.513011  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:47:09.513375  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 21:47:09.513394  357296 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 21:47:09.513411  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:47:09.514693  357296 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:47:09.516172  357296 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:47:09.516192  357296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 21:47:09.516216  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:47:09.516960  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.517462  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:47:09.517489  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.517621  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:47:09.517830  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45697
	I1205 21:47:09.518205  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:47:09.518478  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.519298  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.519323  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.519342  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:47:09.519547  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:47:09.520304  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.521019  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.521625  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.521698  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.522476  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:47:09.522492  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.522707  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:47:09.522891  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:47:09.523193  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:47:09.523744  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:47:09.540654  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41511
	I1205 21:47:09.541226  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.541763  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.541790  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.542269  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.542512  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.544396  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:47:09.544676  357296 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 21:47:09.544693  357296 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 21:47:09.544715  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:47:09.548238  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.548523  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:47:09.548562  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.548702  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:47:09.548931  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:47:09.549113  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:47:09.549291  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:47:09.668547  357296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:47:09.687925  357296 node_ready.go:35] waiting up to 6m0s for node "embed-certs-425614" to be "Ready" ...
	I1205 21:47:09.697641  357296 node_ready.go:49] node "embed-certs-425614" has status "Ready":"True"
	I1205 21:47:09.697666  357296 node_ready.go:38] duration metric: took 9.705064ms for node "embed-certs-425614" to be "Ready" ...
	I1205 21:47:09.697675  357296 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:47:09.704768  357296 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7sjzc" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:09.753311  357296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:47:09.793855  357296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 21:47:09.799918  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 21:47:09.799943  357296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 21:47:09.845109  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 21:47:09.845140  357296 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 21:47:09.910753  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:47:09.910784  357296 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 21:47:09.965476  357296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:47:10.269090  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269126  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.269096  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269235  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.269576  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.269640  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.269641  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.269620  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.269587  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.269745  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.269758  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269770  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.269664  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269860  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.270030  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.270047  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.270058  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.270064  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.270071  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.301524  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.301550  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.301895  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.301936  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.926349  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.926377  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.926716  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.926741  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.926752  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.926761  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.926768  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.927106  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.927155  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.927166  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.927180  357296 addons.go:475] Verifying addon metrics-server=true in "embed-certs-425614"
	I1205 21:47:10.929085  357296 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1205 21:47:10.930576  357296 addons.go:510] duration metric: took 1.464128267s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
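	Of the three addons enabled here, metrics-server is the only one still reported Pending further down; a hedged manual check (object names assumed from the stock minikube metrics-server manifests applied above):

	  kubectl --context embed-certs-425614 -n kube-system get deploy metrics-server
	  kubectl --context embed-certs-425614 get apiservice v1beta1.metrics.k8s.io
	  kubectl --context embed-certs-425614 top nodes   # only succeeds once the metrics-server pod leaves Pending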
	I1205 21:47:11.713166  357296 pod_ready.go:93] pod "coredns-7c65d6cfc9-7sjzc" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:11.713198  357296 pod_ready.go:82] duration metric: took 2.008396953s for pod "coredns-7c65d6cfc9-7sjzc" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:11.713211  357296 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:13.503828  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:47:13.504090  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:47:13.720235  357296 pod_ready.go:103] pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:15.220057  357296 pod_ready.go:93] pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:15.220088  357296 pod_ready.go:82] duration metric: took 3.506868256s for pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.220102  357296 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.225450  357296 pod_ready.go:93] pod "etcd-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:15.225477  357296 pod_ready.go:82] duration metric: took 5.36753ms for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.225487  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.231162  357296 pod_ready.go:93] pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:15.231191  357296 pod_ready.go:82] duration metric: took 5.697176ms for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.231203  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.739452  357296 pod_ready.go:93] pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:16.739480  357296 pod_ready.go:82] duration metric: took 1.508268597s for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.739490  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k2zgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.745046  357296 pod_ready.go:93] pod "kube-proxy-k2zgx" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:16.745069  357296 pod_ready.go:82] duration metric: took 5.572779ms for pod "kube-proxy-k2zgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.745077  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:18.752726  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:19.252349  357296 pod_ready.go:93] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:19.252381  357296 pod_ready.go:82] duration metric: took 2.507297045s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:19.252391  357296 pod_ready.go:39] duration metric: took 9.554704391s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:47:19.252414  357296 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:47:19.252484  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:47:19.271589  357296 api_server.go:72] duration metric: took 9.805214037s to wait for apiserver process to appear ...
	I1205 21:47:19.271628  357296 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:47:19.271659  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:47:19.276411  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 200:
	ok
	I1205 21:47:19.277872  357296 api_server.go:141] control plane version: v1.31.2
	I1205 21:47:19.277926  357296 api_server.go:131] duration metric: took 6.2875ms to wait for apiserver health ...
	I1205 21:47:19.277941  357296 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:47:19.283899  357296 system_pods.go:59] 9 kube-system pods found
	I1205 21:47:19.283931  357296 system_pods.go:61] "coredns-7c65d6cfc9-7sjzc" [9688302a-e62f-46e6-8182-4639deb5ac5a] Running
	I1205 21:47:19.283937  357296 system_pods.go:61] "coredns-7c65d6cfc9-qfwx8" [d6411440-5d63-4ea4-b1ba-58337dd6bb10] Running
	I1205 21:47:19.283940  357296 system_pods.go:61] "etcd-embed-certs-425614" [2f0ed9d7-d48b-4d68-96bb-5e3f6de80967] Running
	I1205 21:47:19.283944  357296 system_pods.go:61] "kube-apiserver-embed-certs-425614" [86a3b6ce-6b70-4af9-bf4a-2615e7a45c3f] Running
	I1205 21:47:19.283947  357296 system_pods.go:61] "kube-controller-manager-embed-certs-425614" [589710e5-a8e3-48ed-a57a-1fbf0219359a] Running
	I1205 21:47:19.283952  357296 system_pods.go:61] "kube-proxy-k2zgx" [8e5c4695-0631-486d-9f2b-3529f6e808e9] Running
	I1205 21:47:19.283955  357296 system_pods.go:61] "kube-scheduler-embed-certs-425614" [dec1c4cb-9e21-42f0-9e03-0651fdfa35e9] Running
	I1205 21:47:19.283962  357296 system_pods.go:61] "metrics-server-6867b74b74-hghhs" [bc00b855-1cc8-45a1-92cb-b459ef0b40eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:19.283968  357296 system_pods.go:61] "storage-provisioner" [76565dbe-57b0-4d39-abb0-ca6787cd3740] Running
	I1205 21:47:19.283979  357296 system_pods.go:74] duration metric: took 6.030697ms to wait for pod list to return data ...
	I1205 21:47:19.283989  357296 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:47:19.287433  357296 default_sa.go:45] found service account: "default"
	I1205 21:47:19.287469  357296 default_sa.go:55] duration metric: took 3.461011ms for default service account to be created ...
	I1205 21:47:19.287482  357296 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:47:19.420448  357296 system_pods.go:86] 9 kube-system pods found
	I1205 21:47:19.420493  357296 system_pods.go:89] "coredns-7c65d6cfc9-7sjzc" [9688302a-e62f-46e6-8182-4639deb5ac5a] Running
	I1205 21:47:19.420503  357296 system_pods.go:89] "coredns-7c65d6cfc9-qfwx8" [d6411440-5d63-4ea4-b1ba-58337dd6bb10] Running
	I1205 21:47:19.420510  357296 system_pods.go:89] "etcd-embed-certs-425614" [2f0ed9d7-d48b-4d68-96bb-5e3f6de80967] Running
	I1205 21:47:19.420516  357296 system_pods.go:89] "kube-apiserver-embed-certs-425614" [86a3b6ce-6b70-4af9-bf4a-2615e7a45c3f] Running
	I1205 21:47:19.420531  357296 system_pods.go:89] "kube-controller-manager-embed-certs-425614" [589710e5-a8e3-48ed-a57a-1fbf0219359a] Running
	I1205 21:47:19.420536  357296 system_pods.go:89] "kube-proxy-k2zgx" [8e5c4695-0631-486d-9f2b-3529f6e808e9] Running
	I1205 21:47:19.420542  357296 system_pods.go:89] "kube-scheduler-embed-certs-425614" [dec1c4cb-9e21-42f0-9e03-0651fdfa35e9] Running
	I1205 21:47:19.420551  357296 system_pods.go:89] "metrics-server-6867b74b74-hghhs" [bc00b855-1cc8-45a1-92cb-b459ef0b40eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:19.420560  357296 system_pods.go:89] "storage-provisioner" [76565dbe-57b0-4d39-abb0-ca6787cd3740] Running
	I1205 21:47:19.420570  357296 system_pods.go:126] duration metric: took 133.080361ms to wait for k8s-apps to be running ...
	I1205 21:47:19.420581  357296 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:47:19.420640  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:47:19.436855  357296 system_svc.go:56] duration metric: took 16.264247ms WaitForService to wait for kubelet
	I1205 21:47:19.436889  357296 kubeadm.go:582] duration metric: took 9.970523712s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:47:19.436913  357296 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:47:19.617690  357296 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:47:19.617724  357296 node_conditions.go:123] node cpu capacity is 2
	I1205 21:47:19.617737  357296 node_conditions.go:105] duration metric: took 180.817811ms to run NodePressure ...
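	node_conditions.go reads the node's reported capacity and pressure conditions; the same data is visible with jsonpath (node name from this profile; a sketch, not the test's own query):

	  kubectl --context embed-certs-425614 get node embed-certs-425614 \
	    -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
	  kubectl --context embed-certs-425614 get node embed-certs-425614 -o jsonpath='{.status.capacity}'
	  # capacity should echo the values logged above: cpu 2, ephemeral-storage 17734596Ki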
	I1205 21:47:19.617753  357296 start.go:241] waiting for startup goroutines ...
	I1205 21:47:19.617763  357296 start.go:246] waiting for cluster config update ...
	I1205 21:47:19.617782  357296 start.go:255] writing updated cluster config ...
	I1205 21:47:19.618105  357296 ssh_runner.go:195] Run: rm -f paused
	I1205 21:47:19.670657  357296 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:47:19.672596  357296 out.go:177] * Done! kubectl is now configured to use "embed-certs-425614" cluster and "default" namespace by default
	I1205 21:47:53.504952  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:47:53.505292  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:47:53.505331  358357 kubeadm.go:310] 
	I1205 21:47:53.505381  358357 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 21:47:53.505424  358357 kubeadm.go:310] 		timed out waiting for the condition
	I1205 21:47:53.505431  358357 kubeadm.go:310] 
	I1205 21:47:53.505493  358357 kubeadm.go:310] 	This error is likely caused by:
	I1205 21:47:53.505540  358357 kubeadm.go:310] 		- The kubelet is not running
	I1205 21:47:53.505687  358357 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 21:47:53.505696  358357 kubeadm.go:310] 
	I1205 21:47:53.505840  358357 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 21:47:53.505918  358357 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 21:47:53.505969  358357 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 21:47:53.505978  358357 kubeadm.go:310] 
	I1205 21:47:53.506113  358357 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 21:47:53.506224  358357 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 21:47:53.506234  358357 kubeadm.go:310] 
	I1205 21:47:53.506378  358357 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 21:47:53.506488  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 21:47:53.506579  358357 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 21:47:53.506669  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 21:47:53.506680  358357 kubeadm.go:310] 
	I1205 21:47:53.507133  358357 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:47:53.507293  358357 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 21:47:53.507399  358357 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1205 21:47:53.507583  358357 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1205 21:47:53.507635  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:47:58.918917  358357 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.411249531s)
	I1205 21:47:58.919047  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:47:58.933824  358357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:47:58.943937  358357 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:47:58.943961  358357 kubeadm.go:157] found existing configuration files:
	
	I1205 21:47:58.944019  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:47:58.953302  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:47:58.953376  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:47:58.963401  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:47:58.973241  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:47:58.973342  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:47:58.982980  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:47:58.992301  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:47:58.992376  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:47:59.002794  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:47:59.012679  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:47:59.012749  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:47:59.023775  358357 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:47:59.094520  358357 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 21:47:59.094668  358357 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:47:59.233248  358357 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:47:59.233420  358357 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:47:59.233569  358357 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 21:47:59.418344  358357 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:47:59.420333  358357 out.go:235]   - Generating certificates and keys ...
	I1205 21:47:59.420467  358357 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:47:59.420553  358357 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:47:59.422458  358357 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:47:59.422606  358357 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:47:59.422717  358357 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:47:59.422802  358357 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:47:59.422889  358357 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:47:59.422998  358357 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:47:59.423099  358357 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:47:59.423222  358357 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:47:59.423283  358357 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:47:59.423376  358357 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:47:59.599862  358357 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:47:59.763783  358357 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:47:59.854070  358357 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:48:00.213384  358357 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:48:00.228512  358357 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:48:00.229454  358357 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:48:00.229505  358357 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:48:00.369826  358357 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:48:00.371919  358357 out.go:235]   - Booting up control plane ...
	I1205 21:48:00.372059  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:48:00.382814  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:48:00.384284  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:48:00.385894  358357 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:48:00.388267  358357 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 21:48:40.389474  358357 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 21:48:40.389611  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:48:40.389883  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:48:45.390223  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:48:45.390529  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:48:55.390550  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:48:55.390784  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:49:15.391410  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:49:15.391608  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:49:55.392061  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:49:55.392321  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:49:55.392332  358357 kubeadm.go:310] 
	I1205 21:49:55.392403  358357 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 21:49:55.392464  358357 kubeadm.go:310] 		timed out waiting for the condition
	I1205 21:49:55.392485  358357 kubeadm.go:310] 
	I1205 21:49:55.392538  358357 kubeadm.go:310] 	This error is likely caused by:
	I1205 21:49:55.392587  358357 kubeadm.go:310] 		- The kubelet is not running
	I1205 21:49:55.392729  358357 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 21:49:55.392761  358357 kubeadm.go:310] 
	I1205 21:49:55.392882  358357 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 21:49:55.392933  358357 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 21:49:55.393025  358357 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 21:49:55.393057  358357 kubeadm.go:310] 
	I1205 21:49:55.393186  358357 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 21:49:55.393293  358357 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 21:49:55.393303  358357 kubeadm.go:310] 
	I1205 21:49:55.393453  358357 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 21:49:55.393602  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 21:49:55.393728  358357 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 21:49:55.393827  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 21:49:55.393841  358357 kubeadm.go:310] 
	I1205 21:49:55.394194  358357 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:49:55.394317  358357 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 21:49:55.394473  358357 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1205 21:49:55.394527  358357 kubeadm.go:394] duration metric: took 8m1.54013905s to StartCluster
	I1205 21:49:55.394598  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:49:55.394662  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:49:55.433172  358357 cri.go:89] found id: ""
	I1205 21:49:55.433203  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.433212  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:49:55.433219  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:49:55.433279  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:49:55.468595  358357 cri.go:89] found id: ""
	I1205 21:49:55.468631  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.468644  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:49:55.468652  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:49:55.468747  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:49:55.505657  358357 cri.go:89] found id: ""
	I1205 21:49:55.505692  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.505701  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:49:55.505709  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:49:55.505776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:49:55.542189  358357 cri.go:89] found id: ""
	I1205 21:49:55.542221  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.542230  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:49:55.542236  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:49:55.542303  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:49:55.575752  358357 cri.go:89] found id: ""
	I1205 21:49:55.575796  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.575810  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:49:55.575818  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:49:55.575878  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:49:55.611845  358357 cri.go:89] found id: ""
	I1205 21:49:55.611884  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.611899  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:49:55.611912  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:49:55.611999  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:49:55.650475  358357 cri.go:89] found id: ""
	I1205 21:49:55.650511  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.650524  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:49:55.650533  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:49:55.650605  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:49:55.684770  358357 cri.go:89] found id: ""
	I1205 21:49:55.684801  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.684811  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:49:55.684823  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:49:55.684839  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:49:55.752292  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:49:55.752331  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:49:55.752351  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:49:55.869601  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:49:55.869647  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:49:55.909724  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:49:55.909761  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:49:55.959825  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:49:55.959865  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 21:49:55.973692  358357 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1205 21:49:55.973759  358357 out.go:270] * 
	W1205 21:49:55.973866  358357 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 21:49:55.973884  358357 out.go:270] * 
	W1205 21:49:55.974814  358357 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 21:49:55.977939  358357 out.go:201] 
	W1205 21:49:55.979226  358357 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 21:49:55.979261  358357 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 21:49:55.979285  358357 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 21:49:55.980590  358357 out.go:201] 
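	The failure above is minikube's K8S_KUBELET_NOT_RUNNING exit: the kubelet never becomes healthy, kubeadm times out in the wait-control-plane phase, and no control-plane containers are found afterwards. A minimal sketch of the follow-up steps the log itself suggests, to be run on the affected node; the CONTAINERID value is a placeholder to be taken from the crictl ps output, and any extra minikube start flags used by this test profile are omitted:

		# Check whether the kubelet service is running and why it may have exited.
		sudo systemctl status kubelet
		sudo journalctl -xeu kubelet

		# The warning above notes the service is not enabled; enabling it lets systemd manage restarts.
		sudo systemctl enable kubelet.service

		# List Kubernetes containers via the CRI-O socket to spot a crashed control-plane component,
		# then inspect its logs (CONTAINERID is a placeholder from the ps output).
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause
		sudo crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID

		# Per the suggestion above, retry the start with the kubelet forced to the systemd cgroup driver.
		minikube start --extra-config=kubelet.cgroup-driver=systemd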
	
	
	==> CRI-O <==
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.344725886Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6c050739-4d37-462c-85c0-aee9089f0342 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.345715736Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=39aa8430-c796-41a9-a516-cb63879f7b2b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.346190155Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435771346165838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=39aa8430-c796-41a9-a516-cb63879f7b2b name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.346683198Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2c9225e0-59f1-4b17-9bc4-f303b99e335d name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.346738242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2c9225e0-59f1-4b17-9bc4-f303b99e335d name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.346994642Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:087e0b2f7a7dfc800c8489c6e4915feccbbb0a7180d0fc60e81d83e6159bfdca,PodSandboxId:012977c74613bb1720da7e0d5acbe081adab153776167f865d95643a9668b44a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435218547276635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6gw87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5551f12d-28e2-4abc-aa12-df5e94a50df9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d95068628f7a4f75baa7cdeb056c7635904ec594d5dbe087c63f0630b935a74,PodSandboxId:538515779b7919230d18ba35bffc18b0865175f928a0a65684ca783b5f4f020b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435218479041686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tmd2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e3e98611-66c3-4647-8870-bff5ff6ec596,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe59d0c476ff36531824d44d43a5d606de14f86f4a9f33b8d3ff0638d6366609,PodSandboxId:361a95cd215fb403e99c3fa2e404f5038202484c464d4a51199859a79da4b1c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1733435218059351573,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62bd3876-3f92-4cc1-9e07-860628e8a746,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d868f29f315f449c4cbd7111ed57dfac7aacb0b35d2cb453b082fc6807ef391,PodSandboxId:52dc5528b975b42aceb0813ac4cc0e6c8ae5b338bee5ffd91c7bce5f9f471b6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733435217188604884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-98xqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b383ba3-46c2-45df-9035-270593e44817,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76e8165328f382874c555fada9f0ce608c5cee4f310fc9c81700165ece5cda45,PodSandboxId:1c91ea3707be6b24dc66b0fd8838f8c56b0bf74fb3879962f8ecac761edce6f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733435206402788511,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9ba4fbfce2011ed6b44c9b7b199059,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bf7c136a1c6a5406b8ed1d5e21edd95f345b1425a19fe359a7e4fb41b92b3f1,PodSandboxId:6cdc57b59a4dd01b027825e4790413698cfaf9c8b274b304d325f689d39ba9e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733435206361713539,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2b3c191ea04e6e57d1e374543e8cd8,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b609bf884c7b1a9f9646257bab7c927b2c904925e5374ef393008dcc69ffb9ff,PodSandboxId:bae9f949b109ccc37f6c406b7dc95396ab0cb00c9a3166f7deebab8fa8c9512d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733435206330927431,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f0143772b58305ead4f000b0489269,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891d12f2aecd42662af6bfad075f5aa3e2f96677f0ffe96f6e10591fe9a2c43d,PodSandboxId:e573fc3e5a5a8a68cb60239e84214b829b46be3f04f629b8a5ee432a6335188f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733435206281976571,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534df59254648301964f51a82b53e9f5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c9db624e744ddc44c62936546527303f9c95c606ad4be4cc28baae923d15c0,PodSandboxId:7195904c388be7d68e545b2a9779552d18c82ad355bddbd21da183180b38ec1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733434906069473348,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9ba4fbfce2011ed6b44c9b7b199059,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2c9225e0-59f1-4b17-9bc4-f303b99e335d name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.380896191Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=97e1e4e5-19d3-4358-b9a5-48f45f56bac4 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.380974629Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=97e1e4e5-19d3-4358-b9a5-48f45f56bac4 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.382418648Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f73f8e2e-24b5-4828-805e-dc7830227989 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.382752898Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435771382730053,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f73f8e2e-24b5-4828-805e-dc7830227989 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.383292855Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9bae711b-e5a2-44e1-b938-b4fb9f8658be name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.383346947Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9bae711b-e5a2-44e1-b938-b4fb9f8658be name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.383546064Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:087e0b2f7a7dfc800c8489c6e4915feccbbb0a7180d0fc60e81d83e6159bfdca,PodSandboxId:012977c74613bb1720da7e0d5acbe081adab153776167f865d95643a9668b44a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435218547276635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6gw87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5551f12d-28e2-4abc-aa12-df5e94a50df9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d95068628f7a4f75baa7cdeb056c7635904ec594d5dbe087c63f0630b935a74,PodSandboxId:538515779b7919230d18ba35bffc18b0865175f928a0a65684ca783b5f4f020b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435218479041686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tmd2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e3e98611-66c3-4647-8870-bff5ff6ec596,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe59d0c476ff36531824d44d43a5d606de14f86f4a9f33b8d3ff0638d6366609,PodSandboxId:361a95cd215fb403e99c3fa2e404f5038202484c464d4a51199859a79da4b1c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1733435218059351573,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62bd3876-3f92-4cc1-9e07-860628e8a746,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d868f29f315f449c4cbd7111ed57dfac7aacb0b35d2cb453b082fc6807ef391,PodSandboxId:52dc5528b975b42aceb0813ac4cc0e6c8ae5b338bee5ffd91c7bce5f9f471b6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733435217188604884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-98xqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b383ba3-46c2-45df-9035-270593e44817,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76e8165328f382874c555fada9f0ce608c5cee4f310fc9c81700165ece5cda45,PodSandboxId:1c91ea3707be6b24dc66b0fd8838f8c56b0bf74fb3879962f8ecac761edce6f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733435206402788511,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9ba4fbfce2011ed6b44c9b7b199059,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bf7c136a1c6a5406b8ed1d5e21edd95f345b1425a19fe359a7e4fb41b92b3f1,PodSandboxId:6cdc57b59a4dd01b027825e4790413698cfaf9c8b274b304d325f689d39ba9e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733435206361713539,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2b3c191ea04e6e57d1e374543e8cd8,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b609bf884c7b1a9f9646257bab7c927b2c904925e5374ef393008dcc69ffb9ff,PodSandboxId:bae9f949b109ccc37f6c406b7dc95396ab0cb00c9a3166f7deebab8fa8c9512d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733435206330927431,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f0143772b58305ead4f000b0489269,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891d12f2aecd42662af6bfad075f5aa3e2f96677f0ffe96f6e10591fe9a2c43d,PodSandboxId:e573fc3e5a5a8a68cb60239e84214b829b46be3f04f629b8a5ee432a6335188f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733435206281976571,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534df59254648301964f51a82b53e9f5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c9db624e744ddc44c62936546527303f9c95c606ad4be4cc28baae923d15c0,PodSandboxId:7195904c388be7d68e545b2a9779552d18c82ad355bddbd21da183180b38ec1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733434906069473348,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9ba4fbfce2011ed6b44c9b7b199059,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=9bae711b-e5a2-44e1-b938-b4fb9f8658be name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.401548083Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a90fe15d-5189-4a3d-8f26-0cac8f73941d name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.401799519Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:15490fb312b07ccea32025a4a72c58459b6d4e9a5bb3597f12e089e98a5ec391,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-ftmzl,Uid:c541d531-1622-4528-af4c-f6147f47e8f5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733435218045032914,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-ftmzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c541d531-1622-4528-af4c-f6147f47e8f5,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T21:46:57.724415921Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:538515779b7919230d18ba35bffc18b0865175f928a0a65684ca783b5f4f020b,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-tmd2t,Uid:e3e98611-66c3-4647-8870-bff5ff6ec5
96,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733435218012736751,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-tmd2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e98611-66c3-4647-8870-bff5ff6ec596,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T21:46:56.798956893Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:012977c74613bb1720da7e0d5acbe081adab153776167f865d95643a9668b44a,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-6gw87,Uid:5551f12d-28e2-4abc-aa12-df5e94a50df9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733435217981284804,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-6gw87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5551f12d-28e2-4abc-aa12-df5e94a50df9,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:ma
p[string]string{kubernetes.io/config.seen: 2024-12-05T21:46:56.769074890Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:361a95cd215fb403e99c3fa2e404f5038202484c464d4a51199859a79da4b1c9,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:62bd3876-3f92-4cc1-9e07-860628e8a746,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733435217927413669,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62bd3876-3f92-4cc1-9e07-860628e8a746,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[
{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-05T21:46:57.620490301Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:52dc5528b975b42aceb0813ac4cc0e6c8ae5b338bee5ffd91c7bce5f9f471b6a,Metadata:&PodSandboxMetadata{Name:kube-proxy-98xqk,Uid:4b383ba3-46c2-45df-9035-270593e44817,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733435216952628137,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-98xqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b383ba3-46c2-45df-9035-270593e44817,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T21:46:56.633085062Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c91ea3707be6b24dc66b0fd8838f8c56b0bf74fb3879962f8ecac761edce6f1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-500648,Uid:8f9ba4fbfce2011ed6b44c9b7b199059,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1733435206179383805,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9ba4fbfce2011ed6b44c9b7b199059,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.141:8443,kubernetes.io/config.hash: 8f9ba4fbfce2011ed6b44c9b7b199059,kubernetes.io/config.seen: 2024-12-05T21:46:45.717381589Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6cdc57b59a4dd01b027825e47904136
98cfaf9c8b274b304d325f689d39ba9e9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-500648,Uid:3a2b3c191ea04e6e57d1e374543e8cd8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733435206173949548,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2b3c191ea04e6e57d1e374543e8cd8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3a2b3c191ea04e6e57d1e374543e8cd8,kubernetes.io/config.seen: 2024-12-05T21:46:45.717383194Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bae9f949b109ccc37f6c406b7dc95396ab0cb00c9a3166f7deebab8fa8c9512d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-500648,Uid:74f0143772b58305ead4f000b0489269,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733435206172779604,Labels:map[string]string{component: kube-sch
eduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f0143772b58305ead4f000b0489269,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 74f0143772b58305ead4f000b0489269,kubernetes.io/config.seen: 2024-12-05T21:46:45.717384435Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e573fc3e5a5a8a68cb60239e84214b829b46be3f04f629b8a5ee432a6335188f,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-500648,Uid:534df59254648301964f51a82b53e9f5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733435206145850157,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534df59254648301964f51a82b53e9f5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.141:237
9,kubernetes.io/config.hash: 534df59254648301964f51a82b53e9f5,kubernetes.io/config.seen: 2024-12-05T21:46:45.717377794Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=a90fe15d-5189-4a3d-8f26-0cac8f73941d name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.401553999Z" level=debug msg="Request: &ImageStatusRequest{Image:&ImageSpec{Image:fake.domain/registry.k8s.io/echoserver:1.4,Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T21:46:57.724415921Z,kubernetes.io/config.source: api,},UserSpecifiedImage:,RuntimeHandler:,},Verbose:false,}" file="otel-collector/interceptors.go:62" id=c13007ad-2aa8-4af6-bd6d-ca272737492e name=/runtime.v1.ImageService/ImageStatus
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.402096482Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" file="server/image_status.go:27" id=c13007ad-2aa8-4af6-bd6d-ca272737492e name=/runtime.v1.ImageService/ImageStatus
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.402194572Z" level=debug msg="reference \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]fake.domain/registry.k8s.io/echoserver:1.4\" does not resolve to an image ID" file="storage/storage_reference.go:149"
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.402239487Z" level=debug msg="Can't find fake.domain/registry.k8s.io/echoserver:1.4" file="server/image_status.go:97" id=c13007ad-2aa8-4af6-bd6d-ca272737492e name=/runtime.v1.ImageService/ImageStatus
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.402276563Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" file="server/image_status.go:111" id=c13007ad-2aa8-4af6-bd6d-ca272737492e name=/runtime.v1.ImageService/ImageStatus
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.402299711Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" file="server/image_status.go:33" id=c13007ad-2aa8-4af6-bd6d-ca272737492e name=/runtime.v1.ImageService/ImageStatus
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.402329727Z" level=debug msg="Response: &ImageStatusResponse{Image:nil,Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=c13007ad-2aa8-4af6-bd6d-ca272737492e name=/runtime.v1.ImageService/ImageStatus
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.403734432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=827aeadc-58ae-4bf7-97dc-f57eb8cab4d5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.403831480Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=827aeadc-58ae-4bf7-97dc-f57eb8cab4d5 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:56:11 no-preload-500648 crio[687]: time="2024-12-05 21:56:11.404073085Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:087e0b2f7a7dfc800c8489c6e4915feccbbb0a7180d0fc60e81d83e6159bfdca,PodSandboxId:012977c74613bb1720da7e0d5acbe081adab153776167f865d95643a9668b44a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435218547276635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6gw87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5551f12d-28e2-4abc-aa12-df5e94a50df9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d95068628f7a4f75baa7cdeb056c7635904ec594d5dbe087c63f0630b935a74,PodSandboxId:538515779b7919230d18ba35bffc18b0865175f928a0a65684ca783b5f4f020b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435218479041686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tmd2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e3e98611-66c3-4647-8870-bff5ff6ec596,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe59d0c476ff36531824d44d43a5d606de14f86f4a9f33b8d3ff0638d6366609,PodSandboxId:361a95cd215fb403e99c3fa2e404f5038202484c464d4a51199859a79da4b1c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1733435218059351573,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62bd3876-3f92-4cc1-9e07-860628e8a746,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d868f29f315f449c4cbd7111ed57dfac7aacb0b35d2cb453b082fc6807ef391,PodSandboxId:52dc5528b975b42aceb0813ac4cc0e6c8ae5b338bee5ffd91c7bce5f9f471b6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733435217188604884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-98xqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b383ba3-46c2-45df-9035-270593e44817,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76e8165328f382874c555fada9f0ce608c5cee4f310fc9c81700165ece5cda45,PodSandboxId:1c91ea3707be6b24dc66b0fd8838f8c56b0bf74fb3879962f8ecac761edce6f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733435206402788511,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9ba4fbfce2011ed6b44c9b7b199059,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bf7c136a1c6a5406b8ed1d5e21edd95f345b1425a19fe359a7e4fb41b92b3f1,PodSandboxId:6cdc57b59a4dd01b027825e4790413698cfaf9c8b274b304d325f689d39ba9e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733435206361713539,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2b3c191ea04e6e57d1e374543e8cd8,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b609bf884c7b1a9f9646257bab7c927b2c904925e5374ef393008dcc69ffb9ff,PodSandboxId:bae9f949b109ccc37f6c406b7dc95396ab0cb00c9a3166f7deebab8fa8c9512d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733435206330927431,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f0143772b58305ead4f000b0489269,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891d12f2aecd42662af6bfad075f5aa3e2f96677f0ffe96f6e10591fe9a2c43d,PodSandboxId:e573fc3e5a5a8a68cb60239e84214b829b46be3f04f629b8a5ee432a6335188f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733435206281976571,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534df59254648301964f51a82b53e9f5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=827aeadc-58ae-4bf7-97dc-f57eb8cab4d5 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	087e0b2f7a7df       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   012977c74613b       coredns-7c65d6cfc9-6gw87
	9d95068628f7a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   538515779b791       coredns-7c65d6cfc9-tmd2t
	fe59d0c476ff3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   361a95cd215fb       storage-provisioner
	5d868f29f315f       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   52dc5528b975b       kube-proxy-98xqk
	76e8165328f38       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            3                   1c91ea3707be6       kube-apiserver-no-preload-500648
	3bf7c136a1c6a       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   3                   6cdc57b59a4dd       kube-controller-manager-no-preload-500648
	b609bf884c7b1       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   bae9f949b109c       kube-scheduler-no-preload-500648
	891d12f2aecd4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   e573fc3e5a5a8       etcd-no-preload-500648
	19c9db624e744       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            2                   7195904c388be       kube-apiserver-no-preload-500648
	
	
	==> coredns [087e0b2f7a7dfc800c8489c6e4915feccbbb0a7180d0fc60e81d83e6159bfdca] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [9d95068628f7a4f75baa7cdeb056c7635904ec594d5dbe087c63f0630b935a74] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-500648
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-500648
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=no-preload-500648
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T21_46_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 21:46:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-500648
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 21:56:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 21:52:08 +0000   Thu, 05 Dec 2024 21:46:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 21:52:08 +0000   Thu, 05 Dec 2024 21:46:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 21:52:08 +0000   Thu, 05 Dec 2024 21:46:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 21:52:08 +0000   Thu, 05 Dec 2024 21:46:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.141
	  Hostname:    no-preload-500648
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 428b18567a3a4babac6b0eb6f1fd7e37
	  System UUID:                428b1856-7a3a-4bab-ac6b-0eb6f1fd7e37
	  Boot ID:                    c82a09e6-d6b6-43e4-a4ca-e1582e96988f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-6gw87                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m15s
	  kube-system                 coredns-7c65d6cfc9-tmd2t                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m15s
	  kube-system                 etcd-no-preload-500648                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m20s
	  kube-system                 kube-apiserver-no-preload-500648             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-no-preload-500648    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-proxy-98xqk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                 kube-scheduler-no-preload-500648             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-6867b74b74-ftmzl              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m14s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m14s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m13s  kube-proxy       
	  Normal  Starting                 9m20s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m20s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m20s  kubelet          Node no-preload-500648 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m20s  kubelet          Node no-preload-500648 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m20s  kubelet          Node no-preload-500648 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m16s  node-controller  Node no-preload-500648 event: Registered Node no-preload-500648 in Controller
	
	
	==> dmesg <==
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049394] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037291] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.854835] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.019629] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.531198] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec 5 21:41] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.139573] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.197240] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.119071] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.293727] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[ +15.291396] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.063546] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.980694] systemd-fstab-generator[1399]: Ignoring "noauto" option for root device
	[ +22.448628] kauditd_printk_skb: 90 callbacks suppressed
	[Dec 5 21:42] kauditd_printk_skb: 93 callbacks suppressed
	[Dec 5 21:46] systemd-fstab-generator[3190]: Ignoring "noauto" option for root device
	[  +0.059367] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.987561] systemd-fstab-generator[3516]: Ignoring "noauto" option for root device
	[  +0.081296] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.300160] systemd-fstab-generator[3628]: Ignoring "noauto" option for root device
	[  +0.136640] kauditd_printk_skb: 12 callbacks suppressed
	[Dec 5 21:47] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [891d12f2aecd42662af6bfad075f5aa3e2f96677f0ffe96f6e10591fe9a2c43d] <==
	{"level":"info","ts":"2024-12-05T21:46:46.603814Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-05T21:46:46.603938Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.50.141:2380"}
	{"level":"info","ts":"2024-12-05T21:46:46.603971Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.50.141:2380"}
	{"level":"info","ts":"2024-12-05T21:46:46.604847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"77f789e98544c480 switched to configuration voters=(8644529645817218176)"}
	{"level":"info","ts":"2024-12-05T21:46:46.604990Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"2bba191a4e9d4ee","local-member-id":"77f789e98544c480","added-peer-id":"77f789e98544c480","added-peer-peer-urls":["https://192.168.50.141:2380"]}
	{"level":"info","ts":"2024-12-05T21:46:47.573840Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"77f789e98544c480 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-05T21:46:47.573950Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"77f789e98544c480 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-05T21:46:47.573969Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"77f789e98544c480 received MsgPreVoteResp from 77f789e98544c480 at term 1"}
	{"level":"info","ts":"2024-12-05T21:46:47.573980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"77f789e98544c480 became candidate at term 2"}
	{"level":"info","ts":"2024-12-05T21:46:47.573986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"77f789e98544c480 received MsgVoteResp from 77f789e98544c480 at term 2"}
	{"level":"info","ts":"2024-12-05T21:46:47.573995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"77f789e98544c480 became leader at term 2"}
	{"level":"info","ts":"2024-12-05T21:46:47.574003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 77f789e98544c480 elected leader 77f789e98544c480 at term 2"}
	{"level":"info","ts":"2024-12-05T21:46:47.575639Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T21:46:47.576559Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"77f789e98544c480","local-member-attributes":"{Name:no-preload-500648 ClientURLs:[https://192.168.50.141:2379]}","request-path":"/0/members/77f789e98544c480/attributes","cluster-id":"2bba191a4e9d4ee","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T21:46:47.576762Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T21:46:47.576972Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2bba191a4e9d4ee","local-member-id":"77f789e98544c480","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T21:46:47.577070Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T21:46:47.577110Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T21:46:47.577121Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T21:46:47.578251Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T21:46:47.579274Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.141:2379"}
	{"level":"info","ts":"2024-12-05T21:46:47.579840Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T21:46:47.579921Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-05T21:46:47.586917Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T21:46:47.587615Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 21:56:11 up 15 min,  0 users,  load average: 0.01, 0.20, 0.20
	Linux no-preload-500648 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [19c9db624e744ddc44c62936546527303f9c95c606ad4be4cc28baae923d15c0] <==
	W1205 21:46:42.464455       1 logging.go:55] [core] [Channel #120 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.540579       1 logging.go:55] [core] [Channel #168 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.586724       1 logging.go:55] [core] [Channel #48 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.607286       1 logging.go:55] [core] [Channel #165 SubChannel #166]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.607535       1 logging.go:55] [core] [Channel #117 SubChannel #118]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.612223       1 logging.go:55] [core] [Channel #114 SubChannel #115]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.634792       1 logging.go:55] [core] [Channel #126 SubChannel #127]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.675187       1 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.676566       1 logging.go:55] [core] [Channel #75 SubChannel #76]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.680165       1 logging.go:55] [core] [Channel #66 SubChannel #67]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.685029       1 logging.go:55] [core] [Channel #147 SubChannel #148]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.753777       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.832340       1 logging.go:55] [core] [Channel #111 SubChannel #112]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.989135       1 logging.go:55] [core] [Channel #159 SubChannel #160]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:43.158163       1 logging.go:55] [core] [Channel #45 SubChannel #46]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:43.180718       1 logging.go:55] [core] [Channel #60 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:43.262393       1 logging.go:55] [core] [Channel #84 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:43.294317       1 logging.go:55] [core] [Channel #132 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:43.300667       1 logging.go:55] [core] [Channel #102 SubChannel #103]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:43.339262       1 logging.go:55] [core] [Channel #72 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:43.354153       1 logging.go:55] [core] [Channel #156 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:43.382969       1 logging.go:55] [core] [Channel #162 SubChannel #163]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:43.421228       1 logging.go:55] [core] [Channel #81 SubChannel #82]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:43.498078       1 logging.go:55] [core] [Channel #87 SubChannel #88]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:43.589678       1 logging.go:55] [core] [Channel #54 SubChannel #55]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [76e8165328f382874c555fada9f0ce608c5cee4f310fc9c81700165ece5cda45] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1205 21:51:49.901715       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:51:49.901791       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1205 21:51:49.902747       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:51:49.903914       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:52:49.903486       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:52:49.903654       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1205 21:52:49.904613       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:52:49.904669       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1205 21:52:49.904705       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:52:49.905772       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:54:49.905732       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:54:49.905937       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1205 21:54:49.906050       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:54:49.906099       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1205 21:54:49.907934       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:54:49.907992       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [3bf7c136a1c6a5406b8ed1d5e21edd95f345b1425a19fe359a7e4fb41b92b3f1] <==
	E1205 21:50:55.904198       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:50:56.339165       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:51:25.910317       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:51:26.347233       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:51:55.915976       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:51:56.354543       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 21:52:08.435770       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-500648"
	E1205 21:52:25.922342       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:52:26.373025       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:52:55.927775       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:52:56.380442       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 21:53:00.415981       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="139.224µs"
	I1205 21:53:15.424983       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="143.429µs"
	E1205 21:53:25.934982       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:53:26.388483       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:53:55.941574       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:53:56.395772       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:54:25.946817       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:54:26.407598       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:54:55.953750       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:54:56.415684       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:55:25.960197       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:55:26.423915       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:55:55.966086       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:55:56.431237       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [5d868f29f315f449c4cbd7111ed57dfac7aacb0b35d2cb453b082fc6807ef391] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 21:46:57.643170       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 21:46:57.659523       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.141"]
	E1205 21:46:57.659604       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 21:46:57.886173       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 21:46:57.886228       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 21:46:57.886258       1 server_linux.go:169] "Using iptables Proxier"
	I1205 21:46:57.898547       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 21:46:57.899094       1 server.go:483] "Version info" version="v1.31.2"
	I1205 21:46:57.899220       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 21:46:57.900618       1 config.go:199] "Starting service config controller"
	I1205 21:46:57.900806       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 21:46:57.900906       1 config.go:105] "Starting endpoint slice config controller"
	I1205 21:46:57.900912       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 21:46:57.903416       1 config.go:328] "Starting node config controller"
	I1205 21:46:57.903502       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 21:46:58.001939       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 21:46:58.001970       1 shared_informer.go:320] Caches are synced for service config
	I1205 21:46:58.003609       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b609bf884c7b1a9f9646257bab7c927b2c904925e5374ef393008dcc69ffb9ff] <==
	W1205 21:46:49.857734       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 21:46:49.857831       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 21:46:49.861964       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 21:46:49.862164       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 21:46:49.914958       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 21:46:49.915222       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 21:46:49.979126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 21:46:49.979177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 21:46:50.000819       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 21:46:50.000897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 21:46:50.011644       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 21:46:50.011693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 21:46:50.063469       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 21:46:50.063543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 21:46:50.184652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 21:46:50.184707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 21:46:50.196163       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 21:46:50.196212       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 21:46:50.221088       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 21:46:50.221191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 21:46:50.242284       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 21:46:50.243482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 21:46:50.399835       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 21:46:50.399915       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1205 21:46:53.369563       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 21:55:07 no-preload-500648 kubelet[3523]: E1205 21:55:07.402187    3523 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-ftmzl" podUID="c541d531-1622-4528-af4c-f6147f47e8f5"
	Dec 05 21:55:11 no-preload-500648 kubelet[3523]: E1205 21:55:11.570248    3523 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435711568771433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:55:11 no-preload-500648 kubelet[3523]: E1205 21:55:11.570827    3523 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435711568771433,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:55:20 no-preload-500648 kubelet[3523]: E1205 21:55:20.401173    3523 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-ftmzl" podUID="c541d531-1622-4528-af4c-f6147f47e8f5"
	Dec 05 21:55:21 no-preload-500648 kubelet[3523]: E1205 21:55:21.572447    3523 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435721572154618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:55:21 no-preload-500648 kubelet[3523]: E1205 21:55:21.572704    3523 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435721572154618,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:55:31 no-preload-500648 kubelet[3523]: E1205 21:55:31.402126    3523 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-ftmzl" podUID="c541d531-1622-4528-af4c-f6147f47e8f5"
	Dec 05 21:55:31 no-preload-500648 kubelet[3523]: E1205 21:55:31.573765    3523 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435731573539095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:55:31 no-preload-500648 kubelet[3523]: E1205 21:55:31.573807    3523 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435731573539095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:55:41 no-preload-500648 kubelet[3523]: E1205 21:55:41.576849    3523 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435741576302673,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:55:41 no-preload-500648 kubelet[3523]: E1205 21:55:41.576939    3523 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435741576302673,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:55:45 no-preload-500648 kubelet[3523]: E1205 21:55:45.401444    3523 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-ftmzl" podUID="c541d531-1622-4528-af4c-f6147f47e8f5"
	Dec 05 21:55:51 no-preload-500648 kubelet[3523]: E1205 21:55:51.413218    3523 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 21:55:51 no-preload-500648 kubelet[3523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 21:55:51 no-preload-500648 kubelet[3523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 21:55:51 no-preload-500648 kubelet[3523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 21:55:51 no-preload-500648 kubelet[3523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 21:55:51 no-preload-500648 kubelet[3523]: E1205 21:55:51.578421    3523 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435751578097154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:55:51 no-preload-500648 kubelet[3523]: E1205 21:55:51.578516    3523 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435751578097154,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:55:58 no-preload-500648 kubelet[3523]: E1205 21:55:58.400424    3523 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-ftmzl" podUID="c541d531-1622-4528-af4c-f6147f47e8f5"
	Dec 05 21:56:01 no-preload-500648 kubelet[3523]: E1205 21:56:01.580677    3523 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435761580379920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:56:01 no-preload-500648 kubelet[3523]: E1205 21:56:01.580734    3523 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435761580379920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:56:11 no-preload-500648 kubelet[3523]: E1205 21:56:11.402587    3523 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-ftmzl" podUID="c541d531-1622-4528-af4c-f6147f47e8f5"
	Dec 05 21:56:11 no-preload-500648 kubelet[3523]: E1205 21:56:11.584466    3523 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435771583935928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:56:11 no-preload-500648 kubelet[3523]: E1205 21:56:11.584529    3523 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435771583935928,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [fe59d0c476ff36531824d44d43a5d606de14f86f4a9f33b8d3ff0638d6366609] <==
	I1205 21:46:58.216230       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 21:46:58.240052       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 21:46:58.240213       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 21:46:58.261928       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 21:46:58.264030       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-500648_24822924-6b2e-4c52-bb27-6ae9f38b2d88!
	I1205 21:46:58.276635       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7539352f-500b-4e33-8dbf-9d5c2a6bcc60", APIVersion:"v1", ResourceVersion:"386", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-500648_24822924-6b2e-4c52-bb27-6ae9f38b2d88 became leader
	I1205 21:46:58.364511       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-500648_24822924-6b2e-4c52-bb27-6ae9f38b2d88!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-500648 -n no-preload-500648
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-500648 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-ftmzl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-500648 describe pod metrics-server-6867b74b74-ftmzl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-500648 describe pod metrics-server-6867b74b74-ftmzl: exit status 1 (65.860044ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-ftmzl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-500648 describe pod metrics-server-6867b74b74-ftmzl: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.50s)
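The only non-running pod the post-mortem finds is metrics-server-6867b74b74-ftmzl, which the kubelet log above shows stuck in ImagePullBackOff pulling fake.domain/registry.k8s.io/echoserver:1.4, the registry override applied earlier with "addons enable metrics-server ... --registries=MetricsServer=fake.domain" (see the Audit table further down). The follow-up describe returns NotFound most likely because it was run without a namespace flag, so kubectl looked in the default namespace while the pod lives in kube-system. A minimal sketch of the same inspection done by hand, assuming the no-preload-500648 profile from this run is still up (the pod name is copied from the kubelet log and will differ between runs):

    # List pods that are not Running across all namespaces, as the harness does
    kubectl --context no-preload-500648 get po -A --field-selector=status.phase!=Running
    # Describe the stuck metrics-server pod, this time in its actual namespace
    kubectl --context no-preload-500648 -n kube-system describe pod metrics-server-6867b74b74-ftmzl
    # Pull the last 25 lines of minikube logs for the profile
    out/minikube-linux-amd64 -p no-preload-500648 logs -n 25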

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.53s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1205 21:47:32.944996  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:48:16.319243  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:48:46.805617  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:49:15.166955  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-425614 -n embed-certs-425614
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-12-05 21:56:20.2384008 +0000 UTC m=+5822.212993461
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
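As with no-preload above, the test fails because no pod matching k8s-app=kubernetes-dashboard ever appears in the kubernetes-dashboard namespace within the 9m0s window. The Audit table below records that "addons enable dashboard -p embed-certs-425614" was issued at 21:35 UTC with no End Time, and the Last Start log shows the embed-certs VM could not be reached over SSH for several minutes afterwards (the repeated "no route to host" dial errors), so the dashboard manifests may never have been applied. A minimal way to check by hand, assuming the profile is still running; the namespace and label selector are taken from the wait step above:

    # Look for dashboard pods using the same selector the test waits on
    kubectl --context embed-certs-425614 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
    # Confirm which addons minikube believes are enabled for this profile
    out/minikube-linux-amd64 -p embed-certs-425614 addons list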
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-425614 -n embed-certs-425614
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-425614 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-425614 logs -n 25: (2.132499384s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-279893 sudo cat                              | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:32 UTC | 05 Dec 24 21:33 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo cat                              | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo                                  | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo                                  | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo                                  | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo find                             | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo crio                             | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-279893                                       | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	| start   | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:34 UTC |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-425614            | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-425614                                  | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-500648             | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC | 05 Dec 24 21:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-500648                                   | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-751353  | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC | 05 Dec 24 21:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC |                     |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-425614                 | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-425614                                  | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC | 05 Dec 24 21:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-601806        | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-500648                  | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-751353       | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-500648                                   | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC | 05 Dec 24 21:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:37 UTC | 05 Dec 24 21:46 UTC |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-601806                              | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC | 05 Dec 24 21:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-601806             | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC | 05 Dec 24 21:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-601806                              | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 21:38:15
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 21:38:15.563725  358357 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:38:15.563882  358357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:38:15.563898  358357 out.go:358] Setting ErrFile to fd 2...
	I1205 21:38:15.563903  358357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:38:15.564128  358357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 21:38:15.564728  358357 out.go:352] Setting JSON to false
	I1205 21:38:15.565806  358357 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":15644,"bootTime":1733419052,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 21:38:15.565873  358357 start.go:139] virtualization: kvm guest
	I1205 21:38:15.568026  358357 out.go:177] * [old-k8s-version-601806] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 21:38:15.569552  358357 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 21:38:15.569581  358357 notify.go:220] Checking for updates...
	I1205 21:38:15.572033  358357 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 21:38:15.573317  358357 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:38:15.574664  358357 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:38:15.576173  358357 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 21:38:15.577543  358357 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 21:38:15.579554  358357 config.go:182] Loaded profile config "old-k8s-version-601806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 21:38:15.580169  358357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:38:15.580230  358357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:38:15.596741  358357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I1205 21:38:15.597295  358357 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:38:15.598015  358357 main.go:141] libmachine: Using API Version  1
	I1205 21:38:15.598046  358357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:38:15.598475  358357 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:38:15.598711  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:38:15.600576  358357 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1205 21:38:15.602043  358357 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 21:38:15.602381  358357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:38:15.602484  358357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:38:15.618162  358357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36049
	I1205 21:38:15.618929  358357 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:38:15.620894  358357 main.go:141] libmachine: Using API Version  1
	I1205 21:38:15.620922  358357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:38:15.621462  358357 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:38:15.621705  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:38:15.660038  358357 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 21:38:15.661273  358357 start.go:297] selected driver: kvm2
	I1205 21:38:15.661287  358357 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:38:15.661413  358357 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 21:38:15.662304  358357 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:38:15.662396  358357 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 21:38:15.678948  358357 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 21:38:15.679372  358357 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:38:15.679406  358357 cni.go:84] Creating CNI manager for ""
	I1205 21:38:15.679443  358357 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:38:15.679479  358357 start.go:340] cluster config:
	{Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:38:15.679592  358357 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:38:15.681409  358357 out.go:177] * Starting "old-k8s-version-601806" primary control-plane node in "old-k8s-version-601806" cluster
	I1205 21:38:12.362239  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:15.434192  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:15.682585  358357 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 21:38:15.682646  358357 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1205 21:38:15.682657  358357 cache.go:56] Caching tarball of preloaded images
	I1205 21:38:15.682742  358357 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 21:38:15.682752  358357 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1205 21:38:15.682873  358357 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/config.json ...
	I1205 21:38:15.683066  358357 start.go:360] acquireMachinesLock for old-k8s-version-601806: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:38:21.514200  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:24.586255  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:30.666205  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:33.738246  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:39.818259  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:42.890268  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:48.970246  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:52.042258  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:58.122192  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:01.194261  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:07.274293  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:10.346237  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:16.426260  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:19.498251  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:25.578215  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:28.650182  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:34.730233  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:37.802242  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:43.882204  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:46.954259  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:53.034221  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:56.106303  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:02.186236  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:05.258270  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:11.338291  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:14.410261  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:20.490214  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:23.562239  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:29.642246  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:32.714183  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:38.794265  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:41.866189  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:44.870871  357831 start.go:364] duration metric: took 3m51.861097835s to acquireMachinesLock for "no-preload-500648"
	I1205 21:40:44.870962  357831 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:40:44.870974  357831 fix.go:54] fixHost starting: 
	I1205 21:40:44.871374  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:40:44.871425  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:40:44.889484  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34557
	I1205 21:40:44.890105  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:40:44.890780  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:40:44.890815  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:40:44.891254  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:40:44.891517  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:40:44.891744  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:40:44.893857  357831 fix.go:112] recreateIfNeeded on no-preload-500648: state=Stopped err=<nil>
	I1205 21:40:44.893927  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	W1205 21:40:44.894116  357831 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:40:44.897039  357831 out.go:177] * Restarting existing kvm2 VM for "no-preload-500648" ...
	I1205 21:40:44.868152  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:40:44.868210  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:40:44.868588  357296 buildroot.go:166] provisioning hostname "embed-certs-425614"
	I1205 21:40:44.868618  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:40:44.868823  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:40:44.870659  357296 machine.go:96] duration metric: took 4m37.397267419s to provisionDockerMachine
	I1205 21:40:44.870718  357296 fix.go:56] duration metric: took 4m37.422503321s for fixHost
	I1205 21:40:44.870724  357296 start.go:83] releasing machines lock for "embed-certs-425614", held for 4m37.422523792s
	W1205 21:40:44.870750  357296 start.go:714] error starting host: provision: host is not running
	W1205 21:40:44.870880  357296 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1205 21:40:44.870891  357296 start.go:729] Will try again in 5 seconds ...
	I1205 21:40:44.898504  357831 main.go:141] libmachine: (no-preload-500648) Calling .Start
	I1205 21:40:44.898749  357831 main.go:141] libmachine: (no-preload-500648) Ensuring networks are active...
	I1205 21:40:44.899604  357831 main.go:141] libmachine: (no-preload-500648) Ensuring network default is active
	I1205 21:40:44.899998  357831 main.go:141] libmachine: (no-preload-500648) Ensuring network mk-no-preload-500648 is active
	I1205 21:40:44.900472  357831 main.go:141] libmachine: (no-preload-500648) Getting domain xml...
	I1205 21:40:44.901210  357831 main.go:141] libmachine: (no-preload-500648) Creating domain...
	I1205 21:40:46.138820  357831 main.go:141] libmachine: (no-preload-500648) Waiting to get IP...
	I1205 21:40:46.139714  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:46.140107  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:46.140214  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:46.140113  358875 retry.go:31] will retry after 297.599003ms: waiting for machine to come up
	I1205 21:40:46.439848  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:46.440360  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:46.440421  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:46.440242  358875 retry.go:31] will retry after 243.531701ms: waiting for machine to come up
	I1205 21:40:46.685793  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:46.686251  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:46.686282  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:46.686199  358875 retry.go:31] will retry after 395.19149ms: waiting for machine to come up
	I1205 21:40:47.082735  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:47.083192  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:47.083216  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:47.083150  358875 retry.go:31] will retry after 591.156988ms: waiting for machine to come up
	I1205 21:40:47.675935  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:47.676381  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:47.676414  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:47.676308  358875 retry.go:31] will retry after 706.616299ms: waiting for machine to come up
	I1205 21:40:49.872843  357296 start.go:360] acquireMachinesLock for embed-certs-425614: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:40:48.384278  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:48.384666  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:48.384696  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:48.384611  358875 retry.go:31] will retry after 859.724415ms: waiting for machine to come up
	I1205 21:40:49.245895  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:49.246294  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:49.246323  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:49.246239  358875 retry.go:31] will retry after 915.790977ms: waiting for machine to come up
	I1205 21:40:50.164042  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:50.164570  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:50.164600  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:50.164514  358875 retry.go:31] will retry after 1.283530276s: waiting for machine to come up
	I1205 21:40:51.450256  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:51.450664  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:51.450692  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:51.450595  358875 retry.go:31] will retry after 1.347371269s: waiting for machine to come up
	I1205 21:40:52.800263  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:52.800702  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:52.800732  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:52.800637  358875 retry.go:31] will retry after 1.982593955s: waiting for machine to come up
	I1205 21:40:54.785977  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:54.786644  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:54.786705  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:54.786525  358875 retry.go:31] will retry after 2.41669899s: waiting for machine to come up
	I1205 21:40:57.205989  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:57.206403  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:57.206428  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:57.206335  358875 retry.go:31] will retry after 2.992148692s: waiting for machine to come up
	I1205 21:41:00.200589  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:00.201093  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:41:00.201139  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:41:00.201028  358875 retry.go:31] will retry after 3.716252757s: waiting for machine to come up
	I1205 21:41:05.171227  357912 start.go:364] duration metric: took 4m4.735770407s to acquireMachinesLock for "default-k8s-diff-port-751353"
	I1205 21:41:05.171353  357912 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:41:05.171382  357912 fix.go:54] fixHost starting: 
	I1205 21:41:05.172206  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:05.172294  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:05.190413  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35791
	I1205 21:41:05.190911  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:05.191473  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:05.191497  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:05.191841  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:05.192052  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:05.192199  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:05.193839  357912 fix.go:112] recreateIfNeeded on default-k8s-diff-port-751353: state=Stopped err=<nil>
	I1205 21:41:05.193867  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	W1205 21:41:05.194042  357912 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:41:05.196358  357912 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-751353" ...
	I1205 21:41:05.197683  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Start
	I1205 21:41:05.197958  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Ensuring networks are active...
	I1205 21:41:05.198819  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Ensuring network default is active
	I1205 21:41:05.199225  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Ensuring network mk-default-k8s-diff-port-751353 is active
	I1205 21:41:05.199740  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Getting domain xml...
	I1205 21:41:05.200544  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Creating domain...
	I1205 21:41:03.922338  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.922889  357831 main.go:141] libmachine: (no-preload-500648) Found IP for machine: 192.168.50.141
	I1205 21:41:03.922911  357831 main.go:141] libmachine: (no-preload-500648) Reserving static IP address...
	I1205 21:41:03.922924  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has current primary IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.923476  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "no-preload-500648", mac: "52:54:00:98:f0:c5", ip: "192.168.50.141"} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:03.923500  357831 main.go:141] libmachine: (no-preload-500648) DBG | skip adding static IP to network mk-no-preload-500648 - found existing host DHCP lease matching {name: "no-preload-500648", mac: "52:54:00:98:f0:c5", ip: "192.168.50.141"}
	I1205 21:41:03.923514  357831 main.go:141] libmachine: (no-preload-500648) DBG | Getting to WaitForSSH function...
	I1205 21:41:03.923583  357831 main.go:141] libmachine: (no-preload-500648) Reserved static IP address: 192.168.50.141
	I1205 21:41:03.923617  357831 main.go:141] libmachine: (no-preload-500648) Waiting for SSH to be available...
	I1205 21:41:03.926008  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.926299  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:03.926327  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.926443  357831 main.go:141] libmachine: (no-preload-500648) DBG | Using SSH client type: external
	I1205 21:41:03.926467  357831 main.go:141] libmachine: (no-preload-500648) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa (-rw-------)
	I1205 21:41:03.926542  357831 main.go:141] libmachine: (no-preload-500648) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:41:03.926559  357831 main.go:141] libmachine: (no-preload-500648) DBG | About to run SSH command:
	I1205 21:41:03.926582  357831 main.go:141] libmachine: (no-preload-500648) DBG | exit 0
	I1205 21:41:04.054310  357831 main.go:141] libmachine: (no-preload-500648) DBG | SSH cmd err, output: <nil>: 
	I1205 21:41:04.054735  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetConfigRaw
	I1205 21:41:04.055421  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:04.058393  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.058823  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.058857  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.059115  357831 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/config.json ...
	I1205 21:41:04.059357  357831 machine.go:93] provisionDockerMachine start ...
	I1205 21:41:04.059381  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:04.059624  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.061812  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.062139  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.062169  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.062325  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.062530  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.062698  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.062811  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.062947  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.063206  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.063219  357831 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:41:04.174592  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:41:04.174635  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetMachineName
	I1205 21:41:04.174947  357831 buildroot.go:166] provisioning hostname "no-preload-500648"
	I1205 21:41:04.174982  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetMachineName
	I1205 21:41:04.175220  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.178267  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.178732  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.178766  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.178975  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.179191  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.179356  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.179518  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.179683  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.179864  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.179878  357831 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-500648 && echo "no-preload-500648" | sudo tee /etc/hostname
	I1205 21:41:04.304650  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-500648
	
	I1205 21:41:04.304688  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.307897  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.308212  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.308255  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.308441  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.308703  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.308864  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.308994  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.309273  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.309538  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.309570  357831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-500648' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-500648/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-500648' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:41:04.432111  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:41:04.432158  357831 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:41:04.432186  357831 buildroot.go:174] setting up certificates
	I1205 21:41:04.432198  357831 provision.go:84] configureAuth start
	I1205 21:41:04.432214  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetMachineName
	I1205 21:41:04.432569  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:04.435826  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.436298  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.436348  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.436535  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.439004  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.439384  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.439412  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.439632  357831 provision.go:143] copyHostCerts
	I1205 21:41:04.439708  357831 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:41:04.439736  357831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:41:04.439826  357831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:41:04.439951  357831 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:41:04.439963  357831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:41:04.440006  357831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:41:04.440090  357831 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:41:04.440100  357831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:41:04.440133  357831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:41:04.440206  357831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.no-preload-500648 san=[127.0.0.1 192.168.50.141 localhost minikube no-preload-500648]
	I1205 21:41:04.514253  357831 provision.go:177] copyRemoteCerts
	I1205 21:41:04.514330  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:41:04.514372  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.517413  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.517811  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.517845  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.518067  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.518361  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.518597  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.518773  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:04.611530  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:41:04.637201  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 21:41:04.661934  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:41:04.686618  357831 provision.go:87] duration metric: took 254.404192ms to configureAuth
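The configureAuth phase above regenerates the docker-machine style server certificate with the SAN list printed at provision.go:117 and copies it to /etc/docker on the guest. A minimal way to double-check the copied certificate from inside the guest, assuming the buildroot image ships openssl (an assumption, not something this log confirms):

	sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
	# expected SANs per the log: 127.0.0.1, 192.168.50.141, localhost, minikube, no-preload-500648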
	I1205 21:41:04.686654  357831 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:41:04.686834  357831 config.go:182] Loaded profile config "no-preload-500648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:41:04.686921  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.690232  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.690677  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.690709  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.690907  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.691145  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.691456  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.691605  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.691811  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.692003  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.692020  357831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:41:04.922195  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:41:04.922228  357831 machine.go:96] duration metric: took 862.853823ms to provisionDockerMachine
	I1205 21:41:04.922245  357831 start.go:293] postStartSetup for "no-preload-500648" (driver="kvm2")
	I1205 21:41:04.922275  357831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:41:04.922296  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:04.922662  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:41:04.922698  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.925928  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.926441  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.926474  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.926628  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.926810  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.926928  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.927024  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:05.013131  357831 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:41:05.017518  357831 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:41:05.017552  357831 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:41:05.017635  357831 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:41:05.017713  357831 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:41:05.017814  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:41:05.027935  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:05.052403  357831 start.go:296] duration metric: took 130.117347ms for postStartSetup
	I1205 21:41:05.052469  357831 fix.go:56] duration metric: took 20.181495969s for fixHost
	I1205 21:41:05.052493  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:05.055902  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.056329  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.056381  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.056574  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:05.056832  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.056993  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.057144  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:05.057327  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:05.057534  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:05.057548  357831 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:41:05.171012  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434865.146406477
	
	I1205 21:41:05.171041  357831 fix.go:216] guest clock: 1733434865.146406477
	I1205 21:41:05.171051  357831 fix.go:229] Guest: 2024-12-05 21:41:05.146406477 +0000 UTC Remote: 2024-12-05 21:41:05.052473548 +0000 UTC m=+252.199777630 (delta=93.932929ms)
	I1205 21:41:05.171075  357831 fix.go:200] guest clock delta is within tolerance: 93.932929ms
	I1205 21:41:05.171087  357831 start.go:83] releasing machines lock for "no-preload-500648", held for 20.300173371s
	I1205 21:41:05.171115  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.171462  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:05.174267  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.174716  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.174747  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.174893  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.175500  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.175738  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.175856  357831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:41:05.175910  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:05.176016  357831 ssh_runner.go:195] Run: cat /version.json
	I1205 21:41:05.176053  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:05.179122  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179281  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179567  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.179595  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179620  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.179637  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179785  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:05.179924  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:05.180016  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.180163  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.180167  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:05.180365  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:05.180376  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:05.180564  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:05.286502  357831 ssh_runner.go:195] Run: systemctl --version
	I1205 21:41:05.292793  357831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:41:05.436742  357831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:41:05.442389  357831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:41:05.442473  357831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:41:05.460161  357831 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:41:05.460198  357831 start.go:495] detecting cgroup driver to use...
	I1205 21:41:05.460287  357831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:41:05.476989  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:41:05.490676  357831 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:41:05.490747  357831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:41:05.504437  357831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:41:05.518314  357831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:41:05.649582  357831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:41:05.831575  357831 docker.go:233] disabling docker service ...
	I1205 21:41:05.831650  357831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:41:05.851482  357831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:41:05.865266  357831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:41:05.981194  357831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:41:06.107386  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:41:06.125290  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:41:06.143817  357831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:41:06.143919  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.154167  357831 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:41:06.154259  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.165640  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.177412  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.190668  357831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:41:06.201712  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.213455  357831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.232565  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.243746  357831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:41:06.253809  357831 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:41:06.253878  357831 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:41:06.267573  357831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:41:06.278706  357831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:06.408370  357831 ssh_runner.go:195] Run: sudo systemctl restart crio
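The sed commands above edit the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf in place before the restart. A rough sketch of the relevant keys after those edits (the section names follow the stock CRI-O config layout and are an assumption; the rest of the file is left as shipped):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10"

	[crio.runtime]
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]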
	I1205 21:41:06.511878  357831 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:41:06.511959  357831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:41:06.519295  357831 start.go:563] Will wait 60s for crictl version
	I1205 21:41:06.519366  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.523477  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:41:06.562056  357831 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:41:06.562151  357831 ssh_runner.go:195] Run: crio --version
	I1205 21:41:06.595493  357831 ssh_runner.go:195] Run: crio --version
	I1205 21:41:06.630320  357831 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 21:41:06.631796  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:06.634988  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:06.635416  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:06.635453  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:06.635693  357831 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1205 21:41:06.639948  357831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:06.653650  357831 kubeadm.go:883] updating cluster {Name:no-preload-500648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-500648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
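The struct dumped by kubeadm.go:883 is the persisted profile configuration, the same data written to the config.json saved at profile.go:143 earlier in this log. Assuming jq is available on the CI host and the JSON field names match the struct dump above (both assumptions), the key fields can be pulled straight from that file:

	jq '.KubernetesConfig | {KubernetesVersion, ClusterName, ContainerRuntime, ServiceCIDR}' \
	  /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/config.json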
	I1205 21:41:06.653798  357831 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:41:06.653869  357831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:06.695865  357831 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 21:41:06.695900  357831 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 21:41:06.695957  357831 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:06.695970  357831 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:06.696005  357831 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.696049  357831 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1205 21:41:06.696060  357831 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:06.696087  357831 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.696061  357831 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.696462  357831 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.697982  357831 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.698019  357831 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:06.698016  357831 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.697992  357831 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.698111  357831 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.698133  357831 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:06.698286  357831 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1205 21:41:06.698501  357831 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:06.856605  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.856650  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.869847  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.872242  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.874561  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:06.907303  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:06.920063  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1205 21:41:06.925542  357831 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1205 21:41:06.925592  357831 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.925656  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.959677  357831 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1205 21:41:06.959738  357831 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.959799  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.984175  357831 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1205 21:41:06.984219  357831 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.984267  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.995251  357831 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1205 21:41:06.995393  357831 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.995547  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:07.017878  357831 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1205 21:41:07.017952  357831 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.018014  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:07.027087  357831 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1205 21:41:07.027151  357831 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.027206  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:07.138510  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:07.138629  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.138509  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:07.138696  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.138577  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:07.138579  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:07.260832  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:07.269638  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:07.269766  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.269837  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:07.276535  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.276611  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:07.344944  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:07.369612  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:07.410660  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:07.410709  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.410815  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:07.410817  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.463332  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1205 21:41:07.463470  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1205 21:41:07.491657  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1205 21:41:07.491795  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 21:41:07.531121  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1205 21:41:07.531150  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1205 21:41:07.531256  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1205 21:41:07.531270  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 21:41:07.531292  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1205 21:41:07.531341  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1205 21:41:07.531342  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 21:41:07.531258  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 21:41:07.531400  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1205 21:41:07.531416  357831 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1205 21:41:07.531452  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1205 21:41:07.531419  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1205 21:41:07.543194  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1205 21:41:07.543221  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1205 21:41:07.543329  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1205 21:41:07.545197  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1205 21:41:07.599581  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:06.512338  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting to get IP...
	I1205 21:41:06.513323  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.513795  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.513870  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:06.513764  359021 retry.go:31] will retry after 193.323182ms: waiting for machine to come up
	I1205 21:41:06.709218  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.709633  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.709667  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:06.709597  359021 retry.go:31] will retry after 359.664637ms: waiting for machine to come up
	I1205 21:41:07.071234  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.071649  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.071677  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:07.071621  359021 retry.go:31] will retry after 315.296814ms: waiting for machine to come up
	I1205 21:41:07.388219  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.388755  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.388788  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:07.388697  359021 retry.go:31] will retry after 607.823337ms: waiting for machine to come up
	I1205 21:41:07.998529  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.998987  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.999021  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:07.998924  359021 retry.go:31] will retry after 603.533135ms: waiting for machine to come up
	I1205 21:41:08.603895  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:08.604547  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:08.604592  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:08.604458  359021 retry.go:31] will retry after 584.642321ms: waiting for machine to come up
	I1205 21:41:09.190331  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:09.190835  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:09.190866  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:09.190778  359021 retry.go:31] will retry after 848.646132ms: waiting for machine to come up
	I1205 21:41:10.041037  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:10.041702  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:10.041734  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:10.041632  359021 retry.go:31] will retry after 1.229215485s: waiting for machine to come up
	I1205 21:41:11.124436  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.592950613s)
	I1205 21:41:11.124474  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1205 21:41:11.124504  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 21:41:11.124501  357831 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.524878217s)
	I1205 21:41:11.124562  357831 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 21:41:11.124586  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 21:41:11.124617  357831 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:11.124667  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:11.272549  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:11.273204  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:11.273239  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:11.273141  359021 retry.go:31] will retry after 1.721028781s: waiting for machine to come up
	I1205 21:41:12.996546  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:12.996988  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:12.997015  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:12.996932  359021 retry.go:31] will retry after 1.620428313s: waiting for machine to come up
	I1205 21:41:14.619426  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:14.619986  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:14.620021  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:14.619928  359021 retry.go:31] will retry after 1.936504566s: waiting for machine to come up
	I1205 21:41:13.485236  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.36061811s)
	I1205 21:41:13.485285  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1205 21:41:13.485298  357831 ssh_runner.go:235] Completed: which crictl: (2.360608199s)
	I1205 21:41:13.485314  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 21:41:13.485383  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:13.485450  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 21:41:15.556836  357831 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.071414459s)
	I1205 21:41:15.556906  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.071416348s)
	I1205 21:41:15.556935  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:15.556939  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1205 21:41:15.557031  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 21:41:15.557069  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 21:41:15.595094  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:17.533984  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.97688139s)
	I1205 21:41:17.534026  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1205 21:41:17.534061  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 21:41:17.534168  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 21:41:17.534059  357831 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.938925021s)
	I1205 21:41:17.534239  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 21:41:17.534355  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 21:41:16.559037  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:16.559676  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:16.559711  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:16.559616  359021 retry.go:31] will retry after 2.748634113s: waiting for machine to come up
	I1205 21:41:19.309762  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:19.310292  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:19.310325  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:19.310235  359021 retry.go:31] will retry after 4.490589015s: waiting for machine to come up
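The retry.go:31 lines above show libmachine polling for the VM's DHCP lease with a delay that grows (and is jittered) between attempts. A minimal sketch of that pattern in Go; the growth factor, jitter and attempt count here are assumptions for illustration, not minikube's exact retry parameters.

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff re-runs fn until it succeeds or attempts run out,
	// sleeping a little longer (with jitter) after each failure.
	func retryWithBackoff(attempts int, initial time.Duration, fn func() error) error {
		delay := initial
		for i := 0; i < attempts; i++ {
			if err := fn(); err == nil {
				return nil
			}
			// add up to 50% jitter so parallel waiters do not poll in lockstep
			sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
			fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			delay = delay * 3 / 2
		}
		return errors.New("machine did not come up in time")
	}

	func main() {
		_ = retryWithBackoff(5, time.Second, func() error {
			return errors.New("unable to find current IP address of domain")
		})
	}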
	I1205 21:41:18.991714  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.45750646s)
	I1205 21:41:18.991760  357831 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.457382547s)
	I1205 21:41:18.991769  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1205 21:41:18.991788  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1205 21:41:18.991796  357831 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 21:41:18.991871  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1205 21:41:19.652114  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 21:41:19.652153  357831 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1205 21:41:19.652207  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1205 21:41:21.430659  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.778424474s)
	I1205 21:41:21.430699  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1205 21:41:21.430728  357831 cache_images.go:123] Successfully loaded all cached images
	I1205 21:41:21.430737  357831 cache_images.go:92] duration metric: took 14.734820486s to LoadCachedImages
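LoadCachedImages transfers each image tarball from the host cache to the node and then asks the runtime (via podman) to load it, which is what the "Transferred and loaded ... from cache" lines above record. A rough local sketch of the load step, assuming the archives already sit under /var/lib/minikube/images; in minikube the command runs on the guest through ssh_runner, not locally.

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	// loadImage shells out to podman the same way the log above does and
	// reports how long the load took, mirroring ssh_runner's duration metric.
	func loadImage(archive string) error {
		start := time.Now()
		out, err := exec.Command("sudo", "podman", "load", "-i", archive).CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load %s: %v\n%s", archive, err, out)
		}
		fmt.Printf("Completed: sudo podman load -i %s: (%s)\n", archive, time.Since(start))
		return nil
	}

	func main() {
		for _, a := range []string{
			"/var/lib/minikube/images/etcd_3.5.15-0",
			"/var/lib/minikube/images/kube-apiserver_v1.31.2",
		} {
			if err := loadImage(a); err != nil {
				log.Fatal(err)
			}
		}
	}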
	I1205 21:41:21.430748  357831 kubeadm.go:934] updating node { 192.168.50.141 8443 v1.31.2 crio true true} ...
	I1205 21:41:21.430896  357831 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-500648 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-500648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:41:21.430974  357831 ssh_runner.go:195] Run: crio config
	I1205 21:41:21.485189  357831 cni.go:84] Creating CNI manager for ""
	I1205 21:41:21.485211  357831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:21.485222  357831 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:41:21.485252  357831 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.141 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-500648 NodeName:no-preload-500648 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:41:21.485440  357831 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-500648"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.141"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.141"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
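The kubeadm.go:195 config above is rendered from the cluster parameters (node IP, cluster name, pod subnet, CRI socket) rather than hand-written. A toy sketch of that rendering step with text/template; the template text below is illustrative and much shorter than the one minikube actually ships.

	package main

	import (
		"log"
		"os"
		"text/template"
	)

	// params holds the handful of values that vary between clusters in the
	// generated kubeadm config (compare the YAML block above).
	type params struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		PodSubnet        string
		CRISocket        string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	networking:
	  podSubnet: "{{.PodSubnet}}"
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(tmpl))
		err := t.Execute(os.Stdout, params{
			AdvertiseAddress: "192.168.50.141",
			BindPort:         8443,
			NodeName:         "no-preload-500648",
			PodSubnet:        "10.244.0.0/16",
			CRISocket:        "unix:///var/run/crio/crio.sock",
		})
		if err != nil {
			log.Fatal(err)
		}
	}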
	
	I1205 21:41:21.485525  357831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:41:21.497109  357831 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:41:21.497191  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:41:21.506887  357831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1205 21:41:21.524456  357831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:41:21.541166  357831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1205 21:41:21.560513  357831 ssh_runner.go:195] Run: grep 192.168.50.141	control-plane.minikube.internal$ /etc/hosts
	I1205 21:41:21.564597  357831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.141	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
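The grep/rewrite pair above keeps exactly one control-plane.minikube.internal entry in /etc/hosts. A small Go equivalent of that edit; it writes the file directly and so assumes it already runs as root, whereas the logged command goes through a temp file and sudo cp.

	package main

	import (
		"bufio"
		"fmt"
		"log"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any existing line for host and appends "ip\thost".
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var out strings.Builder
		sc := bufio.NewScanner(strings.NewReader(string(data)))
		for sc.Scan() {
			line := sc.Text()
			if strings.HasSuffix(line, "\t"+host) {
				continue // same filter as `grep -v $'\tcontrol-plane.minikube.internal$'`
			}
			out.WriteString(line + "\n")
		}
		out.WriteString(fmt.Sprintf("%s\t%s\n", ip, host))
		return os.WriteFile(path, []byte(out.String()), 0644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.50.141", "control-plane.minikube.internal"); err != nil {
			log.Fatal(err)
		}
	}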
	I1205 21:41:21.576227  357831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:21.695424  357831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:21.712683  357831 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648 for IP: 192.168.50.141
	I1205 21:41:21.712711  357831 certs.go:194] generating shared ca certs ...
	I1205 21:41:21.712735  357831 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:21.712951  357831 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:41:21.713005  357831 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:41:21.713019  357831 certs.go:256] generating profile certs ...
	I1205 21:41:21.713143  357831 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/client.key
	I1205 21:41:21.713264  357831 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/apiserver.key.832a65b0
	I1205 21:41:21.713335  357831 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/proxy-client.key
	I1205 21:41:21.713643  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:41:21.713708  357831 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:41:21.713729  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:41:21.713774  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:41:21.713820  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:41:21.713856  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:41:21.713961  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:21.714852  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:41:21.770708  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:41:21.813676  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:41:21.869550  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:41:21.898056  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 21:41:21.924076  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 21:41:21.950399  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:41:21.976765  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 21:41:22.003346  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:41:22.032363  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:41:22.071805  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:41:22.096470  357831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:41:22.113380  357831 ssh_runner.go:195] Run: openssl version
	I1205 21:41:22.119084  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:41:22.129657  357831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:41:22.134070  357831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:41:22.134139  357831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:41:22.139838  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:41:22.150575  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:41:22.161366  357831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:41:22.165685  357831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:41:22.165753  357831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:41:22.171788  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:41:22.182582  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:41:22.193460  357831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:22.197852  357831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:22.197934  357831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:22.203616  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
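The ls/openssl/ln sequence above keeps the trust store in /etc/ssl/certs consistent: each PEM gets a symlink named after its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem). A sketch that reproduces those two steps, shelling out to openssl for the hash just as the log does; the certificate path is taken from the log.

	package main

	import (
		"fmt"
		"log"
		"os"
		"os/exec"
		"strings"
	)

	// subjectHash returns what `openssl x509 -hash -noout -in cert` prints,
	// which is the basename OpenSSL-style trust stores expect for the symlink.
	func subjectHash(cert string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"
		hash, err := subjectHash(cert)
		if err != nil {
			log.Fatal(err)
		}
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		os.Remove(link) // equivalent of ln -fs: replace any stale link
		if err := os.Symlink(cert, link); err != nil {
			log.Fatal(err)
		}
		fmt.Println("linked", link, "->", cert)
	}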
	I1205 21:41:22.215612  357831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:41:22.220715  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:41:22.226952  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:41:22.233017  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:41:22.239118  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:41:22.245106  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:41:22.251085  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
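Each `openssl x509 -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now, so an expiring control-plane cert can be regenerated before the restart. The same check in pure Go with crypto/x509; the file path is just one of the certs checked above.

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"log"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file
	// will be expired d from now — the crypto/x509 version of -checkend.
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("expires within 24h:", soon)
	}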
	I1205 21:41:22.257047  357831 kubeadm.go:392] StartCluster: {Name:no-preload-500648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-500648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:41:22.257152  357831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:41:22.257201  357831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:22.294003  357831 cri.go:89] found id: ""
	I1205 21:41:22.294119  357831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:41:22.304604  357831 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:41:22.304627  357831 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:41:22.304690  357831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:41:22.314398  357831 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:41:22.315469  357831 kubeconfig.go:125] found "no-preload-500648" server: "https://192.168.50.141:8443"
	I1205 21:41:22.317845  357831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:41:22.327468  357831 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.141
	I1205 21:41:22.327516  357831 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:41:22.327546  357831 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:41:22.327623  357831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:22.360852  357831 cri.go:89] found id: ""
	I1205 21:41:22.360955  357831 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:41:22.378555  357831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:41:22.388502  357831 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:41:22.388526  357831 kubeadm.go:157] found existing configuration files:
	
	I1205 21:41:22.388614  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:41:22.397598  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:41:22.397664  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:41:22.407664  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:41:22.417114  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:41:22.417192  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:41:22.427221  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:41:22.436656  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:41:22.436731  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:41:22.446571  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:41:22.456048  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:41:22.456120  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:41:22.466146  357831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:41:22.476563  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:22.582506  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
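During a restart minikube does not run a full `kubeadm init`; it re-runs individual phases (certs, kubeconfig) against the freshly copied kubeadm.yaml, which is what the two commands above do. A minimal wrapper for that pattern, assuming the versioned kubeadm binary lives under /var/lib/minikube/binaries as in the log.

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// runPhase invokes a single `kubeadm init phase ...` the same way the log
	// does: through bash, with the versioned binaries directory prepended to PATH.
	func runPhase(phase string) error {
		cmd := fmt.Sprintf(
			`sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
			phase,
		)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			return fmt.Errorf("phase %q failed: %v\n%s", phase, err, out)
		}
		return nil
	}

	func main() {
		for _, phase := range []string{"certs all", "kubeconfig all"} {
			if err := runPhase(phase); err != nil {
				log.Fatal(err)
			}
		}
	}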
	I1205 21:41:25.151918  358357 start.go:364] duration metric: took 3m9.46879842s to acquireMachinesLock for "old-k8s-version-601806"
	I1205 21:41:25.151996  358357 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:41:25.152009  358357 fix.go:54] fixHost starting: 
	I1205 21:41:25.152489  358357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:25.152557  358357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:25.172080  358357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36071
	I1205 21:41:25.172722  358357 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:25.173396  358357 main.go:141] libmachine: Using API Version  1
	I1205 21:41:25.173426  358357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:25.173791  358357 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:25.174049  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:25.174226  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetState
	I1205 21:41:25.176109  358357 fix.go:112] recreateIfNeeded on old-k8s-version-601806: state=Stopped err=<nil>
	I1205 21:41:25.176156  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	W1205 21:41:25.176374  358357 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:41:25.178317  358357 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-601806" ...
	I1205 21:41:23.803088  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.803582  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has current primary IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.803605  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Found IP for machine: 192.168.39.106
	I1205 21:41:23.803619  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Reserving static IP address...
	I1205 21:41:23.804049  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-751353", mac: "52:54:00:9a:bc:70", ip: "192.168.39.106"} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.804083  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Reserved static IP address: 192.168.39.106
	I1205 21:41:23.804103  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | skip adding static IP to network mk-default-k8s-diff-port-751353 - found existing host DHCP lease matching {name: "default-k8s-diff-port-751353", mac: "52:54:00:9a:bc:70", ip: "192.168.39.106"}
	I1205 21:41:23.804129  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Getting to WaitForSSH function...
	I1205 21:41:23.804158  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for SSH to be available...
	I1205 21:41:23.806941  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.807341  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.807372  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.807500  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Using SSH client type: external
	I1205 21:41:23.807527  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa (-rw-------)
	I1205 21:41:23.807597  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:41:23.807626  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | About to run SSH command:
	I1205 21:41:23.807645  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | exit 0
	I1205 21:41:23.938988  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | SSH cmd err, output: <nil>: 
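When libmachine reports "Using SSH client type: external", it builds the argument list shown in the DBG line above and execs the system ssh binary instead of an in-process client; the remote command `exit 0` is just a reachability probe. A stripped-down sketch of that invocation (key path and address copied from the log; the option ordering is simplified).

	package main

	import (
		"log"
		"os/exec"
	)

	func main() {
		// Options mirror the DBG line above: throwaway known_hosts, key-only auth,
		// short connect timeout, no password prompts while waiting for the guest.
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "PasswordAuthentication=no",
			"-o", "IdentitiesOnly=yes",
			"-i", "/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa",
			"-p", "22",
			"docker@192.168.39.106",
			"exit 0", // success means sshd is up and the key is accepted
		}
		if out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput(); err != nil {
			log.Fatalf("ssh probe failed: %v\n%s", err, out)
		}
		log.Println("SSH is available")
	}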
	I1205 21:41:23.939382  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetConfigRaw
	I1205 21:41:23.940370  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:23.943944  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.944399  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.944433  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.944788  357912 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/config.json ...
	I1205 21:41:23.945040  357912 machine.go:93] provisionDockerMachine start ...
	I1205 21:41:23.945065  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:23.945331  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:23.948166  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.948598  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.948633  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.948777  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:23.948980  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:23.949138  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:23.949265  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:23.949425  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:23.949655  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:23.949669  357912 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:41:24.062400  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:41:24.062440  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetMachineName
	I1205 21:41:24.062712  357912 buildroot.go:166] provisioning hostname "default-k8s-diff-port-751353"
	I1205 21:41:24.062742  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetMachineName
	I1205 21:41:24.062947  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.065557  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.066077  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.066109  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.066235  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.066415  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.066571  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.066751  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.066932  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:24.067122  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:24.067134  357912 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-751353 && echo "default-k8s-diff-port-751353" | sudo tee /etc/hostname
	I1205 21:41:24.190609  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-751353
	
	I1205 21:41:24.190662  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.193538  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.193946  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.193985  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.194231  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.194443  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.194660  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.194909  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.195186  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:24.195396  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:24.195417  357912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-751353' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-751353/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-751353' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:41:24.310725  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:41:24.310770  357912 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:41:24.310812  357912 buildroot.go:174] setting up certificates
	I1205 21:41:24.310829  357912 provision.go:84] configureAuth start
	I1205 21:41:24.310839  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetMachineName
	I1205 21:41:24.311138  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:24.314161  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.314528  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.314552  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.314722  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.316953  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.317283  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.317324  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.317483  357912 provision.go:143] copyHostCerts
	I1205 21:41:24.317548  357912 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:41:24.317571  357912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:41:24.317629  357912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:41:24.317723  357912 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:41:24.317732  357912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:41:24.317753  357912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:41:24.317872  357912 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:41:24.317883  357912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:41:24.317933  357912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:41:24.318001  357912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-751353 san=[127.0.0.1 192.168.39.106 default-k8s-diff-port-751353 localhost minikube]
	I1205 21:41:24.483065  357912 provision.go:177] copyRemoteCerts
	I1205 21:41:24.483137  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:41:24.483175  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.486663  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.487074  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.487112  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.487277  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.487508  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.487726  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.487899  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:24.572469  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:41:24.597375  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1205 21:41:24.622122  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:41:24.649143  357912 provision.go:87] duration metric: took 338.295707ms to configureAuth
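provision.go:117 above generates a fresh server.pem for the machine, signed by the shared CA and carrying the SANs listed in the log, before copying it to /etc/docker on the guest. A compact crypto/x509 sketch of that step; the output file name, validity period and key size are placeholders, and writing out server-key.pem is omitted for brevity.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/tls"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Load the CA pair, as provision.go points at ca.pem / ca-key.pem.
		ca, err := tls.LoadX509KeyPair("ca.pem", "ca-key.pem")
		if err != nil {
			log.Fatal(err)
		}
		caCert, err := x509.ParseCertificate(ca.Certificate[0])
		if err != nil {
			log.Fatal(err)
		}
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-751353"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs from the log line above: loopback, the VM IP, and the hostnames.
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.106")},
			DNSNames:    []string{"default-k8s-diff-port-751353", "localhost", "minikube"},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, ca.PrivateKey)
		if err != nil {
			log.Fatal(err)
		}
		out, err := os.Create("server.pem")
		if err != nil {
			log.Fatal(err)
		}
		defer out.Close()
		pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}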
	I1205 21:41:24.649188  357912 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:41:24.649464  357912 config.go:182] Loaded profile config "default-k8s-diff-port-751353": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:41:24.649609  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.652646  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.653051  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.653101  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.653259  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.653492  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.653689  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.653841  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.654054  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:24.654213  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:24.654235  357912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:41:24.893672  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:41:24.893703  357912 machine.go:96] duration metric: took 948.646561ms to provisionDockerMachine
	I1205 21:41:24.893719  357912 start.go:293] postStartSetup for "default-k8s-diff-port-751353" (driver="kvm2")
	I1205 21:41:24.893733  357912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:41:24.893755  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:24.894145  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:41:24.894185  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.897565  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.897988  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.898022  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.898262  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.898579  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.898840  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.899066  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:24.986299  357912 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:41:24.991211  357912 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:41:24.991251  357912 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:41:24.991341  357912 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:41:24.991456  357912 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:41:24.991601  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:41:25.002264  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:25.031129  357912 start.go:296] duration metric: took 137.388294ms for postStartSetup
	I1205 21:41:25.031184  357912 fix.go:56] duration metric: took 19.859807882s for fixHost
	I1205 21:41:25.031214  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:25.034339  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.034678  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.034715  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.035027  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:25.035309  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.035501  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.035655  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:25.035858  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:25.036066  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:25.036081  357912 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:41:25.151697  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434885.125327326
	
	I1205 21:41:25.151729  357912 fix.go:216] guest clock: 1733434885.125327326
	I1205 21:41:25.151741  357912 fix.go:229] Guest: 2024-12-05 21:41:25.125327326 +0000 UTC Remote: 2024-12-05 21:41:25.03119011 +0000 UTC m=+264.754619927 (delta=94.137216ms)
	I1205 21:41:25.151796  357912 fix.go:200] guest clock delta is within tolerance: 94.137216ms
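fix.go runs `date +%s.%N` on the guest, compares the result against the host's clock and only proceeds when the delta stays inside a tolerance (about 94ms here). A local approximation of that parse-and-compare step; the one-second tolerance below is an assumption, not minikube's actual threshold, and in minikube the date command runs on the guest over SSH.

	package main

	import (
		"fmt"
		"log"
		"math"
		"os/exec"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		out, err := exec.Command("date", "+%s.%N").Output()
		if err != nil {
			log.Fatal(err)
		}
		secs, err := strconv.ParseFloat(strings.TrimSpace(string(out)), 64)
		if err != nil {
			log.Fatal(err)
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)

		const tolerance = time.Second // assumed; the log shows a ~94ms delta passing
		if math.Abs(float64(delta)) > float64(tolerance) {
			log.Fatalf("guest clock delta %s exceeds tolerance", delta)
		}
		fmt.Printf("guest clock delta is within tolerance: %s\n", delta)
	}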
	I1205 21:41:25.151807  357912 start.go:83] releasing machines lock for "default-k8s-diff-port-751353", held for 19.980496597s
	I1205 21:41:25.151845  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.152105  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:25.155285  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.155698  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.155735  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.155871  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.156424  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.156613  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.156747  357912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:41:25.156796  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:25.156844  357912 ssh_runner.go:195] Run: cat /version.json
	I1205 21:41:25.156876  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:25.159945  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160382  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160439  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.160464  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160692  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.160722  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160728  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:25.160943  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:25.160957  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.161100  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.161218  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:25.161341  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:25.161370  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:25.161473  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:25.244449  357912 ssh_runner.go:195] Run: systemctl --version
	I1205 21:41:25.271151  357912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:41:25.179884  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .Start
	I1205 21:41:25.180144  358357 main.go:141] libmachine: (old-k8s-version-601806) Ensuring networks are active...
	I1205 21:41:25.181095  358357 main.go:141] libmachine: (old-k8s-version-601806) Ensuring network default is active
	I1205 21:41:25.181522  358357 main.go:141] libmachine: (old-k8s-version-601806) Ensuring network mk-old-k8s-version-601806 is active
	I1205 21:41:25.181972  358357 main.go:141] libmachine: (old-k8s-version-601806) Getting domain xml...
	I1205 21:41:25.182848  358357 main.go:141] libmachine: (old-k8s-version-601806) Creating domain...
	I1205 21:41:25.428417  357912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:41:25.436849  357912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:41:25.436929  357912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:41:25.457952  357912 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:41:25.457989  357912 start.go:495] detecting cgroup driver to use...
	I1205 21:41:25.458073  357912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:41:25.478406  357912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:41:25.497547  357912 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:41:25.497636  357912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:41:25.516564  357912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:41:25.535753  357912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:41:25.692182  357912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:41:25.880739  357912 docker.go:233] disabling docker service ...
	I1205 21:41:25.880812  357912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:41:25.896490  357912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:41:25.911107  357912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:41:26.048384  357912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:41:26.186026  357912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:41:26.200922  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:41:26.221768  357912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:41:26.221848  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.232550  357912 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:41:26.232665  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.243173  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.254233  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.264888  357912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:41:26.275876  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.286642  357912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.311188  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.322696  357912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:41:26.332006  357912 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:41:26.332075  357912 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:41:26.345881  357912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:41:26.362014  357912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:26.487972  357912 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:41:26.584162  357912 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:41:26.584275  357912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:41:26.589290  357912 start.go:563] Will wait 60s for crictl version
	I1205 21:41:26.589379  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:41:26.593337  357912 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:41:26.629326  357912 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
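(After restarting CRI-O, the runner waits up to 60s for the runtime socket to appear before asking crictl for its version. A minimal, hypothetical sketch of that kind of wait loop, with the socket path taken from the log and the helper itself purely illustrative:)

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPath polls for a filesystem path until it exists or the timeout expires.
func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
	}
}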
	I1205 21:41:26.629455  357912 ssh_runner.go:195] Run: crio --version
	I1205 21:41:26.656684  357912 ssh_runner.go:195] Run: crio --version
	I1205 21:41:26.685571  357912 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 21:41:23.536422  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:23.749946  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:23.804210  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:23.887538  357831 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:41:23.887671  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:24.387809  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:24.887821  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:24.905947  357831 api_server.go:72] duration metric: took 1.018402152s to wait for apiserver process to appear ...
	I1205 21:41:24.905979  357831 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:41:24.906008  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:24.906658  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": dial tcp 192.168.50.141:8443: connect: connection refused
	I1205 21:41:25.406416  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:26.687438  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:26.690614  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:26.691032  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:26.691070  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:26.691314  357912 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 21:41:26.695524  357912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:26.708289  357912 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-751353 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-751353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:41:26.708409  357912 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:41:26.708474  357912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:26.757258  357912 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 21:41:26.757363  357912 ssh_runner.go:195] Run: which lz4
	I1205 21:41:26.762809  357912 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:41:26.767369  357912 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:41:26.767411  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 21:41:28.161289  357912 crio.go:462] duration metric: took 1.398584393s to copy over tarball
	I1205 21:41:28.161397  357912 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 21:41:26.542343  358357 main.go:141] libmachine: (old-k8s-version-601806) Waiting to get IP...
	I1205 21:41:26.543246  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:26.543692  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:26.543765  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:26.543663  359172 retry.go:31] will retry after 193.087452ms: waiting for machine to come up
	I1205 21:41:26.738243  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:26.738682  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:26.738713  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:26.738634  359172 retry.go:31] will retry after 347.304831ms: waiting for machine to come up
	I1205 21:41:27.088372  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:27.088982  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:27.089018  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:27.088880  359172 retry.go:31] will retry after 416.785806ms: waiting for machine to come up
	I1205 21:41:27.507765  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:27.508291  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:27.508320  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:27.508250  359172 retry.go:31] will retry after 407.585006ms: waiting for machine to come up
	I1205 21:41:27.918225  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:27.918900  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:27.918930  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:27.918844  359172 retry.go:31] will retry after 612.014901ms: waiting for machine to come up
	I1205 21:41:28.532179  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:28.532625  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:28.532658  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:28.532561  359172 retry.go:31] will retry after 784.813224ms: waiting for machine to come up
	I1205 21:41:29.318697  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:29.319199  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:29.319234  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:29.319136  359172 retry.go:31] will retry after 827.384433ms: waiting for machine to come up
	I1205 21:41:30.148284  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:30.148684  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:30.148711  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:30.148642  359172 retry.go:31] will retry after 1.314535235s: waiting for machine to come up
	I1205 21:41:30.406823  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:30.406896  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:30.321824  357912 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16037347s)
	I1205 21:41:30.321868  357912 crio.go:469] duration metric: took 2.160535841s to extract the tarball
	I1205 21:41:30.321879  357912 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 21:41:30.358990  357912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:30.401957  357912 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 21:41:30.401988  357912 cache_images.go:84] Images are preloaded, skipping loading
	I1205 21:41:30.402000  357912 kubeadm.go:934] updating node { 192.168.39.106 8444 v1.31.2 crio true true} ...
	I1205 21:41:30.402143  357912 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-751353 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-751353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:41:30.402242  357912 ssh_runner.go:195] Run: crio config
	I1205 21:41:30.452788  357912 cni.go:84] Creating CNI manager for ""
	I1205 21:41:30.452819  357912 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:30.452832  357912 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:41:30.452864  357912 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.106 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-751353 NodeName:default-k8s-diff-port-751353 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:41:30.453016  357912 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.106
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-751353"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.106"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.106"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:41:30.453081  357912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:41:30.463027  357912 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:41:30.463098  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:41:30.472345  357912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 21:41:30.489050  357912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:41:30.505872  357912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
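(The generated kubeadm/kubelet/kube-proxy config shown above has just been written to /var/tmp/minikube/kubeadm.yaml.new. If you want to sanity-check such a file by hand, recent kubeadm releases ship a "kubeadm config validate" subcommand; whether it is available depends on your kubeadm version, so treat the sketch below as an assumption rather than part of this test run:)

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Assumes this kubeadm build supports `kubeadm config validate`.
	out, err := exec.Command("kubeadm", "config", "validate",
		"--config", "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("validation failed:", err)
	}
}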
	I1205 21:41:30.523157  357912 ssh_runner.go:195] Run: grep 192.168.39.106	control-plane.minikube.internal$ /etc/hosts
	I1205 21:41:30.527012  357912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:30.538965  357912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:30.668866  357912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:30.686150  357912 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353 for IP: 192.168.39.106
	I1205 21:41:30.686187  357912 certs.go:194] generating shared ca certs ...
	I1205 21:41:30.686218  357912 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:30.686416  357912 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:41:30.686483  357912 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:41:30.686499  357912 certs.go:256] generating profile certs ...
	I1205 21:41:30.686629  357912 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/client.key
	I1205 21:41:30.686701  357912 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/apiserver.key.ec661d8c
	I1205 21:41:30.686738  357912 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/proxy-client.key
	I1205 21:41:30.686861  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:41:30.686890  357912 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:41:30.686898  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:41:30.686921  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:41:30.686942  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:41:30.686979  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:41:30.687017  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:30.687858  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:41:30.732722  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:41:30.762557  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:41:30.797976  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:41:30.825854  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1205 21:41:30.863220  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 21:41:30.887018  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:41:30.913503  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 21:41:30.940557  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:41:30.965468  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:41:30.991147  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:41:31.016782  357912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:41:31.036286  357912 ssh_runner.go:195] Run: openssl version
	I1205 21:41:31.042388  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:41:31.053011  357912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:31.057796  357912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:31.057880  357912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:31.064075  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:41:31.076633  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:41:31.089138  357912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:41:31.093653  357912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:41:31.093733  357912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:41:31.099403  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:41:31.111902  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:41:31.122743  357912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:41:31.127551  357912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:41:31.127666  357912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:41:31.133373  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:41:31.143934  357912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:41:31.148739  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:41:31.154995  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:41:31.161288  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:41:31.167555  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:41:31.173476  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:41:31.179371  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 21:41:31.185238  357912 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-751353 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-751353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:41:31.185381  357912 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:41:31.185440  357912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:31.221359  357912 cri.go:89] found id: ""
	I1205 21:41:31.221448  357912 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:41:31.231975  357912 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:41:31.231997  357912 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:41:31.232043  357912 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:41:31.241662  357912 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:41:31.242685  357912 kubeconfig.go:125] found "default-k8s-diff-port-751353" server: "https://192.168.39.106:8444"
	I1205 21:41:31.244889  357912 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:41:31.254747  357912 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.106
	I1205 21:41:31.254798  357912 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:41:31.254815  357912 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:41:31.254884  357912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:31.291980  357912 cri.go:89] found id: ""
	I1205 21:41:31.292075  357912 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:41:31.312332  357912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:41:31.322240  357912 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:41:31.322267  357912 kubeadm.go:157] found existing configuration files:
	
	I1205 21:41:31.322323  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1205 21:41:31.331374  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:41:31.331462  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:41:31.340916  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1205 21:41:31.350121  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:41:31.350209  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:41:31.361302  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1205 21:41:31.372251  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:41:31.372316  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:41:31.383250  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1205 21:41:31.393771  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:41:31.393830  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:41:31.404949  357912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
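(The sequence above greps each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it, so the subsequent kubeadm init phases regenerate them. A standalone sketch of that idea, with the paths and endpoint taken from the log and the helper itself hypothetical:)

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing at the wrong endpoint: remove so kubeadm regenerates it.
			os.Remove(f)
			fmt.Println("removed stale config:", f)
		}
	}
}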
	I1205 21:41:31.416349  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:31.518522  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:32.687862  357912 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.169290848s)
	I1205 21:41:32.687902  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:32.918041  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:33.001916  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:33.088916  357912 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:41:33.089029  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:33.589452  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:34.089830  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:34.589399  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:34.606029  357912 api_server.go:72] duration metric: took 1.517086306s to wait for apiserver process to appear ...
	I1205 21:41:34.606071  357912 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:41:34.606100  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:31.465575  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:31.466129  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:31.466149  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:31.466051  359172 retry.go:31] will retry after 1.375463745s: waiting for machine to come up
	I1205 21:41:32.843149  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:32.843640  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:32.843672  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:32.843577  359172 retry.go:31] will retry after 1.414652744s: waiting for machine to come up
	I1205 21:41:34.259549  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:34.260076  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:34.260106  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:34.260026  359172 retry.go:31] will retry after 2.845213342s: waiting for machine to come up
	I1205 21:41:35.408016  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:35.408069  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:37.262251  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:41:37.262290  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:41:37.262311  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:37.319344  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:41:37.319389  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:41:37.606930  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:37.611927  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:41:37.611962  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:41:38.106614  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:38.111641  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:41:38.111677  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:41:38.606218  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:38.613131  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 200:
	ok
	I1205 21:41:38.628002  357912 api_server.go:141] control plane version: v1.31.2
	I1205 21:41:38.628040  357912 api_server.go:131] duration metric: took 4.021961685s to wait for apiserver health ...
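	(The lines above show the apiserver /healthz endpoint returning 500 with "[-]poststarthook/... failed" entries until the post-start hooks finish, then 200. A minimal, self-contained sketch of that polling pattern is below; it is not minikube's api_server.go helper, and the insecure TLS config is only to keep the example short.)

	// healthzwait.go - a hedged sketch of polling an apiserver /healthz endpoint
	// until it returns HTTP 200 or a timeout expires. Not minikube's implementation.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		// InsecureSkipVerify keeps the sketch self-contained; a real client
		// would trust the cluster CA instead.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil // endpoint answered "ok"
				}
				// A 500 with failed poststarthook lines means the apiserver is up
				// but still initialising; keep retrying.
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		// URL taken from the log above; adjust for your cluster.
		if err := waitForHealthz("https://192.168.39.106:8444/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}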
	I1205 21:41:38.628050  357912 cni.go:84] Creating CNI manager for ""
	I1205 21:41:38.628057  357912 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:38.630126  357912 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:41:38.631655  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:41:38.645320  357912 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
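	(The log shows a 496-byte 1-k8s.conflist being copied into /etc/cni/net.d for the bridge CNI, but not its contents. The sketch below writes a typical bridge-plus-portmap conflist; the JSON is illustrative of the upstream CNI bridge plugin format, not a copy of minikube's exact file, and the 10.244.0.0/16 subnet is an assumption.)

	// cniconf.go - a hedged sketch of writing a typical bridge CNI conflist.
	package main

	import "os"

	const bridgeConflist = `{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    {
	      "type": "bridge",
	      "bridge": "bridge",
	      "isDefaultGateway": true,
	      "ipMasq": true,
	      "hairpinMode": true,
	      "ipam": {
	        "type": "host-local",
	        "subnet": "10.244.0.0/16"
	      }
	    },
	    {
	      "type": "portmap",
	      "capabilities": { "portMappings": true }
	    }
	  ]
	}
	`

	func main() {
		// Written to a local path here; minikube scp's its rendered file to the
		// guest's /etc/cni/net.d over SSH, as the log line above shows.
		if err := os.WriteFile("1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
			panic(err)
		}
	}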
	I1205 21:41:38.668869  357912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:41:38.680453  357912 system_pods.go:59] 8 kube-system pods found
	I1205 21:41:38.680493  357912 system_pods.go:61] "coredns-7c65d6cfc9-mll8z" [fcea0826-1093-43ce-87d0-26fb19447609] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 21:41:38.680501  357912 system_pods.go:61] "etcd-default-k8s-diff-port-751353" [dbade41d-2f53-45d5-aeb6-9b7df2565ef6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 21:41:38.680509  357912 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751353" [b94221d1-ab02-4406-9f34-156813ddfd4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 21:41:38.680516  357912 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751353" [70669ec2-cddf-4ec9-a3d6-ee7cab8cd75c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 21:41:38.680521  357912 system_pods.go:61] "kube-proxy-b4ws4" [d2620959-e3e4-4575-af26-243207a83495] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 21:41:38.680526  357912 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751353" [4a519b64-0ddd-425c-a8d6-de52e3508f80] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 21:41:38.680537  357912 system_pods.go:61] "metrics-server-6867b74b74-xb867" [6ac4cc31-ed56-44b9-9a83-76296436bc34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:41:38.680541  357912 system_pods.go:61] "storage-provisioner" [aabf9cc9-c416-4db2-97b0-23533dd76c28] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 21:41:38.680549  357912 system_pods.go:74] duration metric: took 11.655012ms to wait for pod list to return data ...
	I1205 21:41:38.680557  357912 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:41:38.685260  357912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:41:38.685290  357912 node_conditions.go:123] node cpu capacity is 2
	I1205 21:41:38.685302  357912 node_conditions.go:105] duration metric: took 4.740612ms to run NodePressure ...
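	(The NodePressure step above reads node capacity: 17734596Ki of ephemeral storage and 2 CPUs. A small client-go sketch of fetching those same capacity fields is below; it assumes a kubeconfig at the default location and is not minikube's node_conditions helper.)

	// nodecap.go - a sketch of listing nodes and printing the cpu and
	// ephemeral-storage capacity that the NodePressure check reads.
	package main

	import (
		"context"
		"fmt"
		"path/filepath"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/util/homedir"
	)

	func main() {
		kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			panic(err)
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		}
	}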
	I1205 21:41:38.685335  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:38.997715  357912 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 21:41:39.003388  357912 kubeadm.go:739] kubelet initialised
	I1205 21:41:39.003422  357912 kubeadm.go:740] duration metric: took 5.675839ms waiting for restarted kubelet to initialise ...
	I1205 21:41:39.003435  357912 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:41:39.008779  357912 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.015438  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.015469  357912 pod_ready.go:82] duration metric: took 6.659336ms for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.015480  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.015487  357912 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.022944  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.022979  357912 pod_ready.go:82] duration metric: took 7.480121ms for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.022992  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.023000  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.030021  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.030060  357912 pod_ready.go:82] duration metric: took 7.051363ms for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.030077  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.030087  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.074051  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.074103  357912 pod_ready.go:82] duration metric: took 44.006019ms for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.074130  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.074142  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.472623  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-proxy-b4ws4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.472654  357912 pod_ready.go:82] duration metric: took 398.499259ms for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.472665  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-proxy-b4ws4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.472673  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.873821  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.873863  357912 pod_ready.go:82] duration metric: took 401.179066ms for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.873887  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.873914  357912 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:40.272289  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:40.272322  357912 pod_ready.go:82] duration metric: took 398.392874ms for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:40.272338  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:40.272349  357912 pod_ready.go:39] duration metric: took 1.268896186s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
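	(The pod_ready waits above repeatedly check whether each system pod reports the Ready condition, skipping while the node itself is not Ready. Below is a hedged sketch of that wait loop with client-go; it is not minikube's pod_ready helper, and the kubeconfig path is simply the one printed earlier in this log.)

	// podready.go - a sketch of waiting until a named kube-system pod reports
	// the Ready condition, analogous to the waits logged above.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil && isPodReady(pod) {
				return nil
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/20053-293485/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Println(waitPodReady(cs, "kube-system", "coredns-7c65d6cfc9-mll8z", 4*time.Minute))
	}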
	I1205 21:41:40.272381  357912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:41:40.284524  357912 ops.go:34] apiserver oom_adj: -16
	I1205 21:41:40.284549  357912 kubeadm.go:597] duration metric: took 9.052545962s to restartPrimaryControlPlane
	I1205 21:41:40.284576  357912 kubeadm.go:394] duration metric: took 9.09933298s to StartCluster
	I1205 21:41:40.284597  357912 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:40.284680  357912 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:41:40.286372  357912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:40.286676  357912 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.106 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:41:40.286766  357912 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:41:40.286905  357912 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-751353"
	I1205 21:41:40.286928  357912 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-751353"
	I1205 21:41:40.286933  357912 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-751353"
	I1205 21:41:40.286985  357912 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-751353"
	I1205 21:41:40.286986  357912 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-751353"
	I1205 21:41:40.287022  357912 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-751353"
	W1205 21:41:40.286939  357912 addons.go:243] addon storage-provisioner should already be in state true
	W1205 21:41:40.287039  357912 addons.go:243] addon metrics-server should already be in state true
	I1205 21:41:40.287110  357912 host.go:66] Checking if "default-k8s-diff-port-751353" exists ...
	I1205 21:41:40.286937  357912 config.go:182] Loaded profile config "default-k8s-diff-port-751353": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:41:40.287215  357912 host.go:66] Checking if "default-k8s-diff-port-751353" exists ...
	I1205 21:41:40.287507  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.287571  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.287640  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.287577  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.287688  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.287824  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.288418  357912 out.go:177] * Verifying Kubernetes components...
	I1205 21:41:40.289707  357912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:40.304423  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45233
	I1205 21:41:40.304453  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I1205 21:41:40.304433  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38023
	I1205 21:41:40.304933  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.305518  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.305712  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.305741  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.306151  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.306169  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.306548  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.306829  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.307143  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.307153  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.307800  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.307824  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.308518  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.308565  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.308987  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.309564  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.309596  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.311352  357912 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-751353"
	W1205 21:41:40.311374  357912 addons.go:243] addon default-storageclass should already be in state true
	I1205 21:41:40.311408  357912 host.go:66] Checking if "default-k8s-diff-port-751353" exists ...
	I1205 21:41:40.311880  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.311929  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.325059  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36109
	I1205 21:41:40.325663  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.326356  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.326400  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.326752  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.326942  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.327767  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I1205 21:41:40.328173  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.328657  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.328678  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.328768  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:40.328984  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.329370  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.329409  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.329811  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I1205 21:41:40.330230  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.330631  357912 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 21:41:40.330708  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.330726  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.331052  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.331216  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.332202  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 21:41:40.332226  357912 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 21:41:40.332260  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:40.333642  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:40.335436  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.335614  357912 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:37.107579  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:37.108121  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:37.108153  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:37.108064  359172 retry.go:31] will retry after 2.969209087s: waiting for machine to come up
	I1205 21:41:40.079008  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:40.079546  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:40.079631  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:40.079495  359172 retry.go:31] will retry after 4.062877726s: waiting for machine to come up
	I1205 21:41:40.335902  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:40.335936  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.336055  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:40.336244  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:40.336387  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:40.336516  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
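	(The sshutil lines above construct SSH clients from the machine's id_rsa key so addon manifests can be copied and commands run on the guest. A hedged sketch of that pattern with golang.org/x/crypto/ssh is below; it is not minikube's sshutil package, and the probe command at the end is just an example.)

	// sshclient.go - a sketch of dialing the guest with the machine's key and
	// running one command, the shape of work the "new ssh client" lines set up.
	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		keyPath := "/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa"
		key, err := os.ReadFile(keyPath)
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User: "docker",
			Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
			// The integration VMs use throwaway host keys, so host-key checking
			// is skipped in this sketch; do not do this against real hosts.
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		}
		client, err := ssh.Dial("tcp", "192.168.39.106:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()

		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()

		out, err := session.CombinedOutput("sudo systemctl is-active kubelet")
		fmt.Printf("%s err=%v\n", out, err)
	}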
	I1205 21:41:40.337155  357912 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:41:40.337173  357912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 21:41:40.337195  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:40.339861  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.340258  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:40.340291  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.340556  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:40.340737  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:40.340888  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:40.341009  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:40.353260  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42177
	I1205 21:41:40.353780  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.354465  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.354495  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.354914  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.355181  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.357128  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:40.357445  357912 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 21:41:40.357466  357912 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 21:41:40.357487  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:40.360926  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.361410  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:40.361436  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.361753  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:40.361968  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:40.362143  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:40.362304  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:40.489718  357912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:40.506486  357912 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-751353" to be "Ready" ...
	I1205 21:41:40.575280  357912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:41:40.594938  357912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 21:41:40.709917  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 21:41:40.709953  357912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 21:41:40.766042  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 21:41:40.766076  357912 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 21:41:40.841338  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:41:40.841371  357912 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 21:41:40.890122  357912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
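	(The addon manifests are applied by running kubectl on the guest with an explicit KUBECONFIG, exactly as the command above shows. A local sketch of that exec pattern is below; it runs kubectl directly instead of over SSH to stay self-contained, and otherwise mirrors the logged command line.)

	// applyaddons.go - a sketch of invoking kubectl with an explicit KUBECONFIG
	// for the metrics-server addon manifests listed in the log.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command("/var/lib/minikube/binaries/v1.31.2/kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("kubectl apply failed:", err)
		}
	}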
	I1205 21:41:41.864084  357912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.269106426s)
	I1205 21:41:41.864153  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864168  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864080  357912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.288748728s)
	I1205 21:41:41.864273  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864294  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864544  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.864563  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.864592  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.864614  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.864614  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Closing plugin on server side
	I1205 21:41:41.864623  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864641  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864682  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864714  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864909  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.864929  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.865021  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Closing plugin on server side
	I1205 21:41:41.865050  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.865073  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.873134  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.873158  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.873488  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.873517  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.896304  357912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.006129117s)
	I1205 21:41:41.896383  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.896401  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.896726  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.896749  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.896760  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.896770  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.897064  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.897084  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.897097  357912 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-751353"
	I1205 21:41:41.899809  357912 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1205 21:41:40.409151  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:40.409197  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:41.901166  357912 addons.go:510] duration metric: took 1.61441521s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1205 21:41:42.512064  357912 node_ready.go:53] node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:45.011050  357912 node_ready.go:53] node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:44.147162  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.147843  358357 main.go:141] libmachine: (old-k8s-version-601806) Found IP for machine: 192.168.61.123
	I1205 21:41:44.147874  358357 main.go:141] libmachine: (old-k8s-version-601806) Reserving static IP address...
	I1205 21:41:44.147892  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has current primary IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.148399  358357 main.go:141] libmachine: (old-k8s-version-601806) Reserved static IP address: 192.168.61.123
	I1205 21:41:44.148443  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "old-k8s-version-601806", mac: "52:54:00:11:1e:c8", ip: "192.168.61.123"} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.148458  358357 main.go:141] libmachine: (old-k8s-version-601806) Waiting for SSH to be available...
	I1205 21:41:44.148487  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | skip adding static IP to network mk-old-k8s-version-601806 - found existing host DHCP lease matching {name: "old-k8s-version-601806", mac: "52:54:00:11:1e:c8", ip: "192.168.61.123"}
	I1205 21:41:44.148519  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | Getting to WaitForSSH function...
	I1205 21:41:44.151017  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.151372  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.151406  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.151544  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | Using SSH client type: external
	I1205 21:41:44.151575  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa (-rw-------)
	I1205 21:41:44.151611  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:41:44.151629  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | About to run SSH command:
	I1205 21:41:44.151656  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | exit 0
	I1205 21:41:44.282019  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | SSH cmd err, output: <nil>: 
	I1205 21:41:44.282419  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetConfigRaw
	I1205 21:41:44.283146  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:44.285924  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.286335  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.286365  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.286633  358357 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/config.json ...
	I1205 21:41:44.286844  358357 machine.go:93] provisionDockerMachine start ...
	I1205 21:41:44.286865  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:44.287119  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.289692  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.290060  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.290090  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.290192  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.290392  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.290567  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.290726  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.290904  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:44.291168  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:44.291183  358357 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:41:44.410444  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:41:44.410483  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:41:44.410769  358357 buildroot.go:166] provisioning hostname "old-k8s-version-601806"
	I1205 21:41:44.410800  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:41:44.410975  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.414019  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.414402  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.414437  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.414618  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.414822  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.415001  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.415139  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.415384  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:44.415620  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:44.415639  358357 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-601806 && echo "old-k8s-version-601806" | sudo tee /etc/hostname
	I1205 21:41:44.544783  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-601806
	
	I1205 21:41:44.544820  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.547980  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.548253  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.548284  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.548548  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.548806  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.549015  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.549199  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.549363  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:44.549596  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:44.549625  358357 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-601806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-601806/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-601806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:41:44.675051  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:41:44.675089  358357 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:41:44.675133  358357 buildroot.go:174] setting up certificates
	I1205 21:41:44.675147  358357 provision.go:84] configureAuth start
	I1205 21:41:44.675161  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:41:44.675484  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:44.678325  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.678651  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.678670  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.678845  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.681024  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.681380  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.681419  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.681555  358357 provision.go:143] copyHostCerts
	I1205 21:41:44.681614  358357 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:41:44.681635  358357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:41:44.681692  358357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:41:44.681807  358357 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:41:44.681818  358357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:41:44.681840  358357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:41:44.681895  358357 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:41:44.681923  358357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:41:44.681950  358357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:41:44.682008  358357 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-601806 san=[127.0.0.1 192.168.61.123 localhost minikube old-k8s-version-601806]
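	(provision.go above issues a server certificate whose SANs cover 127.0.0.1, 192.168.61.123, localhost, minikube and the hostname, signed with the profile's ca.pem/ca-key.pem. The compact sketch below builds a certificate with those SANs using crypto/x509; it self-signs for brevity rather than signing with the CA, so it is an illustration of the technique, not minikube's provisioning code.)

	// servercert.go - a hedged sketch of issuing a server certificate carrying
	// the SANs from the log line above (self-signed here to keep it short).
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-601806"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs as listed in the provision.go log line above.
			DNSNames:    []string{"localhost", "minikube", "old-k8s-version-601806"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.123")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		_ = os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644)
		_ = os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0o600)
	}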
	I1205 21:41:44.920345  358357 provision.go:177] copyRemoteCerts
	I1205 21:41:44.920412  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:41:44.920445  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.923237  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.923573  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.923617  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.923858  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.924082  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.924266  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.924408  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.013123  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:41:45.037220  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 21:41:45.061460  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:41:45.086412  358357 provision.go:87] duration metric: took 411.247612ms to configureAuth
	I1205 21:41:45.086449  358357 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:41:45.086670  358357 config.go:182] Loaded profile config "old-k8s-version-601806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 21:41:45.086772  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.089593  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.090011  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.090044  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.090279  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.090515  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.090695  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.090845  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.091119  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:45.091338  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:45.091355  358357 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:41:45.320779  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:41:45.320809  358357 machine.go:96] duration metric: took 1.033951427s to provisionDockerMachine
	I1205 21:41:45.320822  358357 start.go:293] postStartSetup for "old-k8s-version-601806" (driver="kvm2")
	I1205 21:41:45.320833  358357 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:41:45.320864  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.321259  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:41:45.321295  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.324521  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.324898  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.324926  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.325061  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.325278  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.325449  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.325608  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.413576  358357 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:41:45.418099  358357 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:41:45.418129  358357 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:41:45.418192  358357 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:41:45.418313  358357 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:41:45.418436  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:41:45.428537  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:45.453505  358357 start.go:296] duration metric: took 132.665138ms for postStartSetup
	I1205 21:41:45.453578  358357 fix.go:56] duration metric: took 20.301569608s for fixHost
	I1205 21:41:45.453610  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.456671  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.457095  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.457119  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.457317  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.457534  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.457723  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.457851  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.458100  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:45.458291  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:45.458303  358357 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:41:45.574874  357296 start.go:364] duration metric: took 55.701965725s to acquireMachinesLock for "embed-certs-425614"
	I1205 21:41:45.574934  357296 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:41:45.574944  357296 fix.go:54] fixHost starting: 
	I1205 21:41:45.575470  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:45.575532  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:45.593184  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39281
	I1205 21:41:45.593628  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:45.594222  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:41:45.594249  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:45.594599  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:45.594797  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:41:45.594945  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:41:45.596532  357296 fix.go:112] recreateIfNeeded on embed-certs-425614: state=Stopped err=<nil>
	I1205 21:41:45.596560  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	W1205 21:41:45.596698  357296 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:41:45.598630  357296 out.go:177] * Restarting existing kvm2 VM for "embed-certs-425614" ...
	I1205 21:41:45.574677  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434905.556875765
	
	I1205 21:41:45.574707  358357 fix.go:216] guest clock: 1733434905.556875765
	I1205 21:41:45.574720  358357 fix.go:229] Guest: 2024-12-05 21:41:45.556875765 +0000 UTC Remote: 2024-12-05 21:41:45.453584649 +0000 UTC m=+209.931227837 (delta=103.291116ms)
	I1205 21:41:45.574744  358357 fix.go:200] guest clock delta is within tolerance: 103.291116ms
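
As a rough illustration of the guest-clock check logged in the fix.go lines above (minikube reads `date +%s.%N` on the VM and compares it with the host-side timestamp), the Go sketch below reproduces the comparison. It is not minikube's actual code path: parseUnixNano, the hard-coded timestamps copied from the log, and the 2-second tolerance are illustrative assumptions.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseUnixNano turns `date +%s.%N` output into a time.Time.
func parseUnixNano(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/truncate fraction to nanoseconds
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	// Guest timestamp as logged from `date +%s.%N`.
	guest, err := parseUnixNano("1733434905.556875765")
	if err != nil {
		panic(err)
	}
	// Host-side timestamp from the same log line.
	remote := time.Date(2024, 12, 5, 21, 41, 45, 453584649, time.UTC)
	delta := guest.Sub(remote)
	const tolerance = 2 * time.Second // assumed tolerance, for illustration only
	fmt.Printf("guest clock delta: %v, within tolerance: %v\n",
		delta, delta > -tolerance && delta < tolerance)
	// Prints a delta of ~103.291116ms, matching the value logged above.
}
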
	I1205 21:41:45.574749  358357 start.go:83] releasing machines lock for "old-k8s-version-601806", held for 20.422787607s
	I1205 21:41:45.574777  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.575102  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:45.578097  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.578534  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.578565  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.578786  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.579457  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.579662  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.579786  358357 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:41:45.579845  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.579919  358357 ssh_runner.go:195] Run: cat /version.json
	I1205 21:41:45.579944  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.582811  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.582951  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.583117  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.583153  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.583388  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.583409  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.583436  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.583601  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.583609  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.583801  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.583868  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.583990  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.584026  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.584185  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.667101  358357 ssh_runner.go:195] Run: systemctl --version
	I1205 21:41:45.694059  358357 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:41:45.843409  358357 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:41:45.849628  358357 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:41:45.849714  358357 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:41:45.867490  358357 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:41:45.867526  358357 start.go:495] detecting cgroup driver to use...
	I1205 21:41:45.867613  358357 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:41:45.887817  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:41:45.902760  358357 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:41:45.902837  358357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:41:45.921492  358357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:41:45.938236  358357 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:41:46.094034  358357 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:41:46.313078  358357 docker.go:233] disabling docker service ...
	I1205 21:41:46.313159  358357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:41:46.330094  358357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:41:46.348887  358357 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:41:46.539033  358357 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:41:46.664752  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:41:46.681892  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:41:46.703802  358357 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 21:41:46.703907  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.716808  358357 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:41:46.716869  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.728088  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.739606  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.750998  358357 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:41:46.763097  358357 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:41:46.773657  358357 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:41:46.773720  358357 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:41:46.787789  358357 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:41:46.799018  358357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:46.920247  358357 ssh_runner.go:195] Run: sudo systemctl restart crio
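
The sequence above writes /etc/crictl.yaml, rewrites pause_image and cgroup_manager in /etc/crio/crio.conf.d/02-crio.conf with sed, and then restarts CRI-O. Below is a minimal Go sketch of the same two file edits; it is illustrative only (not the minikube code path), assumes it runs as root on the guest, and keeps the paths and values shown in the log.

package main

import (
	"os"
	"regexp"
)

func main() {
	// /etc/crictl.yaml: point crictl at the CRI-O socket.
	crictl := "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
	if err := os.WriteFile("/etc/crictl.yaml", []byte(crictl), 0644); err != nil {
		panic(err)
	}

	// 02-crio.conf: swap in the pause image and cgroup manager, mirroring the
	// `sed -i 's|^.*pause_image = .*$|...|'` style edits in the log.
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(conf)
	if err != nil {
		panic(err)
	}
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(conf, data, 0644); err != nil {
		panic(err)
	}
	// After edits like these, CRI-O still has to be restarted
	// (`systemctl restart crio`) for the new settings to take effect.
}
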
	I1205 21:41:47.024151  358357 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:41:47.024236  358357 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:41:47.029240  358357 start.go:563] Will wait 60s for crictl version
	I1205 21:41:47.029326  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:47.033665  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:41:47.072480  358357 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:41:47.072588  358357 ssh_runner.go:195] Run: crio --version
	I1205 21:41:47.110829  358357 ssh_runner.go:195] Run: crio --version
	I1205 21:41:47.141698  358357 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1205 21:41:45.600135  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Start
	I1205 21:41:45.600390  357296 main.go:141] libmachine: (embed-certs-425614) Ensuring networks are active...
	I1205 21:41:45.601186  357296 main.go:141] libmachine: (embed-certs-425614) Ensuring network default is active
	I1205 21:41:45.601636  357296 main.go:141] libmachine: (embed-certs-425614) Ensuring network mk-embed-certs-425614 is active
	I1205 21:41:45.602188  357296 main.go:141] libmachine: (embed-certs-425614) Getting domain xml...
	I1205 21:41:45.603057  357296 main.go:141] libmachine: (embed-certs-425614) Creating domain...
	I1205 21:41:47.045240  357296 main.go:141] libmachine: (embed-certs-425614) Waiting to get IP...
	I1205 21:41:47.046477  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.047047  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.047150  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.047040  359359 retry.go:31] will retry after 219.743522ms: waiting for machine to come up
	I1205 21:41:47.268762  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.269407  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.269442  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.269336  359359 retry.go:31] will retry after 242.318322ms: waiting for machine to come up
	I1205 21:41:45.410351  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:45.410420  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:45.616395  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": read tcp 192.168.50.1:48034->192.168.50.141:8443: read: connection reset by peer
	I1205 21:41:45.906800  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:45.907594  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": dial tcp 192.168.50.141:8443: connect: connection refused
	I1205 21:41:46.407096  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:47.011671  357912 node_ready.go:53] node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:48.011005  357912 node_ready.go:49] node "default-k8s-diff-port-751353" has status "Ready":"True"
	I1205 21:41:48.011040  357912 node_ready.go:38] duration metric: took 7.504506203s for node "default-k8s-diff-port-751353" to be "Ready" ...
	I1205 21:41:48.011060  357912 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:41:48.021950  357912 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:48.038141  357912 pod_ready.go:93] pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:48.038176  357912 pod_ready.go:82] duration metric: took 16.187757ms for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:48.038191  357912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:50.046001  357912 pod_ready.go:103] pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"False"
	I1205 21:41:47.143015  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:47.146059  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:47.146503  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:47.146536  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:47.146811  358357 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1205 21:41:47.151654  358357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:47.164839  358357 kubeadm.go:883] updating cluster {Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:41:47.165019  358357 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 21:41:47.165090  358357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:47.213546  358357 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 21:41:47.213640  358357 ssh_runner.go:195] Run: which lz4
	I1205 21:41:47.219695  358357 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:41:47.224752  358357 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:41:47.224801  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1205 21:41:48.787144  358357 crio.go:462] duration metric: took 1.567500675s to copy over tarball
	I1205 21:41:48.787253  358357 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 21:41:47.514192  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.514819  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.514860  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.514767  359359 retry.go:31] will retry after 467.274164ms: waiting for machine to come up
	I1205 21:41:47.983367  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.983985  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.984015  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.983919  359359 retry.go:31] will retry after 577.298405ms: waiting for machine to come up
	I1205 21:41:48.562668  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:48.563230  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:48.563278  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:48.563175  359359 retry.go:31] will retry after 707.838313ms: waiting for machine to come up
	I1205 21:41:49.273409  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:49.273943  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:49.273977  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:49.273863  359359 retry.go:31] will retry after 908.711328ms: waiting for machine to come up
	I1205 21:41:50.183875  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:50.184278  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:50.184310  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:50.184225  359359 retry.go:31] will retry after 941.803441ms: waiting for machine to come up
	I1205 21:41:51.127915  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:51.128486  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:51.128549  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:51.128467  359359 retry.go:31] will retry after 1.289932898s: waiting for machine to come up
	I1205 21:41:51.407970  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:51.408037  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:52.046717  357912 pod_ready.go:103] pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"False"
	I1205 21:41:54.367409  357912 pod_ready.go:93] pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.367441  357912 pod_ready.go:82] duration metric: took 6.32924141s for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.367457  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.373495  357912 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.373546  357912 pod_ready.go:82] duration metric: took 6.066723ms for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.373565  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.380982  357912 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.381010  357912 pod_ready.go:82] duration metric: took 7.434049ms for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.381024  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.387297  357912 pod_ready.go:93] pod "kube-proxy-b4ws4" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.387321  357912 pod_ready.go:82] duration metric: took 6.290388ms for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.387331  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.392902  357912 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.392931  357912 pod_ready.go:82] duration metric: took 5.593155ms for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.392942  357912 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:51.832182  358357 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.044870872s)
	I1205 21:41:51.832229  358357 crio.go:469] duration metric: took 3.045045829s to extract the tarball
	I1205 21:41:51.832241  358357 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 21:41:51.876863  358357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:51.916280  358357 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 21:41:51.916312  358357 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 21:41:51.916448  358357 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:51.916490  358357 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1205 21:41:51.916520  358357 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:51.916416  358357 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:51.916539  358357 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1205 21:41:51.916422  358357 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:51.916534  358357 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:51.916415  358357 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:51.918641  358357 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:51.918657  358357 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:51.918673  358357 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:51.918675  358357 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:51.918648  358357 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:51.918699  358357 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 21:41:51.918648  358357 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1205 21:41:51.918649  358357 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.084598  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.085487  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.085575  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.089387  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.097316  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.097466  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.143119  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1205 21:41:52.188847  358357 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1205 21:41:52.188903  358357 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.188964  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.249950  358357 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1205 21:41:52.249988  358357 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1205 21:41:52.250006  358357 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.250026  358357 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.250065  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.250070  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.250110  358357 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1205 21:41:52.250142  358357 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.250181  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.264329  358357 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1205 21:41:52.264458  358357 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.264384  358357 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1205 21:41:52.264539  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.264575  358357 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.264634  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.276286  358357 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1205 21:41:52.276339  358357 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 21:41:52.276369  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.276378  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.276383  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.276499  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.276544  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.277043  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.277127  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.383827  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.385512  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.385513  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:41:52.404747  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.413164  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.413203  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.413257  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.502227  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.551456  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:41:52.551634  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.551659  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.596670  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.596746  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.596677  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.649281  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1205 21:41:52.726027  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:41:52.726093  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1205 21:41:52.726149  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1205 21:41:52.726173  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1205 21:41:52.726266  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1205 21:41:52.726300  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1205 21:41:52.759125  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 21:41:52.856925  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:53.004246  358357 cache_images.go:92] duration metric: took 1.087915709s to LoadCachedImages
	W1205 21:41:53.004349  358357 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1205 21:41:53.004364  358357 kubeadm.go:934] updating node { 192.168.61.123 8443 v1.20.0 crio true true} ...
	I1205 21:41:53.004516  358357 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-601806 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:41:53.004596  358357 ssh_runner.go:195] Run: crio config
	I1205 21:41:53.053135  358357 cni.go:84] Creating CNI manager for ""
	I1205 21:41:53.053159  358357 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:53.053174  358357 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:41:53.053208  358357 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-601806 NodeName:old-k8s-version-601806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 21:41:53.053385  358357 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-601806"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:41:53.053465  358357 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1205 21:41:53.064225  358357 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:41:53.064320  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:41:53.074565  358357 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 21:41:53.091812  358357 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:41:53.111455  358357 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1205 21:41:53.131057  358357 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I1205 21:41:53.135026  358357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
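
The /etc/hosts update above uses a grep -v / echo pipeline to drop any stale control-plane.minikube.internal entry and append the current one. A small Go equivalent is sketched below, assuming root on the guest and reusing the IP and hostname from the log; pinHost is an illustrative helper name, not a minikube function.

package main

import (
	"os"
	"strings"
)

// pinHost removes any line ending in "\t<name>" and appends "<ip>\t<name>",
// mirroring the grep -v / echo rewrite shown in the log.
func pinHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := pinHost("/etc/hosts", "192.168.61.123", "control-plane.minikube.internal"); err != nil {
		panic(err)
	}
}
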
	I1205 21:41:53.148476  358357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:53.289114  358357 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:53.309855  358357 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806 for IP: 192.168.61.123
	I1205 21:41:53.309886  358357 certs.go:194] generating shared ca certs ...
	I1205 21:41:53.309923  358357 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:53.310122  358357 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:41:53.310176  358357 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:41:53.310202  358357 certs.go:256] generating profile certs ...
	I1205 21:41:53.310390  358357 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/client.key
	I1205 21:41:53.310485  358357 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.key.a6e43dea
	I1205 21:41:53.310568  358357 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.key
	I1205 21:41:53.310814  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:41:53.310866  358357 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:41:53.310880  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:41:53.310912  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:41:53.310960  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:41:53.311000  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:41:53.311072  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:53.312161  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:41:53.353059  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:41:53.386512  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:41:53.423583  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:41:53.463250  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 21:41:53.494884  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 21:41:53.529876  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:41:53.579695  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 21:41:53.606144  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:41:53.631256  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:41:53.656184  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:41:53.680842  358357 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:41:53.700705  358357 ssh_runner.go:195] Run: openssl version
	I1205 21:41:53.707800  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:41:53.719776  358357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:41:53.724558  358357 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:41:53.724630  358357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:41:53.731088  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:41:53.742620  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:41:53.754961  358357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:53.759594  358357 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:53.759669  358357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:53.765536  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:41:53.776756  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:41:53.789117  358357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:41:53.793629  358357 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:41:53.793707  358357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:41:53.799394  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
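
Each ca-certificates step above hashes a PEM with `openssl x509 -hash -noout` and links it as /etc/ssl/certs/<hash>.0. The rough Go sketch below shells out to the same openssl invocation and creates the symlink; linkCertByHash is an illustrative name, the paths come from the log, and real usage would need root.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the OpenSSL subject hash of certPath and links the
// cert under certsDir as "<hash>.0", like the `ln -fs` calls in the log.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("openssl hash failed: %w", err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mirror `ln -fs`: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/3007652.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
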
	I1205 21:41:53.810660  358357 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:41:53.815344  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:41:53.821418  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:41:53.827800  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:41:53.834376  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:41:53.840645  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:41:53.847470  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
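
The `-checkend 86400` probes above ask openssl whether each cluster certificate expires within the next 24 hours. The same check can be expressed with Go's crypto/x509, as in this sketch; expiresWithin is an illustrative helper, and the path is one of the certs named in the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d,
// equivalent in spirit to `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
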
	I1205 21:41:53.854401  358357 kubeadm.go:392] StartCluster: {Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:41:53.854504  358357 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:41:53.854569  358357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:53.893993  358357 cri.go:89] found id: ""
	I1205 21:41:53.894081  358357 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:41:53.904808  358357 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:41:53.904829  358357 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:41:53.904876  358357 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:41:53.915573  358357 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:41:53.916624  358357 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-601806" does not appear in /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:41:53.917310  358357 kubeconfig.go:62] /home/jenkins/minikube-integration/20053-293485/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-601806" cluster setting kubeconfig missing "old-k8s-version-601806" context setting]
	I1205 21:41:53.918211  358357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:53.978448  358357 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:41:53.989629  358357 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.123
	I1205 21:41:53.989674  358357 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:41:53.989707  358357 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:41:53.989791  358357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:54.027722  358357 cri.go:89] found id: ""
	I1205 21:41:54.027816  358357 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:41:54.045095  358357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:41:54.058119  358357 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:41:54.058145  358357 kubeadm.go:157] found existing configuration files:
	
	I1205 21:41:54.058211  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:41:54.070466  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:41:54.070563  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:41:54.081555  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:41:54.093332  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:41:54.093415  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:41:54.103877  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:41:54.114047  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:41:54.114117  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:41:54.126566  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:41:54.138673  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:41:54.138767  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:41:54.149449  358357 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
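The grep/rm sequence above is stale-config cleanup: each kubeconfig under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8443, and is otherwise removed so the `kubeadm init phase kubeconfig` step below recreates it. A rough, hypothetical Go sketch of that loop (simplified from what the log shows, not the actual kubeadm.go code):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
            path := "/etc/kubernetes/" + name
            // grep exits non-zero when the endpoint is missing or the file does not exist
            if err := exec.Command("sudo", "grep", endpoint, path).Run(); err != nil {
                log.Printf("%s does not reference %s; removing so kubeadm can recreate it", path, endpoint)
                if err := exec.Command("sudo", "rm", "-f", path).Run(); err != nil {
                    log.Printf("failed to remove %s: %v", path, err)
                }
            }
        }
    }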
	I1205 21:41:54.162818  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:54.294483  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:54.983905  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:55.218496  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:55.340478  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:55.440382  358357 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:41:55.440495  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:52.419705  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:52.420193  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:52.420230  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:52.420115  359359 retry.go:31] will retry after 1.684643705s: waiting for machine to come up
	I1205 21:41:54.106187  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:54.106714  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:54.106754  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:54.106660  359359 retry.go:31] will retry after 1.531754159s: waiting for machine to come up
	I1205 21:41:55.639991  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:55.640467  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:55.640503  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:55.640401  359359 retry.go:31] will retry after 2.722460669s: waiting for machine to come up
	I1205 21:41:56.409347  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:56.409397  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:56.399969  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:41:58.900439  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:41:55.941513  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:56.440634  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:56.941451  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:57.440602  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:57.940778  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:58.441396  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:58.941148  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:59.441320  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:59.941573  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:00.441005  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
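The half-second cadence of the `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above (and continuing below) is the wait-for-apiserver-process loop: the restart does not proceed until pgrep finds a matching process. A hypothetical sketch of such a poll loop, not minikube's actual api_server.go:

    package main

    import (
        "fmt"
        "time"
    )

    // waitForAPIServerProcess polls an injected "run a remote command" function
    // until pgrep succeeds or the timeout expires.
    func waitForAPIServerProcess(run func(cmd string) error, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 as soon as a matching kube-apiserver process exists
            if run("sudo pgrep -xnf kube-apiserver.*minikube.*") == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %v", timeout)
    }

    func main() {
        // Illustrative call with a stub runner that never succeeds.
        err := waitForAPIServerProcess(func(string) error { return fmt.Errorf("not yet") }, 2*time.Second)
        fmt.Println(err)
    }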
	I1205 21:41:58.366356  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:58.366849  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:58.366874  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:58.366805  359359 retry.go:31] will retry after 2.312099452s: waiting for machine to come up
	I1205 21:42:00.680417  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:00.680953  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:42:00.680977  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:42:00.680904  359359 retry.go:31] will retry after 3.145457312s: waiting for machine to come up
	I1205 21:42:01.410313  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:42:01.410382  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.204308  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:03.204353  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:03.204374  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.246513  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:03.246569  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:03.406787  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.411529  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:03.411571  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:03.907108  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.911621  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:03.911669  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:04.407111  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:04.416185  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:04.416225  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:04.906151  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:04.913432  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 200:
	ok
	I1205 21:42:04.923422  357831 api_server.go:141] control plane version: v1.31.2
	I1205 21:42:04.923466  357831 api_server.go:131] duration metric: took 40.017479306s to wait for apiserver health ...
	I1205 21:42:04.923479  357831 cni.go:84] Creating CNI manager for ""
	I1205 21:42:04.923488  357831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:42:04.925861  357831 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
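The healthz sequence above shows the usual progression while an apiserver comes back up: first 403 while the probe is still treated as anonymous, then 500 while post-start hooks such as rbac/bootstrap-roles finish, and finally 200 "ok", at which point the roughly 40 s wait completes and CNI configuration starts. A minimal sketch of such a wait loop, assuming an *http.Client already configured with the cluster's CA and client certificates (sketch only, not minikube's code):

    package apiwait

    import (
        "fmt"
        "io"
        "net/http"
        "strings"
        "time"
    )

    // waitForHealthz keeps probing /healthz until it returns 200 "ok",
    // treating 403 and 500 responses as "not ready yet".
    func waitForHealthz(client *http.Client, url string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if resp, err := client.Get(url); err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && strings.TrimSpace(string(body)) == "ok" {
                    return nil
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s did not become healthy within %v", url, timeout)
    }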
	I1205 21:42:01.399834  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:03.399888  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:00.941505  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:01.441014  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:01.940938  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:02.440702  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:02.940749  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:03.441519  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:03.941098  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:04.440754  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:04.941260  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:05.441179  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:03.830452  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.830997  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has current primary IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.831031  357296 main.go:141] libmachine: (embed-certs-425614) Found IP for machine: 192.168.72.8
	I1205 21:42:03.831046  357296 main.go:141] libmachine: (embed-certs-425614) Reserving static IP address...
	I1205 21:42:03.831505  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "embed-certs-425614", mac: "52:54:00:d8:bb:db", ip: "192.168.72.8"} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.831534  357296 main.go:141] libmachine: (embed-certs-425614) Reserved static IP address: 192.168.72.8
	I1205 21:42:03.831552  357296 main.go:141] libmachine: (embed-certs-425614) DBG | skip adding static IP to network mk-embed-certs-425614 - found existing host DHCP lease matching {name: "embed-certs-425614", mac: "52:54:00:d8:bb:db", ip: "192.168.72.8"}
	I1205 21:42:03.831566  357296 main.go:141] libmachine: (embed-certs-425614) Waiting for SSH to be available...
	I1205 21:42:03.831574  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Getting to WaitForSSH function...
	I1205 21:42:03.833969  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.834352  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.834388  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.834532  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Using SSH client type: external
	I1205 21:42:03.834550  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa (-rw-------)
	I1205 21:42:03.834569  357296 main.go:141] libmachine: (embed-certs-425614) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.8 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:42:03.834587  357296 main.go:141] libmachine: (embed-certs-425614) DBG | About to run SSH command:
	I1205 21:42:03.834598  357296 main.go:141] libmachine: (embed-certs-425614) DBG | exit 0
	I1205 21:42:03.962943  357296 main.go:141] libmachine: (embed-certs-425614) DBG | SSH cmd err, output: <nil>: 
	I1205 21:42:03.963457  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetConfigRaw
	I1205 21:42:03.964327  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:03.967583  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.968035  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.968069  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.968471  357296 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/config.json ...
	I1205 21:42:03.968788  357296 machine.go:93] provisionDockerMachine start ...
	I1205 21:42:03.968820  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:03.969139  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:03.972165  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.972515  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.972545  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.972636  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:03.972845  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:03.973079  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:03.973321  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:03.973541  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:03.973743  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:03.973756  357296 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:42:04.086658  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:42:04.086701  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:42:04.087004  357296 buildroot.go:166] provisioning hostname "embed-certs-425614"
	I1205 21:42:04.087040  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:42:04.087297  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.090622  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.091119  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.091157  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.091374  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.091647  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.091854  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.092065  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.092302  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:04.092559  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:04.092590  357296 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-425614 && echo "embed-certs-425614" | sudo tee /etc/hostname
	I1205 21:42:04.222630  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-425614
	
	I1205 21:42:04.222668  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.225969  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.226469  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.226507  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.226742  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.226966  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.227230  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.227436  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.227672  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:04.227862  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:04.227878  357296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-425614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-425614/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-425614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:42:04.351706  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:42:04.351775  357296 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:42:04.351853  357296 buildroot.go:174] setting up certificates
	I1205 21:42:04.351869  357296 provision.go:84] configureAuth start
	I1205 21:42:04.351894  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:42:04.352249  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:04.355753  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.356188  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.356232  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.356460  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.359365  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.359864  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.359911  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.360105  357296 provision.go:143] copyHostCerts
	I1205 21:42:04.360181  357296 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:42:04.360209  357296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:42:04.360287  357296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:42:04.360424  357296 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:42:04.360437  357296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:42:04.360470  357296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:42:04.360554  357296 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:42:04.360564  357296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:42:04.360592  357296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:42:04.360668  357296 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.embed-certs-425614 san=[127.0.0.1 192.168.72.8 embed-certs-425614 localhost minikube]
	I1205 21:42:04.632816  357296 provision.go:177] copyRemoteCerts
	I1205 21:42:04.632901  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:42:04.632942  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.636150  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.636618  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.636654  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.636828  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.637044  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.637271  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.637464  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:04.724883  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:42:04.754994  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 21:42:04.783996  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 21:42:04.810963  357296 provision.go:87] duration metric: took 459.073427ms to configureAuth
	I1205 21:42:04.811003  357296 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:42:04.811279  357296 config.go:182] Loaded profile config "embed-certs-425614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:42:04.811384  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.814420  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.814863  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.814996  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.815102  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.815346  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.815586  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.815767  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.815972  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:04.816238  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:04.816287  357296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:42:05.064456  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:42:05.064490  357296 machine.go:96] duration metric: took 1.095680989s to provisionDockerMachine
	I1205 21:42:05.064509  357296 start.go:293] postStartSetup for "embed-certs-425614" (driver="kvm2")
	I1205 21:42:05.064521  357296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:42:05.064560  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.064956  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:42:05.064997  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.068175  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.068618  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.068657  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.068994  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.069241  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.069449  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.069602  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:05.157732  357296 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:42:05.162706  357296 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:42:05.162752  357296 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:42:05.162845  357296 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:42:05.162920  357296 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:42:05.163016  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:42:05.179784  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:42:05.207166  357296 start.go:296] duration metric: took 142.636794ms for postStartSetup
	I1205 21:42:05.207223  357296 fix.go:56] duration metric: took 19.632279138s for fixHost
	I1205 21:42:05.207253  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.210923  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.211426  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.211463  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.211657  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.211896  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.212114  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.212282  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.212467  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:05.212723  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:05.212739  357296 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:42:05.327710  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434925.280377877
	
	I1205 21:42:05.327737  357296 fix.go:216] guest clock: 1733434925.280377877
	I1205 21:42:05.327749  357296 fix.go:229] Guest: 2024-12-05 21:42:05.280377877 +0000 UTC Remote: 2024-12-05 21:42:05.207229035 +0000 UTC m=+357.921750384 (delta=73.148842ms)
	I1205 21:42:05.327795  357296 fix.go:200] guest clock delta is within tolerance: 73.148842ms
	I1205 21:42:05.327803  357296 start.go:83] releasing machines lock for "embed-certs-425614", held for 19.752893913s
	I1205 21:42:05.327826  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.328184  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:05.331359  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.331686  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.331722  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.331953  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.332650  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.332870  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.332999  357296 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:42:05.333104  357296 ssh_runner.go:195] Run: cat /version.json
	I1205 21:42:05.333112  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.333137  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.336283  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.336576  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.336749  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.336784  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.336987  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.337074  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.337123  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.337206  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.337228  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.337457  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.337475  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.337669  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.337668  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:05.337806  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:05.443865  357296 ssh_runner.go:195] Run: systemctl --version
	I1205 21:42:05.450866  357296 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:42:05.596799  357296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:42:05.603700  357296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:42:05.603781  357296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:42:05.619488  357296 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:42:05.619521  357296 start.go:495] detecting cgroup driver to use...
	I1205 21:42:05.619622  357296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:42:05.639018  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:42:05.655878  357296 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:42:05.655942  357296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:42:05.671883  357296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:42:05.691645  357296 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:42:05.804200  357296 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:42:05.997573  357296 docker.go:233] disabling docker service ...
	I1205 21:42:05.997702  357296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:42:06.014153  357296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:42:06.031828  357296 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:42:06.179266  357296 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:42:06.318806  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:42:06.332681  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:42:06.353528  357296 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:42:06.353615  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.365381  357296 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:42:06.365472  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.377020  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.389325  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.402399  357296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:42:06.414106  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.425792  357296 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.445787  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.457203  357296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:42:06.467275  357296 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:42:06.467356  357296 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:42:06.481056  357296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:42:06.492188  357296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:42:06.634433  357296 ssh_runner.go:195] Run: sudo systemctl restart crio
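
	[editor note] The block above rewrites /etc/crio/crio.conf.d/02-crio.conf with sed over ssh (pause image, cgroup manager, conmon cgroup, unprivileged port sysctl) and then restarts crio. A minimal local sketch of the same pause_image / cgroup_manager rewrite, assuming the same drop-in path as the log; this is an illustration, not minikube's implementation:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		// Drop-in file edited in the log above (assumed to exist locally).
		const conf = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(conf)
		if err != nil {
			panic(err)
		}
		// Same substitutions the sed commands perform: pin the pause image and
		// force the cgroupfs cgroup manager.
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10"`))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(conf, out, 0o644); err != nil {
			panic(err)
		}
		fmt.Println("updated", conf, "- restart crio to apply")
	}

	As in the log, the change only takes effect after "sudo systemctl restart crio".
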
	I1205 21:42:06.727916  357296 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:42:06.728007  357296 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:42:06.732581  357296 start.go:563] Will wait 60s for crictl version
	I1205 21:42:06.732645  357296 ssh_runner.go:195] Run: which crictl
	I1205 21:42:06.736545  357296 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:42:06.775945  357296 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:42:06.776069  357296 ssh_runner.go:195] Run: crio --version
	I1205 21:42:06.808556  357296 ssh_runner.go:195] Run: crio --version
	I1205 21:42:06.844968  357296 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 21:42:06.846380  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:06.849873  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:06.850366  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:06.850410  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:06.850664  357296 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 21:42:06.855593  357296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:42:06.869323  357296 kubeadm.go:883] updating cluster {Name:embed-certs-425614 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.31.2 ClusterName:embed-certs-425614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:42:06.869513  357296 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:42:06.869598  357296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:42:06.906593  357296 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 21:42:06.906667  357296 ssh_runner.go:195] Run: which lz4
	I1205 21:42:06.910838  357296 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:42:06.915077  357296 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:42:06.915129  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 21:42:04.927426  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:42:04.941208  357831 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 21:42:04.968170  357831 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:42:04.998847  357831 system_pods.go:59] 8 kube-system pods found
	I1205 21:42:04.998907  357831 system_pods.go:61] "coredns-7c65d6cfc9-k89d7" [8a72b3cc-863a-4a51-8592-f090d7de58cb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 21:42:04.998920  357831 system_pods.go:61] "etcd-no-preload-500648" [cafdfe7b-d749-4f0b-9ce1-4045e0dba5e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 21:42:04.998933  357831 system_pods.go:61] "kube-apiserver-no-preload-500648" [882b20c9-56f1-41e7-80a2-7781b05f021f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 21:42:04.998942  357831 system_pods.go:61] "kube-controller-manager-no-preload-500648" [d8746bd6-a884-4497-be4a-f88b4776cc19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 21:42:04.998952  357831 system_pods.go:61] "kube-proxy-tbcmd" [ef507fa3-fe13-47b2-909e-15a4d0544716] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 21:42:04.998958  357831 system_pods.go:61] "kube-scheduler-no-preload-500648" [6713250e-00ac-48db-ad2f-39b1867c00f3] Running
	I1205 21:42:04.998968  357831 system_pods.go:61] "metrics-server-6867b74b74-7xm6l" [0d8a7353-2449-4143-962e-fc837e598f56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:42:04.998979  357831 system_pods.go:61] "storage-provisioner" [a0d29dee-08f6-43f8-9d02-6bda96fe0c85] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 21:42:04.998988  357831 system_pods.go:74] duration metric: took 30.786075ms to wait for pod list to return data ...
	I1205 21:42:04.999002  357831 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:42:05.005560  357831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:42:05.005611  357831 node_conditions.go:123] node cpu capacity is 2
	I1205 21:42:05.005630  357831 node_conditions.go:105] duration metric: took 6.621222ms to run NodePressure ...
	I1205 21:42:05.005659  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:05.417060  357831 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 21:42:05.423873  357831 kubeadm.go:739] kubelet initialised
	I1205 21:42:05.423903  357831 kubeadm.go:740] duration metric: took 6.807257ms waiting for restarted kubelet to initialise ...
	I1205 21:42:05.423914  357831 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:42:05.429965  357831 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:07.440042  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:05.400253  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:07.401405  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:09.901336  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:05.941258  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:06.440780  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:06.940790  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:07.441097  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:07.941334  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:08.440670  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:08.941230  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:09.441317  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:09.941664  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:10.440620  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:08.325757  357296 crio.go:462] duration metric: took 1.41497545s to copy over tarball
	I1205 21:42:08.325937  357296 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 21:42:10.566636  357296 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.240649211s)
	I1205 21:42:10.566679  357296 crio.go:469] duration metric: took 2.240881092s to extract the tarball
	I1205 21:42:10.566690  357296 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 21:42:10.604048  357296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:42:10.648218  357296 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 21:42:10.648245  357296 cache_images.go:84] Images are preloaded, skipping loading
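
	[editor note] The two "crictl images --output json" runs above bracket the preload: before extracting the tarball the apiserver image is missing (crio.go:510), afterwards all images are reported preloaded (crio.go:514). A rough standalone equivalent of that check, assuming crictl's JSON output layout and using the image name from the log:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		// Minimal view of crictl's JSON: a list of images, each with repo tags.
		var resp struct {
			Images []struct {
				RepoTags []string `json:"repoTags"`
			} `json:"images"`
		}
		if err := json.Unmarshal(out, &resp); err != nil {
			panic(err)
		}
		want := "registry.k8s.io/kube-apiserver:v1.31.2"
		for _, img := range resp.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					fmt.Println("preloaded images present:", want)
					return
				}
			}
		}
		fmt.Println("not preloaded, would extract the tarball:", want)
	}
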
	I1205 21:42:10.648254  357296 kubeadm.go:934] updating node { 192.168.72.8 8443 v1.31.2 crio true true} ...
	I1205 21:42:10.648380  357296 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-425614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-425614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:42:10.648472  357296 ssh_runner.go:195] Run: crio config
	I1205 21:42:10.694426  357296 cni.go:84] Creating CNI manager for ""
	I1205 21:42:10.694457  357296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:42:10.694470  357296 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:42:10.694494  357296 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.8 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-425614 NodeName:embed-certs-425614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.8 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:42:10.694626  357296 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.8
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-425614"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.8"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.8"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:42:10.694700  357296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:42:10.707043  357296 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:42:10.707116  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:42:10.717088  357296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1205 21:42:10.735095  357296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:42:10.753994  357296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I1205 21:42:10.771832  357296 ssh_runner.go:195] Run: grep 192.168.72.8	control-plane.minikube.internal$ /etc/hosts
	I1205 21:42:10.776949  357296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.8	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:42:10.789761  357296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:42:10.937235  357296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:42:10.959030  357296 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614 for IP: 192.168.72.8
	I1205 21:42:10.959073  357296 certs.go:194] generating shared ca certs ...
	I1205 21:42:10.959107  357296 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:42:10.959307  357296 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:42:10.959366  357296 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:42:10.959378  357296 certs.go:256] generating profile certs ...
	I1205 21:42:10.959508  357296 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/client.key
	I1205 21:42:10.959581  357296 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/apiserver.key.a8dcad40
	I1205 21:42:10.959631  357296 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/proxy-client.key
	I1205 21:42:10.959747  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:42:10.959807  357296 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:42:10.959822  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:42:10.959855  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:42:10.959889  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:42:10.959924  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:42:10.959977  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:42:10.960886  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:42:10.999249  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:42:11.035379  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:42:11.069796  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:42:11.103144  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1205 21:42:11.144531  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 21:42:11.183637  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:42:11.208780  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 21:42:11.237378  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:42:11.262182  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:42:11.287003  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:42:11.311375  357296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:42:11.330529  357296 ssh_runner.go:195] Run: openssl version
	I1205 21:42:11.336346  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:42:11.347306  357296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:42:11.352107  357296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:42:11.352179  357296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:42:11.357939  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:42:11.369013  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:42:11.380244  357296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:42:11.384671  357296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:42:11.384747  357296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:42:11.390330  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:42:11.402029  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:42:11.413047  357296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:42:11.417617  357296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:42:11.417707  357296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:42:11.423562  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:42:11.434978  357296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:42:11.439887  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:42:11.446653  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:42:11.453390  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:42:11.460104  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:42:11.466281  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:42:11.472205  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
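
	[editor note] The "openssl x509 ... -checkend 86400" runs above verify that each control-plane certificate remains valid for at least another 24 hours before the restart proceeds. An equivalent check in Go, illustrative only; the path is one of the certs probed in the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
		raw, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			panic("no PEM data in " + path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// Same condition as -checkend 86400: still valid 24h from now?
		if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
			fmt.Println(path, "expires within 24h:", cert.NotAfter)
		} else {
			fmt.Println(path, "ok until", cert.NotAfter)
		}
	}
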
	I1205 21:42:11.478395  357296 kubeadm.go:392] StartCluster: {Name:embed-certs-425614 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31
.2 ClusterName:embed-certs-425614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:42:11.478534  357296 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:42:11.478604  357296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:42:11.519447  357296 cri.go:89] found id: ""
	I1205 21:42:11.519540  357296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:42:11.530882  357296 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:42:11.530915  357296 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:42:11.530967  357296 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:42:11.541349  357296 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:42:11.542457  357296 kubeconfig.go:125] found "embed-certs-425614" server: "https://192.168.72.8:8443"
	I1205 21:42:11.544588  357296 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:42:11.555107  357296 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.8
	I1205 21:42:11.555149  357296 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:42:11.555164  357296 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:42:11.555214  357296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:42:11.592787  357296 cri.go:89] found id: ""
	I1205 21:42:11.592880  357296 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:42:11.609965  357296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:42:11.623705  357296 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:42:11.623730  357296 kubeadm.go:157] found existing configuration files:
	
	I1205 21:42:11.623784  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:42:11.634267  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:42:11.634344  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:42:11.645579  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:42:11.655845  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:42:11.655932  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:42:11.667367  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:42:11.677450  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:42:11.677541  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:42:11.688484  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:42:11.698581  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:42:11.698665  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:42:11.709332  357296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:42:11.724079  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:11.850526  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:09.436733  357831 pod_ready.go:93] pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:09.436771  357831 pod_ready.go:82] duration metric: took 4.006772842s for pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:09.436787  357831 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:09.442948  357831 pod_ready.go:93] pod "etcd-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:09.442975  357831 pod_ready.go:82] duration metric: took 6.180027ms for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:09.442985  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:11.454117  357831 pod_ready.go:103] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:12.400229  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:14.401251  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:10.940676  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:11.441446  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:11.941429  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:12.441431  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:12.940947  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:13.441378  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:13.940664  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.441436  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.941528  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:15.441617  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:12.676884  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:13.049350  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:13.104083  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:13.151758  357296 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:42:13.151871  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:13.653003  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.152424  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.241811  357296 api_server.go:72] duration metric: took 1.09005484s to wait for apiserver process to appear ...
	I1205 21:42:14.241841  357296 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:42:14.241865  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:14.242492  357296 api_server.go:269] stopped: https://192.168.72.8:8443/healthz: Get "https://192.168.72.8:8443/healthz": dial tcp 192.168.72.8:8443: connect: connection refused
	I1205 21:42:14.742031  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:16.675226  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:16.675262  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:16.675277  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:16.689093  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:16.689130  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:16.742350  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:16.780046  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:16.780094  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
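
	[editor note] The probes above show the usual apiserver startup progression: connection refused while the static pod comes up, 403 for the anonymous probe before RBAC bootstrap roles exist, 500 while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, apiservice-discovery-controller) are still failing, and finally 200. A small standalone poller that reproduces the cadence, assuming anonymous access over insecure TLS for brevity (minikube's own check authenticates with the cluster's client certificates):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.72.8:8443/healthz" // endpoint from the log
		for {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("not reachable yet:", err)
			} else {
				fmt.Println("status:", resp.StatusCode)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return
				}
			}
			time.Sleep(500 * time.Millisecond) // roughly the retry interval seen above
		}
	}
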
	I1205 21:42:17.242752  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:17.248221  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:17.248293  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:13.807623  357831 pod_ready.go:103] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:13.955657  357831 pod_ready.go:93] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:13.955696  357831 pod_ready.go:82] duration metric: took 4.512701293s for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:13.955710  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:15.964035  357831 pod_ready.go:103] pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:17.464364  357831 pod_ready.go:93] pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:17.464400  357831 pod_ready.go:82] duration metric: took 3.508681036s for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.464416  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tbcmd" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.471083  357831 pod_ready.go:93] pod "kube-proxy-tbcmd" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:17.471112  357831 pod_ready.go:82] duration metric: took 6.68764ms for pod "kube-proxy-tbcmd" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.471127  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.477759  357831 pod_ready.go:93] pod "kube-scheduler-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:17.477792  357831 pod_ready.go:82] duration metric: took 6.655537ms for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.477805  357831 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace to be "Ready" ...
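
	[editor note] The pod_ready.go lines above wait up to 4m0s per system-critical pod for the Ready condition (metrics-server never reaches it, which is what later fails the test). A minimal client-go sketch of that wait; waitPodReady is a hypothetical helper, not minikube's code, and the kubeconfig path is the default one:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls the pod until its Ready condition is True or the timeout expires.
	func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true, func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitPodReady(context.Background(), cs, "kube-system", "metrics-server-6867b74b74-7xm6l", 4*time.Minute); err != nil {
			fmt.Println("pod not ready:", err)
		}
	}
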
	I1205 21:42:17.742750  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:17.750907  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:17.750945  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:18.242675  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:18.247883  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:18.247913  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:18.742494  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:18.748060  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:18.748095  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:19.242753  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:19.247456  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:19.247493  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:19.742029  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:19.747799  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:19.747830  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:20.242351  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:20.248627  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 200:
	ok
	I1205 21:42:20.257222  357296 api_server.go:141] control plane version: v1.31.2
	I1205 21:42:20.257260  357296 api_server.go:131] duration metric: took 6.015411765s to wait for apiserver health ...
	I1205 21:42:20.257273  357296 cni.go:84] Creating CNI manager for ""
	I1205 21:42:20.257281  357296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:42:20.259099  357296 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:42:16.899464  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:19.400536  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:15.940894  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:16.441373  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:16.940607  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:17.441640  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:17.941424  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:18.441485  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:18.941548  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:19.441297  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:19.940718  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:20.441175  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:20.260397  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:42:20.271889  357296 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 21:42:20.291125  357296 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:42:20.300276  357296 system_pods.go:59] 8 kube-system pods found
	I1205 21:42:20.300328  357296 system_pods.go:61] "coredns-7c65d6cfc9-kjcf8" [7a73d409-50b8-4e9c-a84d-bb497c6f068c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 21:42:20.300337  357296 system_pods.go:61] "etcd-embed-certs-425614" [39067a54-9f4e-4ce5-b48f-0d442a332902] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 21:42:20.300346  357296 system_pods.go:61] "kube-apiserver-embed-certs-425614" [cc3f918c-a257-4135-a5dd-af78e60bbf90] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 21:42:20.300352  357296 system_pods.go:61] "kube-controller-manager-embed-certs-425614" [bbcf99e6-54f9-44f5-a484-26997a4e5941] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 21:42:20.300359  357296 system_pods.go:61] "kube-proxy-jflgx" [77b6325b-0db8-41de-8c7e-6111d155704d] Running
	I1205 21:42:20.300366  357296 system_pods.go:61] "kube-scheduler-embed-certs-425614" [0615aea3-8e2c-4329-b89f-02c7fe9f6f7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 21:42:20.300377  357296 system_pods.go:61] "metrics-server-6867b74b74-dggmv" [c53aecb9-98a5-481a-84f3-96fd18815e14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:42:20.300380  357296 system_pods.go:61] "storage-provisioner" [d43b05e9-7ab8-4326-93b4-177aeb5ba02e] Running
	I1205 21:42:20.300388  357296 system_pods.go:74] duration metric: took 9.233104ms to wait for pod list to return data ...
	I1205 21:42:20.300396  357296 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:42:20.304455  357296 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:42:20.304484  357296 node_conditions.go:123] node cpu capacity is 2
	I1205 21:42:20.304498  357296 node_conditions.go:105] duration metric: took 4.096074ms to run NodePressure ...
	I1205 21:42:20.304519  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:20.571968  357296 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 21:42:20.577704  357296 kubeadm.go:739] kubelet initialised
	I1205 21:42:20.577730  357296 kubeadm.go:740] duration metric: took 5.727858ms waiting for restarted kubelet to initialise ...
	I1205 21:42:20.577741  357296 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:42:20.583872  357296 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.589835  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.589866  357296 pod_ready.go:82] duration metric: took 5.957984ms for pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.589878  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.589886  357296 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.596004  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "etcd-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.596038  357296 pod_ready.go:82] duration metric: took 6.144722ms for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.596049  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "etcd-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.596056  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.601686  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.601720  357296 pod_ready.go:82] duration metric: took 5.653369ms for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.601734  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.601742  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.694482  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.694515  357296 pod_ready.go:82] duration metric: took 92.763219ms for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.694524  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.694531  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jflgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:21.094672  357296 pod_ready.go:93] pod "kube-proxy-jflgx" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:21.094703  357296 pod_ready.go:82] duration metric: took 400.158324ms for pod "kube-proxy-jflgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:21.094714  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:19.485441  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:21.984845  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:21.900464  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:24.399362  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:20.941042  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:21.440840  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:21.941291  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:22.441298  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:22.941140  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:23.441157  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:23.940711  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:24.441126  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:24.941194  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:25.441239  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:23.101967  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:25.103066  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:27.103106  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:23.985150  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:25.985406  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:26.399494  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:28.399742  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:25.940650  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:26.440892  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:26.940734  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:27.441439  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:27.941025  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:28.441662  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:28.941200  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:29.440850  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:29.941090  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:30.441496  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:29.106277  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.101137  357296 pod_ready.go:93] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:30.101170  357296 pod_ready.go:82] duration metric: took 9.00644797s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:30.101199  357296 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:32.107886  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:27.985689  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.484153  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:32.484800  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.399854  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:32.400508  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:34.901319  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.941631  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:31.441522  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:31.940961  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:32.441547  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:32.940644  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:33.440711  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:33.941591  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:34.441457  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:34.941255  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:35.441478  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:34.108645  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:36.608124  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:34.984686  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:36.984823  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:37.400319  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:39.900110  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:35.941404  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:36.441453  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:36.941276  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:37.440624  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:37.941248  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:38.440773  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:38.940852  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:39.440975  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:39.940613  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:40.441409  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:38.608300  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:40.608878  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:39.483667  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:41.483884  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:41.900531  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:43.900867  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:40.941065  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:41.440940  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:41.941340  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:42.441333  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:42.941444  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:43.440657  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:43.941351  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:44.441039  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:44.941628  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:45.440942  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:43.107571  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:45.107803  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:47.108118  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:43.484581  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:45.485934  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:46.400053  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:48.902975  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:45.941474  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:46.441502  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:46.941071  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:47.441501  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:47.941353  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:48.441574  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:48.940650  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:49.441259  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:49.941249  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:50.441304  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:49.608563  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:52.108228  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:47.992612  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:50.484515  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:52.484930  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:51.399905  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:53.400794  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:50.941158  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:51.440651  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:51.941062  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:52.441434  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:52.940665  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:53.441387  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:53.940784  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:54.441549  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:54.941564  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:55.441202  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:42:55.441294  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:42:55.475973  358357 cri.go:89] found id: ""
	I1205 21:42:55.476011  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.476023  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:42:55.476032  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:42:55.476106  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:42:55.511119  358357 cri.go:89] found id: ""
	I1205 21:42:55.511149  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.511158  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:42:55.511164  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:42:55.511238  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:42:55.544659  358357 cri.go:89] found id: ""
	I1205 21:42:55.544700  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.544716  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:42:55.544726  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:42:55.544803  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:42:54.608219  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:57.107753  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:54.986439  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:57.484521  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:55.900101  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:58.399595  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:55.579789  358357 cri.go:89] found id: ""
	I1205 21:42:55.579826  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.579836  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:42:55.579843  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:42:55.579912  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:42:55.615309  358357 cri.go:89] found id: ""
	I1205 21:42:55.615348  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.615363  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:42:55.615371  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:42:55.615444  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:42:55.649520  358357 cri.go:89] found id: ""
	I1205 21:42:55.649551  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.649562  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:42:55.649569  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:42:55.649647  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:42:55.688086  358357 cri.go:89] found id: ""
	I1205 21:42:55.688120  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.688132  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:42:55.688139  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:42:55.688207  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:42:55.722901  358357 cri.go:89] found id: ""
	I1205 21:42:55.722932  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.722943  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:42:55.722955  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:42:55.722968  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:42:55.775746  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:42:55.775792  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:42:55.790317  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:42:55.790370  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:42:55.916541  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:42:55.916593  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:42:55.916608  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:42:55.991284  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:42:55.991350  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:42:58.534040  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:58.551747  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:42:58.551856  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:42:58.602423  358357 cri.go:89] found id: ""
	I1205 21:42:58.602465  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.602478  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:42:58.602493  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:42:58.602570  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:42:58.658410  358357 cri.go:89] found id: ""
	I1205 21:42:58.658442  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.658454  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:42:58.658462  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:42:58.658544  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:42:58.696967  358357 cri.go:89] found id: ""
	I1205 21:42:58.697005  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.697024  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:42:58.697032  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:42:58.697092  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:42:58.740924  358357 cri.go:89] found id: ""
	I1205 21:42:58.740958  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.740969  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:42:58.740977  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:42:58.741049  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:42:58.775613  358357 cri.go:89] found id: ""
	I1205 21:42:58.775656  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.775669  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:42:58.775677  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:42:58.775753  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:42:58.810565  358357 cri.go:89] found id: ""
	I1205 21:42:58.810606  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.810621  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:42:58.810630  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:42:58.810704  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:42:58.844616  358357 cri.go:89] found id: ""
	I1205 21:42:58.844649  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.844658  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:42:58.844664  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:42:58.844720  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:42:58.889234  358357 cri.go:89] found id: ""
	I1205 21:42:58.889270  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.889282  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:42:58.889297  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:42:58.889313  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:42:58.964712  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:42:58.964756  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:42:59.005004  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:42:59.005036  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:42:59.057585  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:42:59.057635  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:42:59.072115  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:42:59.072151  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:42:59.145425  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:42:59.108534  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:01.607610  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:59.485366  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:01.986049  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:00.400127  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:02.400257  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:04.899587  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:01.646046  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:01.659425  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:01.659517  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:01.695527  358357 cri.go:89] found id: ""
	I1205 21:43:01.695559  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.695568  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:01.695574  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:01.695636  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:01.731808  358357 cri.go:89] found id: ""
	I1205 21:43:01.731842  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.731854  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:01.731861  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:01.731937  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:01.765738  358357 cri.go:89] found id: ""
	I1205 21:43:01.765771  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.765789  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:01.765796  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:01.765859  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:01.801611  358357 cri.go:89] found id: ""
	I1205 21:43:01.801647  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.801657  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:01.801665  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:01.801732  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:01.839276  358357 cri.go:89] found id: ""
	I1205 21:43:01.839308  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.839317  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:01.839323  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:01.839385  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:01.875227  358357 cri.go:89] found id: ""
	I1205 21:43:01.875266  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.875279  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:01.875288  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:01.875350  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:01.913182  358357 cri.go:89] found id: ""
	I1205 21:43:01.913225  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.913238  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:01.913247  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:01.913312  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:01.952638  358357 cri.go:89] found id: ""
	I1205 21:43:01.952677  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.952701  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:01.952716  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:01.952734  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:01.998360  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:01.998401  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:02.049534  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:02.049588  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:02.064358  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:02.064389  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:02.136029  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:02.136060  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:02.136077  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:04.719271  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:04.735387  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:04.735490  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:04.769540  358357 cri.go:89] found id: ""
	I1205 21:43:04.769578  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.769590  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:04.769598  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:04.769679  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:04.803402  358357 cri.go:89] found id: ""
	I1205 21:43:04.803444  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.803460  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:04.803470  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:04.803538  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:04.839694  358357 cri.go:89] found id: ""
	I1205 21:43:04.839725  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.839739  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:04.839748  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:04.839820  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:04.874952  358357 cri.go:89] found id: ""
	I1205 21:43:04.874982  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.875001  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:04.875022  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:04.875086  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:04.910338  358357 cri.go:89] found id: ""
	I1205 21:43:04.910378  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.910390  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:04.910399  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:04.910464  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:04.946196  358357 cri.go:89] found id: ""
	I1205 21:43:04.946233  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.946245  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:04.946252  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:04.946319  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:04.982119  358357 cri.go:89] found id: ""
	I1205 21:43:04.982150  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.982164  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:04.982173  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:04.982245  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:05.018296  358357 cri.go:89] found id: ""
	I1205 21:43:05.018334  358357 logs.go:282] 0 containers: []
	W1205 21:43:05.018346  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:05.018359  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:05.018376  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:05.070674  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:05.070729  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:05.085822  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:05.085858  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:05.163359  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:05.163385  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:05.163400  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:05.243524  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:05.243581  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:03.608201  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:06.108243  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:03.992084  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:06.487041  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:06.900400  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:09.400212  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:07.785152  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:07.799248  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:07.799327  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:07.836150  358357 cri.go:89] found id: ""
	I1205 21:43:07.836204  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.836215  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:07.836222  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:07.836287  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:07.873025  358357 cri.go:89] found id: ""
	I1205 21:43:07.873059  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.873068  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:07.873074  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:07.873133  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:07.913228  358357 cri.go:89] found id: ""
	I1205 21:43:07.913257  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.913266  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:07.913272  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:07.913332  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:07.953284  358357 cri.go:89] found id: ""
	I1205 21:43:07.953316  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.953327  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:07.953337  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:07.953405  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:07.990261  358357 cri.go:89] found id: ""
	I1205 21:43:07.990295  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.990308  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:07.990317  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:07.990414  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:08.032002  358357 cri.go:89] found id: ""
	I1205 21:43:08.032029  358357 logs.go:282] 0 containers: []
	W1205 21:43:08.032037  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:08.032043  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:08.032095  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:08.066422  358357 cri.go:89] found id: ""
	I1205 21:43:08.066456  358357 logs.go:282] 0 containers: []
	W1205 21:43:08.066464  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:08.066471  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:08.066526  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:08.103696  358357 cri.go:89] found id: ""
	I1205 21:43:08.103732  358357 logs.go:282] 0 containers: []
	W1205 21:43:08.103745  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:08.103757  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:08.103793  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:08.157218  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:08.157264  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:08.172145  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:08.172191  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:08.247452  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:08.247479  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:08.247493  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:08.326928  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:08.326972  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:08.111002  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:10.608479  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:08.985124  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:10.985701  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:11.400591  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:13.898978  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:10.866350  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:10.880013  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:10.880084  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:10.914657  358357 cri.go:89] found id: ""
	I1205 21:43:10.914698  358357 logs.go:282] 0 containers: []
	W1205 21:43:10.914712  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:10.914721  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:10.914780  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:10.950154  358357 cri.go:89] found id: ""
	I1205 21:43:10.950187  358357 logs.go:282] 0 containers: []
	W1205 21:43:10.950196  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:10.950203  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:10.950267  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:10.985474  358357 cri.go:89] found id: ""
	I1205 21:43:10.985508  358357 logs.go:282] 0 containers: []
	W1205 21:43:10.985520  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:10.985528  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:10.985602  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:11.021324  358357 cri.go:89] found id: ""
	I1205 21:43:11.021352  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.021361  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:11.021367  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:11.021429  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:11.056112  358357 cri.go:89] found id: ""
	I1205 21:43:11.056140  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.056149  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:11.056155  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:11.056210  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:11.090696  358357 cri.go:89] found id: ""
	I1205 21:43:11.090729  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.090739  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:11.090746  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:11.090809  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:11.126706  358357 cri.go:89] found id: ""
	I1205 21:43:11.126741  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.126754  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:11.126762  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:11.126832  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:11.162759  358357 cri.go:89] found id: ""
	I1205 21:43:11.162790  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.162800  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:11.162812  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:11.162827  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:11.215941  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:11.215995  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:11.229338  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:11.229378  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:11.300339  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:11.300373  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:11.300389  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:11.378797  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:11.378852  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:13.919092  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:13.935332  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:13.935418  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:13.970759  358357 cri.go:89] found id: ""
	I1205 21:43:13.970790  358357 logs.go:282] 0 containers: []
	W1205 21:43:13.970802  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:13.970810  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:13.970879  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:14.017105  358357 cri.go:89] found id: ""
	I1205 21:43:14.017140  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.017152  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:14.017159  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:14.017228  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:14.056797  358357 cri.go:89] found id: ""
	I1205 21:43:14.056831  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.056843  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:14.056850  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:14.056922  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:14.090687  358357 cri.go:89] found id: ""
	I1205 21:43:14.090727  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.090740  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:14.090747  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:14.090808  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:14.128280  358357 cri.go:89] found id: ""
	I1205 21:43:14.128320  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.128333  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:14.128341  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:14.128410  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:14.167386  358357 cri.go:89] found id: ""
	I1205 21:43:14.167420  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.167428  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:14.167435  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:14.167498  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:14.203376  358357 cri.go:89] found id: ""
	I1205 21:43:14.203408  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.203419  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:14.203427  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:14.203495  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:14.238271  358357 cri.go:89] found id: ""
	I1205 21:43:14.238308  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.238319  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:14.238333  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:14.238353  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:14.290565  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:14.290609  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:14.305062  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:14.305106  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:14.375343  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:14.375375  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:14.375392  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:14.456771  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:14.456826  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:13.107746  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:15.607571  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:13.484545  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:15.485414  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:15.899518  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:17.900034  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:16.997441  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:17.011258  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:17.011344  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:17.045557  358357 cri.go:89] found id: ""
	I1205 21:43:17.045599  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.045613  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:17.045623  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:17.045689  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:17.080094  358357 cri.go:89] found id: ""
	I1205 21:43:17.080131  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.080144  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:17.080152  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:17.080228  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:17.113336  358357 cri.go:89] found id: ""
	I1205 21:43:17.113375  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.113387  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:17.113396  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:17.113461  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:17.147392  358357 cri.go:89] found id: ""
	I1205 21:43:17.147431  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.147443  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:17.147452  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:17.147521  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:17.182308  358357 cri.go:89] found id: ""
	I1205 21:43:17.182359  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.182370  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:17.182376  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:17.182443  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:17.216848  358357 cri.go:89] found id: ""
	I1205 21:43:17.216886  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.216917  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:17.216926  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:17.216999  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:17.251515  358357 cri.go:89] found id: ""
	I1205 21:43:17.251553  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.251565  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:17.251573  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:17.251645  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:17.284664  358357 cri.go:89] found id: ""
	I1205 21:43:17.284691  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.284700  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:17.284711  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:17.284723  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:17.335642  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:17.335685  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:17.349100  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:17.349133  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:17.427338  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:17.427362  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:17.427378  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:17.507314  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:17.507366  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:20.049650  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:20.063058  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:20.063152  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:20.096637  358357 cri.go:89] found id: ""
	I1205 21:43:20.096674  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.096687  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:20.096696  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:20.096761  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:20.134010  358357 cri.go:89] found id: ""
	I1205 21:43:20.134041  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.134054  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:20.134061  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:20.134128  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:20.173232  358357 cri.go:89] found id: ""
	I1205 21:43:20.173272  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.173292  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:20.173301  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:20.173374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:20.208411  358357 cri.go:89] found id: ""
	I1205 21:43:20.208441  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.208451  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:20.208457  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:20.208515  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:20.244682  358357 cri.go:89] found id: ""
	I1205 21:43:20.244715  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.244729  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:20.244737  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:20.244835  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:20.278659  358357 cri.go:89] found id: ""
	I1205 21:43:20.278692  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.278701  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:20.278708  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:20.278773  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:20.313894  358357 cri.go:89] found id: ""
	I1205 21:43:20.313963  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.313978  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:20.313986  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:20.314049  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:20.351924  358357 cri.go:89] found id: ""
	I1205 21:43:20.351957  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.351966  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:20.351976  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:20.351992  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:20.365712  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:20.365752  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:20.448062  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:20.448096  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:20.448115  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:20.530550  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:20.530593  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:17.611740  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:20.107637  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:22.108801  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:17.985246  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:19.985378  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:22.484721  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:20.400560  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:22.400956  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:24.899642  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:20.573612  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:20.573644  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:23.128630  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:23.141915  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:23.141991  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:23.177986  358357 cri.go:89] found id: ""
	I1205 21:43:23.178024  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.178033  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:23.178040  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:23.178104  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:23.211957  358357 cri.go:89] found id: ""
	I1205 21:43:23.211995  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.212005  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:23.212016  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:23.212075  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:23.247747  358357 cri.go:89] found id: ""
	I1205 21:43:23.247775  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.247783  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:23.247789  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:23.247847  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:23.282556  358357 cri.go:89] found id: ""
	I1205 21:43:23.282602  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.282616  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:23.282624  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:23.282689  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:23.317629  358357 cri.go:89] found id: ""
	I1205 21:43:23.317661  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.317670  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:23.317676  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:23.317749  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:23.352085  358357 cri.go:89] found id: ""
	I1205 21:43:23.352114  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.352123  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:23.352130  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:23.352190  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:23.391452  358357 cri.go:89] found id: ""
	I1205 21:43:23.391483  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.391495  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:23.391503  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:23.391587  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:23.427325  358357 cri.go:89] found id: ""
	I1205 21:43:23.427361  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.427370  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:23.427380  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:23.427395  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:23.502923  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:23.502954  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:23.502970  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:23.588869  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:23.588918  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:23.626986  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:23.627029  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:23.677290  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:23.677343  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:24.607867  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.609049  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:24.484755  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.486039  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.899834  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:29.400266  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.191893  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:26.206289  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:26.206376  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:26.244696  358357 cri.go:89] found id: ""
	I1205 21:43:26.244726  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.244739  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:26.244748  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:26.244818  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:26.277481  358357 cri.go:89] found id: ""
	I1205 21:43:26.277509  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.277519  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:26.277526  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:26.277602  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:26.312648  358357 cri.go:89] found id: ""
	I1205 21:43:26.312771  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.312807  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:26.312819  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:26.312897  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:26.348986  358357 cri.go:89] found id: ""
	I1205 21:43:26.349017  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.349026  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:26.349034  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:26.349111  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:26.382552  358357 cri.go:89] found id: ""
	I1205 21:43:26.382582  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.382591  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:26.382597  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:26.382667  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:26.419741  358357 cri.go:89] found id: ""
	I1205 21:43:26.419780  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.419791  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:26.419798  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:26.419860  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:26.458604  358357 cri.go:89] found id: ""
	I1205 21:43:26.458639  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.458649  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:26.458656  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:26.458716  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:26.492547  358357 cri.go:89] found id: ""
	I1205 21:43:26.492575  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.492589  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:26.492600  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:26.492614  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:26.543734  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:26.543784  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:26.557495  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:26.557529  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:26.632104  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:26.632135  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:26.632155  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:26.711876  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:26.711929  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:29.251703  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:29.265023  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:29.265108  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:29.301837  358357 cri.go:89] found id: ""
	I1205 21:43:29.301875  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.301910  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:29.301922  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:29.301994  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:29.335968  358357 cri.go:89] found id: ""
	I1205 21:43:29.336001  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.336015  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:29.336024  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:29.336090  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:29.370471  358357 cri.go:89] found id: ""
	I1205 21:43:29.370500  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.370512  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:29.370521  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:29.370585  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:29.406408  358357 cri.go:89] found id: ""
	I1205 21:43:29.406443  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.406456  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:29.406464  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:29.406537  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:29.442657  358357 cri.go:89] found id: ""
	I1205 21:43:29.442689  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.442700  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:29.442708  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:29.442776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:29.485257  358357 cri.go:89] found id: ""
	I1205 21:43:29.485291  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.485302  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:29.485311  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:29.485374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:29.520186  358357 cri.go:89] found id: ""
	I1205 21:43:29.520218  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.520229  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:29.520238  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:29.520312  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:29.555875  358357 cri.go:89] found id: ""
	I1205 21:43:29.555908  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.555920  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:29.555931  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:29.555949  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:29.569277  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:29.569312  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:29.643777  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:29.643810  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:29.643828  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:29.721856  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:29.721932  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:29.763402  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:29.763437  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:29.108987  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:31.608186  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:28.486609  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:30.985559  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:31.899471  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:34.399084  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:32.316122  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:32.329958  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:32.330122  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:32.362518  358357 cri.go:89] found id: ""
	I1205 21:43:32.362562  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.362575  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:32.362585  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:32.362655  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:32.396558  358357 cri.go:89] found id: ""
	I1205 21:43:32.396650  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.396668  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:32.396683  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:32.396759  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:32.430931  358357 cri.go:89] found id: ""
	I1205 21:43:32.430958  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.430966  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:32.430972  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:32.431025  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:32.468557  358357 cri.go:89] found id: ""
	I1205 21:43:32.468597  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.468607  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:32.468613  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:32.468698  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:32.503548  358357 cri.go:89] found id: ""
	I1205 21:43:32.503586  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.503599  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:32.503608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:32.503680  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:32.538516  358357 cri.go:89] found id: ""
	I1205 21:43:32.538559  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.538573  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:32.538582  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:32.538658  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:32.570768  358357 cri.go:89] found id: ""
	I1205 21:43:32.570804  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.570817  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:32.570886  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:32.570963  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:32.604812  358357 cri.go:89] found id: ""
	I1205 21:43:32.604851  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.604864  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:32.604876  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:32.604899  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:32.667787  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:32.667831  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:32.681437  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:32.681472  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:32.761208  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:32.761235  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:32.761249  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:32.844838  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:32.844882  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:35.386488  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:35.401884  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:35.401987  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:35.437976  358357 cri.go:89] found id: ""
	I1205 21:43:35.438007  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.438017  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:35.438023  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:35.438089  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:35.478157  358357 cri.go:89] found id: ""
	I1205 21:43:35.478202  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.478214  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:35.478222  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:35.478292  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:35.516671  358357 cri.go:89] found id: ""
	I1205 21:43:35.516717  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.516731  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:35.516805  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:35.516897  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:35.551255  358357 cri.go:89] found id: ""
	I1205 21:43:35.551284  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.551295  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:35.551302  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:35.551357  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:34.108153  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:36.108668  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:32.986075  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:35.484135  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:37.485074  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:36.399714  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:38.900550  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:35.588294  358357 cri.go:89] found id: ""
	I1205 21:43:35.588325  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.588334  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:35.588341  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:35.588405  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:35.622659  358357 cri.go:89] found id: ""
	I1205 21:43:35.622691  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.622700  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:35.622707  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:35.622774  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:35.656864  358357 cri.go:89] found id: ""
	I1205 21:43:35.656893  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.656901  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:35.656908  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:35.656961  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:35.697507  358357 cri.go:89] found id: ""
	I1205 21:43:35.697554  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.697567  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:35.697579  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:35.697599  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:35.745717  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:35.745758  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:35.759004  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:35.759036  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:35.828958  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:35.828992  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:35.829010  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:35.905023  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:35.905063  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:38.445492  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:38.459922  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:38.460006  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:38.495791  358357 cri.go:89] found id: ""
	I1205 21:43:38.495829  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.495840  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:38.495849  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:38.495918  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:38.530056  358357 cri.go:89] found id: ""
	I1205 21:43:38.530088  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.530097  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:38.530104  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:38.530177  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:38.566865  358357 cri.go:89] found id: ""
	I1205 21:43:38.566896  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.566905  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:38.566912  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:38.566983  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:38.600870  358357 cri.go:89] found id: ""
	I1205 21:43:38.600905  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.600918  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:38.600926  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:38.600995  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:38.639270  358357 cri.go:89] found id: ""
	I1205 21:43:38.639308  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.639317  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:38.639324  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:38.639395  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:38.678671  358357 cri.go:89] found id: ""
	I1205 21:43:38.678720  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.678736  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:38.678745  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:38.678812  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:38.715126  358357 cri.go:89] found id: ""
	I1205 21:43:38.715160  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.715169  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:38.715176  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:38.715236  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:38.750621  358357 cri.go:89] found id: ""
	I1205 21:43:38.750660  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.750674  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:38.750688  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:38.750706  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:38.801336  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:38.801386  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:38.817206  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:38.817243  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:38.899496  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:38.899526  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:38.899542  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:38.987043  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:38.987096  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:38.608744  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.107606  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:39.486171  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.984199  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.400104  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:43.898622  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.535073  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:41.550469  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:41.550543  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:41.591727  358357 cri.go:89] found id: ""
	I1205 21:43:41.591768  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.591781  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:41.591790  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:41.591861  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:41.628657  358357 cri.go:89] found id: ""
	I1205 21:43:41.628691  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.628703  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:41.628711  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:41.628782  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:41.674165  358357 cri.go:89] found id: ""
	I1205 21:43:41.674210  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.674224  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:41.674238  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:41.674318  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:41.713785  358357 cri.go:89] found id: ""
	I1205 21:43:41.713836  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.713856  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:41.713866  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:41.713959  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:41.752119  358357 cri.go:89] found id: ""
	I1205 21:43:41.752152  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.752162  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:41.752169  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:41.752224  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:41.787379  358357 cri.go:89] found id: ""
	I1205 21:43:41.787414  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.787427  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:41.787439  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:41.787517  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:41.827473  358357 cri.go:89] found id: ""
	I1205 21:43:41.827505  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.827516  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:41.827523  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:41.827580  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:41.864685  358357 cri.go:89] found id: ""
	I1205 21:43:41.864724  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.864737  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:41.864750  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:41.864767  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:41.919751  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:41.919797  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:41.933494  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:41.933527  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:42.007384  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:42.007478  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:42.007516  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:42.085929  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:42.085974  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:44.625416  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:44.640399  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:44.640466  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:44.676232  358357 cri.go:89] found id: ""
	I1205 21:43:44.676279  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.676292  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:44.676302  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:44.676386  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:44.714304  358357 cri.go:89] found id: ""
	I1205 21:43:44.714345  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.714358  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:44.714368  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:44.714438  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:44.748091  358357 cri.go:89] found id: ""
	I1205 21:43:44.748130  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.748141  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:44.748149  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:44.748225  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:44.789620  358357 cri.go:89] found id: ""
	I1205 21:43:44.789712  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.789737  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:44.789746  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:44.789808  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:44.829941  358357 cri.go:89] found id: ""
	I1205 21:43:44.829987  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.829999  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:44.830008  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:44.830080  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:44.876378  358357 cri.go:89] found id: ""
	I1205 21:43:44.876412  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.876424  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:44.876433  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:44.876503  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:44.913556  358357 cri.go:89] found id: ""
	I1205 21:43:44.913590  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.913602  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:44.913610  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:44.913676  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:44.947592  358357 cri.go:89] found id: ""
	I1205 21:43:44.947625  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.947634  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:44.947643  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:44.947658  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:44.960447  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:44.960478  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:45.035679  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:45.035716  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:45.035731  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:45.115015  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:45.115055  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:45.152866  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:45.152901  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:43.108800  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:45.109600  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:44.483302  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:46.484569  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:45.899283  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:47.900475  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:47.703949  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:47.717705  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:47.717775  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:47.753877  358357 cri.go:89] found id: ""
	I1205 21:43:47.753920  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.753933  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:47.753946  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:47.754006  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:47.790673  358357 cri.go:89] found id: ""
	I1205 21:43:47.790707  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.790718  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:47.790725  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:47.790784  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:47.829957  358357 cri.go:89] found id: ""
	I1205 21:43:47.829999  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.830013  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:47.830021  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:47.830094  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:47.869182  358357 cri.go:89] found id: ""
	I1205 21:43:47.869221  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.869235  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:47.869251  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:47.869337  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:47.906549  358357 cri.go:89] found id: ""
	I1205 21:43:47.906582  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.906592  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:47.906598  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:47.906674  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:47.944594  358357 cri.go:89] found id: ""
	I1205 21:43:47.944622  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.944631  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:47.944637  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:47.944699  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:47.981461  358357 cri.go:89] found id: ""
	I1205 21:43:47.981499  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.981512  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:47.981520  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:47.981593  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:48.016561  358357 cri.go:89] found id: ""
	I1205 21:43:48.016597  358357 logs.go:282] 0 containers: []
	W1205 21:43:48.016607  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:48.016617  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:48.016631  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:48.097690  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:48.097740  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:48.140272  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:48.140318  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:48.194365  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:48.194415  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:48.208715  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:48.208750  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:48.283159  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
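	(Annotation, not part of the captured log.) The cycle above repeats because every crictl listing comes back empty and `kubectl describe nodes` keeps failing with "connection refused" on localhost:8443, i.e. the kube-apiserver container never came up on this old-k8s-version node. A minimal sketch of the manual checks that mirror what the log-gathering loop is probing, assuming shell access to the minikube VM (commands only; the port 8443 and unit names are taken from the log itself):
	  # same listing minikube runs above; empty output means no apiserver container exists
	  sudo crictl ps -a --name=kube-apiserver
	  # is anything listening on the apiserver port referenced in the errors?
	  sudo ss -ltn 'sport = :8443'
	  # kubelet logs usually show why the static control-plane pods failed to start
	  sudo journalctl -u kubelet -n 100 --no-pager
	  # reproduces the same "connection refused" seen in the describe-nodes failures
	  curl -k https://localhost:8443/healthz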
	I1205 21:43:47.607945  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.108918  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:48.984798  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.986257  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.399207  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:52.899857  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:54.899976  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.784026  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:50.812440  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:50.812524  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:50.866971  358357 cri.go:89] found id: ""
	I1205 21:43:50.867009  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.867022  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:50.867030  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:50.867100  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:50.910640  358357 cri.go:89] found id: ""
	I1205 21:43:50.910675  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.910686  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:50.910692  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:50.910767  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:50.944766  358357 cri.go:89] found id: ""
	I1205 21:43:50.944795  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.944803  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:50.944811  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:50.944880  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:50.978126  358357 cri.go:89] found id: ""
	I1205 21:43:50.978167  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.978178  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:50.978185  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:50.978250  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:51.015639  358357 cri.go:89] found id: ""
	I1205 21:43:51.015682  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.015693  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:51.015700  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:51.015776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:51.050114  358357 cri.go:89] found id: ""
	I1205 21:43:51.050156  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.050166  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:51.050180  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:51.050244  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:51.088492  358357 cri.go:89] found id: ""
	I1205 21:43:51.088523  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.088533  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:51.088540  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:51.088599  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:51.125732  358357 cri.go:89] found id: ""
	I1205 21:43:51.125768  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.125778  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:51.125789  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:51.125803  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:51.178278  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:51.178325  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:51.192954  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:51.192990  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:51.263378  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:51.263403  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:51.263416  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:51.341416  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:51.341463  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:53.882599  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:53.895846  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:53.895961  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:53.929422  358357 cri.go:89] found id: ""
	I1205 21:43:53.929465  358357 logs.go:282] 0 containers: []
	W1205 21:43:53.929480  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:53.929490  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:53.929568  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:53.965935  358357 cri.go:89] found id: ""
	I1205 21:43:53.965976  358357 logs.go:282] 0 containers: []
	W1205 21:43:53.965990  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:53.966001  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:53.966075  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:54.011360  358357 cri.go:89] found id: ""
	I1205 21:43:54.011394  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.011406  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:54.011412  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:54.011483  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:54.049333  358357 cri.go:89] found id: ""
	I1205 21:43:54.049368  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.049377  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:54.049385  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:54.049445  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:54.087228  358357 cri.go:89] found id: ""
	I1205 21:43:54.087266  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.087279  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:54.087287  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:54.087348  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:54.122795  358357 cri.go:89] found id: ""
	I1205 21:43:54.122832  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.122845  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:54.122853  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:54.122914  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:54.157622  358357 cri.go:89] found id: ""
	I1205 21:43:54.157657  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.157666  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:54.157672  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:54.157734  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:54.195574  358357 cri.go:89] found id: ""
	I1205 21:43:54.195610  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.195624  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:54.195638  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:54.195659  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:54.235353  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:54.235403  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:54.292275  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:54.292338  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:54.306808  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:54.306842  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:54.380414  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:54.380440  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:54.380455  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:52.608190  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:54.609219  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:57.109413  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:53.484775  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:55.985011  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:57.402445  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:59.900093  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:56.956848  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:56.969840  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:56.969954  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:57.004299  358357 cri.go:89] found id: ""
	I1205 21:43:57.004405  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.004426  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:57.004434  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:57.004510  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:57.039150  358357 cri.go:89] found id: ""
	I1205 21:43:57.039176  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.039185  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:57.039192  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:57.039245  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:57.075259  358357 cri.go:89] found id: ""
	I1205 21:43:57.075299  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.075313  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:57.075331  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:57.075407  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:57.111445  358357 cri.go:89] found id: ""
	I1205 21:43:57.111474  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.111492  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:57.111500  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:57.111580  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:57.152495  358357 cri.go:89] found id: ""
	I1205 21:43:57.152527  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.152536  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:57.152548  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:57.152606  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:57.188070  358357 cri.go:89] found id: ""
	I1205 21:43:57.188106  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.188119  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:57.188126  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:57.188198  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:57.222213  358357 cri.go:89] found id: ""
	I1205 21:43:57.222245  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.222260  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:57.222268  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:57.222354  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:57.254072  358357 cri.go:89] found id: ""
	I1205 21:43:57.254101  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.254110  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:57.254120  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:57.254136  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:57.307411  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:57.307456  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:57.323095  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:57.323130  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:57.400894  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:57.400928  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:57.400951  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:57.479628  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:57.479670  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:00.018936  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:00.032067  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:00.032149  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:00.065807  358357 cri.go:89] found id: ""
	I1205 21:44:00.065835  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.065844  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:00.065851  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:00.065931  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:00.100810  358357 cri.go:89] found id: ""
	I1205 21:44:00.100839  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.100847  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:00.100854  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:00.100920  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:00.136341  358357 cri.go:89] found id: ""
	I1205 21:44:00.136375  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.136388  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:00.136396  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:00.136454  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:00.173170  358357 cri.go:89] found id: ""
	I1205 21:44:00.173206  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.173227  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:00.173235  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:00.173332  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:00.208319  358357 cri.go:89] found id: ""
	I1205 21:44:00.208351  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.208363  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:00.208371  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:00.208438  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:00.250416  358357 cri.go:89] found id: ""
	I1205 21:44:00.250449  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.250463  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:00.250474  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:00.250546  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:00.285170  358357 cri.go:89] found id: ""
	I1205 21:44:00.285200  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.285212  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:00.285221  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:00.285290  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:00.320837  358357 cri.go:89] found id: ""
	I1205 21:44:00.320870  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.320879  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:00.320889  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:00.320901  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:00.334341  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:00.334375  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:00.400547  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:00.400575  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:00.400592  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:00.476133  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:00.476181  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:00.514760  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:00.514795  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:59.606994  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:01.608870  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:58.484178  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:00.484913  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:02.399767  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:04.900007  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:03.067793  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:03.081940  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:03.082023  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:03.118846  358357 cri.go:89] found id: ""
	I1205 21:44:03.118886  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.118897  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:03.118905  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:03.118962  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:03.156092  358357 cri.go:89] found id: ""
	I1205 21:44:03.156128  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.156140  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:03.156148  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:03.156219  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:03.189783  358357 cri.go:89] found id: ""
	I1205 21:44:03.189824  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.189837  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:03.189845  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:03.189913  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:03.225034  358357 cri.go:89] found id: ""
	I1205 21:44:03.225069  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.225081  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:03.225095  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:03.225177  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:03.258959  358357 cri.go:89] found id: ""
	I1205 21:44:03.258991  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.259003  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:03.259011  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:03.259075  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:03.292871  358357 cri.go:89] found id: ""
	I1205 21:44:03.292907  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.292920  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:03.292927  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:03.292983  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:03.327659  358357 cri.go:89] found id: ""
	I1205 21:44:03.327707  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.327730  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:03.327738  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:03.327810  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:03.369576  358357 cri.go:89] found id: ""
	I1205 21:44:03.369614  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.369627  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:03.369641  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:03.369656  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:03.424527  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:03.424580  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:03.438199  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:03.438231  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:03.509107  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:03.509139  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:03.509158  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:03.595637  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:03.595717  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:04.108126  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:06.109347  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:02.984401  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:04.987542  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:07.484630  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:06.900439  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:09.400464  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:06.135947  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:06.149530  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:06.149602  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:06.185659  358357 cri.go:89] found id: ""
	I1205 21:44:06.185692  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.185702  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:06.185709  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:06.185775  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:06.223238  358357 cri.go:89] found id: ""
	I1205 21:44:06.223281  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.223291  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:06.223298  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:06.223357  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:06.261842  358357 cri.go:89] found id: ""
	I1205 21:44:06.261884  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.261911  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:06.261920  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:06.261996  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:06.304416  358357 cri.go:89] found id: ""
	I1205 21:44:06.304455  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.304466  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:06.304475  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:06.304554  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:06.339676  358357 cri.go:89] found id: ""
	I1205 21:44:06.339711  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.339723  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:06.339732  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:06.339785  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:06.375594  358357 cri.go:89] found id: ""
	I1205 21:44:06.375630  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.375640  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:06.375647  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:06.375722  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:06.410953  358357 cri.go:89] found id: ""
	I1205 21:44:06.410986  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.410996  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:06.411002  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:06.411069  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:06.445559  358357 cri.go:89] found id: ""
	I1205 21:44:06.445590  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.445603  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:06.445617  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:06.445634  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:06.497474  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:06.497534  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:06.512032  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:06.512065  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:06.582809  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:06.582845  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:06.582862  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:06.663652  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:06.663696  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
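The cycle above repeats the same fixed set of node-level diagnostics over SSH (kubelet and CRI-O journals, dmesg, "describe nodes", and a container listing). As a minimal sketch, assuming shell access to the minikube node, the same data can be gathered manually with the commands the log records:

    # kubelet and CRI-O service logs (last 400 lines each)
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    # recent kernel warnings and errors
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    # node description via the bundled kubectl (fails here because the apiserver is down)
    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
    # list all containers, falling back to docker if crictl is unavailable
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a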
	I1205 21:44:09.204305  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:09.217648  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:09.217738  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:09.255398  358357 cri.go:89] found id: ""
	I1205 21:44:09.255441  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.255454  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:09.255463  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:09.255533  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:09.290268  358357 cri.go:89] found id: ""
	I1205 21:44:09.290296  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.290310  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:09.290316  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:09.290384  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:09.324546  358357 cri.go:89] found id: ""
	I1205 21:44:09.324586  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.324599  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:09.324608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:09.324684  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:09.358619  358357 cri.go:89] found id: ""
	I1205 21:44:09.358665  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.358677  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:09.358686  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:09.358757  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:09.395697  358357 cri.go:89] found id: ""
	I1205 21:44:09.395736  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.395749  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:09.395758  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:09.395838  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:09.437064  358357 cri.go:89] found id: ""
	I1205 21:44:09.437099  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.437108  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:09.437115  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:09.437172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:09.472330  358357 cri.go:89] found id: ""
	I1205 21:44:09.472368  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.472380  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:09.472388  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:09.472460  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:09.507468  358357 cri.go:89] found id: ""
	I1205 21:44:09.507510  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.507524  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:09.507538  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:09.507555  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:09.583640  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:09.583683  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:09.625830  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:09.625876  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:09.681668  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:09.681720  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:09.695305  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:09.695346  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:09.770136  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:08.608008  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:10.608715  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:09.485975  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:11.983682  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:11.899933  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:14.399690  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:12.270576  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:12.287283  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:12.287367  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:12.320855  358357 cri.go:89] found id: ""
	I1205 21:44:12.320890  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.320902  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:12.320911  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:12.320981  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:12.354550  358357 cri.go:89] found id: ""
	I1205 21:44:12.354595  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.354608  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:12.354617  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:12.354685  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:12.388487  358357 cri.go:89] found id: ""
	I1205 21:44:12.388519  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.388532  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:12.388542  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:12.388600  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:12.424338  358357 cri.go:89] found id: ""
	I1205 21:44:12.424366  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.424375  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:12.424382  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:12.424448  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:12.465997  358357 cri.go:89] found id: ""
	I1205 21:44:12.466028  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.466038  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:12.466044  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:12.466111  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:12.503567  358357 cri.go:89] found id: ""
	I1205 21:44:12.503602  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.503616  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:12.503625  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:12.503700  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:12.538669  358357 cri.go:89] found id: ""
	I1205 21:44:12.538696  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.538705  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:12.538711  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:12.538763  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:12.576375  358357 cri.go:89] found id: ""
	I1205 21:44:12.576416  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.576429  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:12.576442  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:12.576458  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:12.625471  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:12.625512  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:12.639689  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:12.639729  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:12.710873  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:12.710896  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:12.710936  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:12.789800  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:12.789841  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:15.331451  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:15.344354  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:15.344441  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:15.378596  358357 cri.go:89] found id: ""
	I1205 21:44:15.378631  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.378640  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:15.378647  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:15.378718  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:15.418342  358357 cri.go:89] found id: ""
	I1205 21:44:15.418373  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.418386  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:15.418394  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:15.418461  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:15.454130  358357 cri.go:89] found id: ""
	I1205 21:44:15.454167  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.454179  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:15.454187  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:15.454269  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:15.490777  358357 cri.go:89] found id: ""
	I1205 21:44:15.490813  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.490824  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:15.490831  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:15.490887  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:15.523706  358357 cri.go:89] found id: ""
	I1205 21:44:15.523747  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.523760  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:15.523768  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:15.523839  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:15.559019  358357 cri.go:89] found id: ""
	I1205 21:44:15.559049  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.559058  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:15.559065  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:15.559121  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:13.107960  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:15.607620  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:13.984413  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:15.984615  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:16.401714  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:18.900883  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:15.592611  358357 cri.go:89] found id: ""
	I1205 21:44:15.592640  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.592649  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:15.592655  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:15.592707  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:15.628295  358357 cri.go:89] found id: ""
	I1205 21:44:15.628333  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.628344  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:15.628354  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:15.628366  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:15.711123  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:15.711174  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:15.757486  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:15.757519  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:15.805750  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:15.805797  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:15.820685  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:15.820722  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:15.887073  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:18.388126  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:18.403082  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:18.403165  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:18.436195  358357 cri.go:89] found id: ""
	I1205 21:44:18.436230  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.436243  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:18.436255  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:18.436346  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:18.471756  358357 cri.go:89] found id: ""
	I1205 21:44:18.471788  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.471797  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:18.471804  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:18.471863  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:18.510693  358357 cri.go:89] found id: ""
	I1205 21:44:18.510741  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.510754  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:18.510763  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:18.510831  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:18.551976  358357 cri.go:89] found id: ""
	I1205 21:44:18.552014  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.552027  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:18.552036  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:18.552105  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:18.587679  358357 cri.go:89] found id: ""
	I1205 21:44:18.587716  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.587729  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:18.587738  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:18.587810  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:18.631487  358357 cri.go:89] found id: ""
	I1205 21:44:18.631519  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.631529  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:18.631547  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:18.631620  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:18.663618  358357 cri.go:89] found id: ""
	I1205 21:44:18.663646  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.663656  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:18.663665  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:18.663725  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:18.697864  358357 cri.go:89] found id: ""
	I1205 21:44:18.697894  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.697929  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:18.697943  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:18.697960  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:18.710777  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:18.710808  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:18.784195  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:18.784222  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:18.784241  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:18.863023  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:18.863071  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:18.903228  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:18.903267  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:18.106883  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:20.107752  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:22.110346  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:18.484897  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:20.983954  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:21.399201  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:23.400564  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:21.454547  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:21.468048  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:21.468131  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:21.501472  358357 cri.go:89] found id: ""
	I1205 21:44:21.501503  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.501512  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:21.501518  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:21.501576  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:21.536522  358357 cri.go:89] found id: ""
	I1205 21:44:21.536564  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.536579  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:21.536589  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:21.536653  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:21.570924  358357 cri.go:89] found id: ""
	I1205 21:44:21.570955  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.570965  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:21.570971  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:21.571039  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:21.607649  358357 cri.go:89] found id: ""
	I1205 21:44:21.607678  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.607688  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:21.607697  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:21.607766  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:21.647025  358357 cri.go:89] found id: ""
	I1205 21:44:21.647052  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.647061  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:21.647067  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:21.647118  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:21.684418  358357 cri.go:89] found id: ""
	I1205 21:44:21.684460  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.684472  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:21.684481  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:21.684554  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:21.722093  358357 cri.go:89] found id: ""
	I1205 21:44:21.722129  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.722141  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:21.722149  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:21.722208  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:21.755757  358357 cri.go:89] found id: ""
	I1205 21:44:21.755794  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.755807  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:21.755821  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:21.755839  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:21.809049  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:21.809110  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:21.823336  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:21.823371  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:21.894389  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:21.894412  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:21.894428  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:21.980288  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:21.980336  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
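Every "describe nodes" attempt above fails with "The connection to the server localhost:8443 was refused", which is consistent with the kube-apiserver container never being found by crictl. A quick sanity check on the node, assuming curl is available there (not shown in the log), would be:

    # confirm no apiserver container exists (matches the "found id: \"\"" lines above)
    sudo crictl ps -a --quiet --name=kube-apiserver
    # probe the apiserver health endpoint on the port the error names
    curl -k https://localhost:8443/healthz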
	I1205 21:44:24.522528  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:24.535496  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:24.535587  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:24.570301  358357 cri.go:89] found id: ""
	I1205 21:44:24.570354  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.570369  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:24.570379  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:24.570452  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:24.606310  358357 cri.go:89] found id: ""
	I1205 21:44:24.606340  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.606351  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:24.606358  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:24.606427  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:24.644078  358357 cri.go:89] found id: ""
	I1205 21:44:24.644183  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.644198  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:24.644208  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:24.644293  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:24.679685  358357 cri.go:89] found id: ""
	I1205 21:44:24.679719  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.679729  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:24.679736  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:24.679817  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:24.717070  358357 cri.go:89] found id: ""
	I1205 21:44:24.717180  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.717216  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:24.717236  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:24.717309  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:24.757345  358357 cri.go:89] found id: ""
	I1205 21:44:24.757380  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.757393  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:24.757401  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:24.757480  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:24.790795  358357 cri.go:89] found id: ""
	I1205 21:44:24.790823  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.790835  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:24.790850  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:24.790911  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:24.827238  358357 cri.go:89] found id: ""
	I1205 21:44:24.827276  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.827290  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:24.827302  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:24.827318  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:24.876812  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:24.876861  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:24.916558  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:24.916604  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:24.990733  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:24.990764  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:24.990785  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:25.065792  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:25.065852  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:24.608796  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:27.107897  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:22.984109  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:24.984259  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:26.985689  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:25.899361  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:27.900251  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:29.900465  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
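The interleaved pod_ready lines show the other test clusters polling their metrics-server pods, which stay not Ready for the whole window. A sketch of the equivalent manual check, assuming a kubeconfig pointing at the affected cluster and using a pod name taken from the log:

    kubectl -n kube-system get pod metrics-server-6867b74b74-dggmv \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'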
	I1205 21:44:27.608859  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:27.622449  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:27.622516  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:27.655675  358357 cri.go:89] found id: ""
	I1205 21:44:27.655704  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.655713  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:27.655718  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:27.655785  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:27.689751  358357 cri.go:89] found id: ""
	I1205 21:44:27.689781  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.689789  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:27.689795  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:27.689870  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:27.726811  358357 cri.go:89] found id: ""
	I1205 21:44:27.726842  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.726856  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:27.726865  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:27.726930  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:27.759600  358357 cri.go:89] found id: ""
	I1205 21:44:27.759631  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.759653  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:27.759660  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:27.759716  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:27.791700  358357 cri.go:89] found id: ""
	I1205 21:44:27.791738  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.791751  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:27.791763  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:27.791828  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:27.827998  358357 cri.go:89] found id: ""
	I1205 21:44:27.828031  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.828039  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:27.828045  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:27.828102  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:27.861452  358357 cri.go:89] found id: ""
	I1205 21:44:27.861481  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.861490  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:27.861496  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:27.861560  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:27.896469  358357 cri.go:89] found id: ""
	I1205 21:44:27.896519  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.896532  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:27.896545  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:27.896560  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:27.935274  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:27.935312  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:27.986078  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:27.986116  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:28.000432  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:28.000463  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:28.074500  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:28.074530  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:28.074549  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:29.107971  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:31.108444  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:29.483791  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:31.484249  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:32.399397  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:34.400078  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:30.660117  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:30.672827  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:30.672907  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:30.711952  358357 cri.go:89] found id: ""
	I1205 21:44:30.711983  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.711993  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:30.711999  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:30.712051  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:30.747513  358357 cri.go:89] found id: ""
	I1205 21:44:30.747548  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.747558  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:30.747567  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:30.747627  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:30.782830  358357 cri.go:89] found id: ""
	I1205 21:44:30.782867  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.782878  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:30.782887  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:30.782980  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:30.820054  358357 cri.go:89] found id: ""
	I1205 21:44:30.820098  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.820111  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:30.820123  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:30.820198  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:30.857325  358357 cri.go:89] found id: ""
	I1205 21:44:30.857362  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.857373  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:30.857382  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:30.857453  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:30.893105  358357 cri.go:89] found id: ""
	I1205 21:44:30.893227  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.893267  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:30.893281  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:30.893356  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:30.932764  358357 cri.go:89] found id: ""
	I1205 21:44:30.932802  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.932815  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:30.932823  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:30.932885  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:30.968962  358357 cri.go:89] found id: ""
	I1205 21:44:30.968999  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.969011  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:30.969023  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:30.969037  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:31.022152  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:31.022198  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:31.035418  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:31.035453  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:31.100989  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:31.101017  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:31.101030  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:31.182034  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:31.182079  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:33.725770  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:33.740956  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:33.741040  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:33.779158  358357 cri.go:89] found id: ""
	I1205 21:44:33.779198  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.779210  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:33.779218  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:33.779280  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:33.814600  358357 cri.go:89] found id: ""
	I1205 21:44:33.814628  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.814641  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:33.814649  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:33.814710  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:33.850220  358357 cri.go:89] found id: ""
	I1205 21:44:33.850255  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.850267  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:33.850276  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:33.850334  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:33.883737  358357 cri.go:89] found id: ""
	I1205 21:44:33.883765  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.883774  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:33.883781  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:33.883837  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:33.915007  358357 cri.go:89] found id: ""
	I1205 21:44:33.915046  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.915059  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:33.915068  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:33.915140  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:33.949038  358357 cri.go:89] found id: ""
	I1205 21:44:33.949077  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.949093  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:33.949102  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:33.949172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:33.982396  358357 cri.go:89] found id: ""
	I1205 21:44:33.982425  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.982437  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:33.982444  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:33.982521  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:34.020834  358357 cri.go:89] found id: ""
	I1205 21:44:34.020870  358357 logs.go:282] 0 containers: []
	W1205 21:44:34.020882  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:34.020894  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:34.020911  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:34.103184  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:34.103238  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:34.147047  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:34.147091  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:34.196893  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:34.196942  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:34.211694  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:34.211730  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:34.282543  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:33.607930  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:36.108359  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:33.484472  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:35.484512  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:36.400821  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:38.899618  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:36.783278  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:36.798192  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:36.798266  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:36.832685  358357 cri.go:89] found id: ""
	I1205 21:44:36.832723  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.832736  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:36.832743  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:36.832814  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:36.868040  358357 cri.go:89] found id: ""
	I1205 21:44:36.868074  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.868085  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:36.868092  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:36.868156  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:36.901145  358357 cri.go:89] found id: ""
	I1205 21:44:36.901177  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.901186  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:36.901192  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:36.901248  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:36.935061  358357 cri.go:89] found id: ""
	I1205 21:44:36.935097  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.935107  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:36.935114  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:36.935183  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:36.984729  358357 cri.go:89] found id: ""
	I1205 21:44:36.984761  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.984773  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:36.984782  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:36.984854  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:37.024644  358357 cri.go:89] found id: ""
	I1205 21:44:37.024684  358357 logs.go:282] 0 containers: []
	W1205 21:44:37.024696  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:37.024706  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:37.024781  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:37.074238  358357 cri.go:89] found id: ""
	I1205 21:44:37.074275  358357 logs.go:282] 0 containers: []
	W1205 21:44:37.074287  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:37.074295  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:37.074356  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:37.142410  358357 cri.go:89] found id: ""
	I1205 21:44:37.142444  358357 logs.go:282] 0 containers: []
	W1205 21:44:37.142457  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:37.142469  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:37.142488  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:37.192977  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:37.193018  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:37.206357  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:37.206393  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:37.272336  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:37.272372  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:37.272390  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:37.350655  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:37.350718  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:39.897421  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:39.911734  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:39.911806  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:39.950380  358357 cri.go:89] found id: ""
	I1205 21:44:39.950418  358357 logs.go:282] 0 containers: []
	W1205 21:44:39.950432  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:39.950441  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:39.950511  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:39.987259  358357 cri.go:89] found id: ""
	I1205 21:44:39.987292  358357 logs.go:282] 0 containers: []
	W1205 21:44:39.987302  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:39.987308  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:39.987363  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:40.021052  358357 cri.go:89] found id: ""
	I1205 21:44:40.021081  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.021090  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:40.021096  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:40.021167  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:40.057837  358357 cri.go:89] found id: ""
	I1205 21:44:40.057878  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.057919  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:40.057930  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:40.058004  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:40.094797  358357 cri.go:89] found id: ""
	I1205 21:44:40.094837  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.094853  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:40.094863  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:40.094932  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:40.130356  358357 cri.go:89] found id: ""
	I1205 21:44:40.130389  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.130398  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:40.130412  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:40.130467  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:40.164352  358357 cri.go:89] found id: ""
	I1205 21:44:40.164379  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.164389  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:40.164394  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:40.164452  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:40.197337  358357 cri.go:89] found id: ""
	I1205 21:44:40.197379  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.197397  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:40.197408  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:40.197422  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:40.210014  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:40.210051  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:40.280666  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:40.280691  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:40.280706  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:40.356849  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:40.356896  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:40.395202  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:40.395237  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:38.108650  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:40.607598  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:37.983908  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:39.986080  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:42.484571  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:40.900460  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:43.400889  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:42.950686  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:42.964078  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:42.964156  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:42.999252  358357 cri.go:89] found id: ""
	I1205 21:44:42.999286  358357 logs.go:282] 0 containers: []
	W1205 21:44:42.999299  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:42.999307  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:42.999374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:43.035393  358357 cri.go:89] found id: ""
	I1205 21:44:43.035430  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.035444  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:43.035451  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:43.035505  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:43.070649  358357 cri.go:89] found id: ""
	I1205 21:44:43.070681  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.070693  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:43.070703  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:43.070776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:43.103054  358357 cri.go:89] found id: ""
	I1205 21:44:43.103089  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.103101  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:43.103110  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:43.103175  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:43.138607  358357 cri.go:89] found id: ""
	I1205 21:44:43.138640  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.138653  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:43.138661  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:43.138733  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:43.172188  358357 cri.go:89] found id: ""
	I1205 21:44:43.172220  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.172234  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:43.172241  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:43.172313  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:43.204838  358357 cri.go:89] found id: ""
	I1205 21:44:43.204872  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.204882  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:43.204891  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:43.204960  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:43.239985  358357 cri.go:89] found id: ""
	I1205 21:44:43.240011  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.240020  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:43.240031  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:43.240052  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:43.291033  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:43.291088  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:43.305100  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:43.305152  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:43.378988  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:43.379020  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:43.379054  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:43.466548  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:43.466602  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:42.607901  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:44.608143  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:47.108131  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:44.984806  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:47.484110  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:45.899359  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:47.901854  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:46.007785  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:46.021496  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:46.021592  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:46.059259  358357 cri.go:89] found id: ""
	I1205 21:44:46.059296  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.059313  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:46.059321  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:46.059378  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:46.095304  358357 cri.go:89] found id: ""
	I1205 21:44:46.095336  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.095345  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:46.095351  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:46.095417  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:46.136792  358357 cri.go:89] found id: ""
	I1205 21:44:46.136822  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.136831  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:46.136837  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:46.136891  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:46.169696  358357 cri.go:89] found id: ""
	I1205 21:44:46.169726  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.169735  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:46.169742  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:46.169810  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:46.205481  358357 cri.go:89] found id: ""
	I1205 21:44:46.205513  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.205524  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:46.205531  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:46.205586  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:46.241112  358357 cri.go:89] found id: ""
	I1205 21:44:46.241157  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.241166  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:46.241173  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:46.241233  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:46.277129  358357 cri.go:89] found id: ""
	I1205 21:44:46.277159  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.277168  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:46.277174  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:46.277236  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:46.311196  358357 cri.go:89] found id: ""
	I1205 21:44:46.311238  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.311250  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:46.311275  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:46.311302  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:46.362581  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:46.362621  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:46.375887  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:46.375924  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:46.444563  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:46.444588  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:46.444605  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:46.525811  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:46.525857  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:49.065883  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:49.079482  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:49.079586  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:49.113676  358357 cri.go:89] found id: ""
	I1205 21:44:49.113706  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.113716  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:49.113722  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:49.113792  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:49.147653  358357 cri.go:89] found id: ""
	I1205 21:44:49.147686  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.147696  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:49.147702  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:49.147766  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:49.180934  358357 cri.go:89] found id: ""
	I1205 21:44:49.180981  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.180996  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:49.181004  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:49.181064  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:49.214837  358357 cri.go:89] found id: ""
	I1205 21:44:49.214874  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.214883  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:49.214891  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:49.214960  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:49.249332  358357 cri.go:89] found id: ""
	I1205 21:44:49.249369  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.249380  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:49.249387  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:49.249451  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:49.284072  358357 cri.go:89] found id: ""
	I1205 21:44:49.284101  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.284109  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:49.284116  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:49.284169  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:49.323559  358357 cri.go:89] found id: ""
	I1205 21:44:49.323597  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.323607  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:49.323614  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:49.323675  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:49.361219  358357 cri.go:89] found id: ""
	I1205 21:44:49.361253  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.361263  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:49.361275  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:49.361291  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:49.413099  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:49.413141  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:49.426610  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:49.426648  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:49.498740  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:49.498765  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:49.498794  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:49.578451  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:49.578495  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:49.608461  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:52.108005  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:49.484743  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:51.984842  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:50.401244  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:52.899546  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:54.899788  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:52.117874  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:52.131510  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:52.131601  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:52.169491  358357 cri.go:89] found id: ""
	I1205 21:44:52.169522  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.169535  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:52.169542  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:52.169617  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:52.202511  358357 cri.go:89] found id: ""
	I1205 21:44:52.202540  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.202556  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:52.202562  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:52.202630  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:52.239649  358357 cri.go:89] found id: ""
	I1205 21:44:52.239687  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.239699  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:52.239707  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:52.239771  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:52.274330  358357 cri.go:89] found id: ""
	I1205 21:44:52.274368  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.274380  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:52.274388  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:52.274452  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:52.310165  358357 cri.go:89] found id: ""
	I1205 21:44:52.310195  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.310207  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:52.310214  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:52.310284  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:52.344246  358357 cri.go:89] found id: ""
	I1205 21:44:52.344278  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.344293  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:52.344302  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:52.344375  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:52.379475  358357 cri.go:89] found id: ""
	I1205 21:44:52.379508  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.379521  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:52.379529  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:52.379606  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:52.419952  358357 cri.go:89] found id: ""
	I1205 21:44:52.419981  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.419990  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:52.420002  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:52.420014  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:52.471608  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:52.471659  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:52.486003  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:52.486036  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:52.560751  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:52.560786  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:52.560804  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:52.641284  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:52.641340  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:55.183102  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:55.197406  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:55.197502  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:55.231335  358357 cri.go:89] found id: ""
	I1205 21:44:55.231365  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.231373  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:55.231381  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:55.231440  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:55.267877  358357 cri.go:89] found id: ""
	I1205 21:44:55.267907  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.267916  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:55.267923  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:55.267978  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:55.302400  358357 cri.go:89] found id: ""
	I1205 21:44:55.302428  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.302437  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:55.302443  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:55.302496  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:55.337878  358357 cri.go:89] found id: ""
	I1205 21:44:55.337932  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.337946  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:55.337954  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:55.338008  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:55.371877  358357 cri.go:89] found id: ""
	I1205 21:44:55.371920  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.371931  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:55.371941  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:55.372020  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:55.406914  358357 cri.go:89] found id: ""
	I1205 21:44:55.406947  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.406961  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:55.406970  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:55.407043  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:55.439910  358357 cri.go:89] found id: ""
	I1205 21:44:55.439940  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.439949  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:55.439955  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:55.440011  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:55.476886  358357 cri.go:89] found id: ""
	I1205 21:44:55.476916  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.476925  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:55.476936  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:55.476949  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:55.531376  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:55.531422  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:55.545011  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:55.545050  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:44:54.108283  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:56.609653  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:53.985156  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:56.484908  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:57.400823  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:59.904973  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	W1205 21:44:55.620082  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:55.620122  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:55.620139  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:55.708465  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:55.708512  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:58.256289  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:58.269484  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:58.269560  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:58.303846  358357 cri.go:89] found id: ""
	I1205 21:44:58.303884  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.303897  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:58.303906  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:58.303978  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:58.343160  358357 cri.go:89] found id: ""
	I1205 21:44:58.343190  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.343199  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:58.343205  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:58.343269  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:58.379207  358357 cri.go:89] found id: ""
	I1205 21:44:58.379240  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.379252  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:58.379261  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:58.379323  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:58.415939  358357 cri.go:89] found id: ""
	I1205 21:44:58.415971  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.415981  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:58.415988  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:58.416046  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:58.450799  358357 cri.go:89] found id: ""
	I1205 21:44:58.450837  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.450848  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:58.450857  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:58.450927  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:58.487557  358357 cri.go:89] found id: ""
	I1205 21:44:58.487594  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.487602  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:58.487608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:58.487659  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:58.523932  358357 cri.go:89] found id: ""
	I1205 21:44:58.523960  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.523969  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:58.523976  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:58.524041  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:58.559140  358357 cri.go:89] found id: ""
	I1205 21:44:58.559169  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.559179  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:58.559193  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:58.559209  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:58.643471  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:58.643520  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:58.683077  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:58.683118  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:58.736396  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:58.736441  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:58.751080  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:58.751115  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:58.824208  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:59.108134  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:01.608008  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:58.984778  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:01.486140  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:02.400031  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:04.400426  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:01.324977  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:01.338088  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:01.338169  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:01.375859  358357 cri.go:89] found id: ""
	I1205 21:45:01.375913  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.375927  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:01.375936  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:01.376012  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:01.411327  358357 cri.go:89] found id: ""
	I1205 21:45:01.411367  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.411377  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:01.411384  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:01.411441  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:01.446560  358357 cri.go:89] found id: ""
	I1205 21:45:01.446599  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.446612  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:01.446620  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:01.446687  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:01.480650  358357 cri.go:89] found id: ""
	I1205 21:45:01.480688  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.480702  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:01.480711  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:01.480788  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:01.515546  358357 cri.go:89] found id: ""
	I1205 21:45:01.515596  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.515609  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:01.515615  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:01.515680  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:01.550395  358357 cri.go:89] found id: ""
	I1205 21:45:01.550435  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.550449  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:01.550457  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:01.550619  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:01.588327  358357 cri.go:89] found id: ""
	I1205 21:45:01.588362  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.588375  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:01.588385  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:01.588456  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:01.622881  358357 cri.go:89] found id: ""
	I1205 21:45:01.622922  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.622934  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:01.622948  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:01.622965  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:01.673702  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:01.673752  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:01.689462  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:01.689504  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:01.758509  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:01.758536  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:01.758550  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:01.839238  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:01.839294  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:04.380325  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:04.393102  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:04.393192  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:04.428295  358357 cri.go:89] found id: ""
	I1205 21:45:04.428327  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.428339  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:04.428348  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:04.428455  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:04.463190  358357 cri.go:89] found id: ""
	I1205 21:45:04.463226  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.463238  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:04.463246  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:04.463316  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:04.496966  358357 cri.go:89] found id: ""
	I1205 21:45:04.497010  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.497022  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:04.497030  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:04.497097  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:04.531907  358357 cri.go:89] found id: ""
	I1205 21:45:04.531938  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.531950  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:04.531958  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:04.532031  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:04.565760  358357 cri.go:89] found id: ""
	I1205 21:45:04.565793  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.565806  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:04.565815  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:04.565885  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:04.599720  358357 cri.go:89] found id: ""
	I1205 21:45:04.599756  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.599768  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:04.599774  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:04.599829  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:04.635208  358357 cri.go:89] found id: ""
	I1205 21:45:04.635241  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.635250  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:04.635257  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:04.635320  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:04.670121  358357 cri.go:89] found id: ""
	I1205 21:45:04.670153  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.670162  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:04.670171  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:04.670183  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:04.708596  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:04.708641  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:04.765866  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:04.765919  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:04.780740  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:04.780772  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:04.856357  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:04.856386  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:04.856406  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:03.608315  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:06.107838  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:03.983888  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:05.990166  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:06.900029  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:08.900926  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:07.437028  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:07.450097  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:07.450168  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:07.485877  358357 cri.go:89] found id: ""
	I1205 21:45:07.485921  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.485934  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:07.485943  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:07.486007  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:07.520629  358357 cri.go:89] found id: ""
	I1205 21:45:07.520658  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.520666  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:07.520673  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:07.520732  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:07.555445  358357 cri.go:89] found id: ""
	I1205 21:45:07.555476  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.555487  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:07.555493  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:07.555560  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:07.594479  358357 cri.go:89] found id: ""
	I1205 21:45:07.594513  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.594526  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:07.594533  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:07.594594  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:07.629467  358357 cri.go:89] found id: ""
	I1205 21:45:07.629498  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.629509  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:07.629516  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:07.629572  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:07.666166  358357 cri.go:89] found id: ""
	I1205 21:45:07.666204  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.666218  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:07.666227  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:07.666303  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:07.700440  358357 cri.go:89] found id: ""
	I1205 21:45:07.700472  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.700481  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:07.700490  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:07.700557  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:07.735094  358357 cri.go:89] found id: ""
	I1205 21:45:07.735130  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.735152  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:07.735166  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:07.735184  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:07.788339  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:07.788386  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:07.802847  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:07.802879  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:07.873731  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:07.873755  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:07.873771  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:07.953369  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:07.953411  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:10.492613  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:10.506259  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:10.506374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:10.540075  358357 cri.go:89] found id: ""
	I1205 21:45:10.540111  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.540120  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:10.540127  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:10.540216  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:08.108464  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:10.611075  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:08.483571  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:10.485086  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:11.399948  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:13.400364  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:10.577943  358357 cri.go:89] found id: ""
	I1205 21:45:10.577978  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.577991  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:10.577998  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:10.578073  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:10.614217  358357 cri.go:89] found id: ""
	I1205 21:45:10.614255  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.614268  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:10.614276  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:10.614346  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:10.649669  358357 cri.go:89] found id: ""
	I1205 21:45:10.649739  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.649751  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:10.649760  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:10.649830  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:10.687171  358357 cri.go:89] found id: ""
	I1205 21:45:10.687202  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.687211  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:10.687217  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:10.687307  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:10.722815  358357 cri.go:89] found id: ""
	I1205 21:45:10.722848  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.722858  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:10.722865  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:10.722934  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:10.759711  358357 cri.go:89] found id: ""
	I1205 21:45:10.759753  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.759767  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:10.759777  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:10.759849  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:10.797955  358357 cri.go:89] found id: ""
	I1205 21:45:10.797991  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.798004  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:10.798017  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:10.798034  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:10.851920  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:10.851971  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:10.867691  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:10.867728  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:10.953866  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:10.953891  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:10.953928  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:11.033945  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:11.033990  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:13.574051  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:13.587371  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:13.587454  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:13.623492  358357 cri.go:89] found id: ""
	I1205 21:45:13.623524  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.623540  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:13.623546  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:13.623603  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:13.659547  358357 cri.go:89] found id: ""
	I1205 21:45:13.659588  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.659602  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:13.659610  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:13.659671  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:13.694113  358357 cri.go:89] found id: ""
	I1205 21:45:13.694153  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.694166  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:13.694173  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:13.694233  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:13.729551  358357 cri.go:89] found id: ""
	I1205 21:45:13.729591  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.729604  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:13.729613  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:13.729684  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:13.763006  358357 cri.go:89] found id: ""
	I1205 21:45:13.763049  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.763062  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:13.763071  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:13.763134  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:13.802231  358357 cri.go:89] found id: ""
	I1205 21:45:13.802277  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.802292  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:13.802302  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:13.802384  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:13.840193  358357 cri.go:89] found id: ""
	I1205 21:45:13.840225  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.840240  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:13.840249  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:13.840335  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:13.872625  358357 cri.go:89] found id: ""
	I1205 21:45:13.872653  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.872663  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:13.872673  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:13.872687  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:13.922983  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:13.923028  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:13.936484  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:13.936517  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:14.008295  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:14.008319  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:14.008334  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:14.095036  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:14.095091  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:13.110174  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:15.608405  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:12.986058  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:15.483570  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:17.484738  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:15.899141  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:17.899862  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:19.900993  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:16.637164  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:16.653070  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:16.653153  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:16.687386  358357 cri.go:89] found id: ""
	I1205 21:45:16.687441  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.687456  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:16.687466  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:16.687545  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:16.722204  358357 cri.go:89] found id: ""
	I1205 21:45:16.722235  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.722244  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:16.722250  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:16.722323  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:16.757594  358357 cri.go:89] found id: ""
	I1205 21:45:16.757622  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.757631  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:16.757637  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:16.757691  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:16.790401  358357 cri.go:89] found id: ""
	I1205 21:45:16.790433  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.790442  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:16.790449  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:16.790502  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:16.827569  358357 cri.go:89] found id: ""
	I1205 21:45:16.827602  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.827615  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:16.827624  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:16.827701  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:16.860920  358357 cri.go:89] found id: ""
	I1205 21:45:16.860949  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.860965  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:16.860974  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:16.861038  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:16.895008  358357 cri.go:89] found id: ""
	I1205 21:45:16.895051  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.895063  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:16.895072  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:16.895151  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:16.931916  358357 cri.go:89] found id: ""
	I1205 21:45:16.931951  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.931963  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:16.931975  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:16.931987  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:17.016108  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:17.016156  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:17.055353  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:17.055390  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:17.105859  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:17.105921  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:17.121357  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:17.121394  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:17.192584  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:19.693409  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:19.706431  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:19.706498  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:19.741212  358357 cri.go:89] found id: ""
	I1205 21:45:19.741249  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.741258  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:19.741268  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:19.741335  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:19.775906  358357 cri.go:89] found id: ""
	I1205 21:45:19.775945  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.775954  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:19.775960  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:19.776031  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:19.810789  358357 cri.go:89] found id: ""
	I1205 21:45:19.810822  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.810831  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:19.810839  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:19.810897  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:19.847669  358357 cri.go:89] found id: ""
	I1205 21:45:19.847701  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.847710  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:19.847717  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:19.847776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:19.881700  358357 cri.go:89] found id: ""
	I1205 21:45:19.881739  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.881752  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:19.881761  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:19.881838  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:19.919085  358357 cri.go:89] found id: ""
	I1205 21:45:19.919125  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.919140  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:19.919148  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:19.919226  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:19.955024  358357 cri.go:89] found id: ""
	I1205 21:45:19.955064  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.955078  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:19.955086  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:19.955153  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:19.991482  358357 cri.go:89] found id: ""
	I1205 21:45:19.991511  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.991519  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:19.991530  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:19.991543  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:20.041980  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:20.042030  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:20.055580  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:20.055612  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:20.127194  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:20.127225  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:20.127242  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:20.207750  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:20.207797  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:18.108143  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:20.108435  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:22.109088  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:19.985203  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:21.986674  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:22.399189  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:24.400311  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:22.749233  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:22.763720  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:22.763796  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:22.798779  358357 cri.go:89] found id: ""
	I1205 21:45:22.798810  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.798820  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:22.798826  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:22.798906  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:22.837894  358357 cri.go:89] found id: ""
	I1205 21:45:22.837949  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.837964  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:22.837972  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:22.838026  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:22.872671  358357 cri.go:89] found id: ""
	I1205 21:45:22.872701  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.872713  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:22.872720  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:22.872785  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:22.906877  358357 cri.go:89] found id: ""
	I1205 21:45:22.906919  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.906929  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:22.906936  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:22.906988  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:22.941445  358357 cri.go:89] found id: ""
	I1205 21:45:22.941475  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.941486  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:22.941494  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:22.941565  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:22.976633  358357 cri.go:89] found id: ""
	I1205 21:45:22.976671  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.976685  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:22.976694  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:22.976773  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:23.017034  358357 cri.go:89] found id: ""
	I1205 21:45:23.017077  358357 logs.go:282] 0 containers: []
	W1205 21:45:23.017090  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:23.017096  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:23.017153  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:23.065098  358357 cri.go:89] found id: ""
	I1205 21:45:23.065136  358357 logs.go:282] 0 containers: []
	W1205 21:45:23.065149  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:23.065164  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:23.065180  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:23.145053  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:23.145104  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:23.159522  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:23.159557  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:23.228841  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:23.228865  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:23.228885  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:23.313351  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:23.313397  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:24.110151  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:26.607420  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:23.992037  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:26.484076  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:26.400904  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:28.899210  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:25.852034  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:25.865843  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:25.865944  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:25.899186  358357 cri.go:89] found id: ""
	I1205 21:45:25.899212  358357 logs.go:282] 0 containers: []
	W1205 21:45:25.899222  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:25.899231  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:25.899298  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:25.938242  358357 cri.go:89] found id: ""
	I1205 21:45:25.938274  358357 logs.go:282] 0 containers: []
	W1205 21:45:25.938286  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:25.938299  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:25.938371  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:25.972322  358357 cri.go:89] found id: ""
	I1205 21:45:25.972355  358357 logs.go:282] 0 containers: []
	W1205 21:45:25.972368  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:25.972376  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:25.972446  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:26.010638  358357 cri.go:89] found id: ""
	I1205 21:45:26.010667  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.010678  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:26.010686  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:26.010754  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:26.045415  358357 cri.go:89] found id: ""
	I1205 21:45:26.045450  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.045459  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:26.045466  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:26.045548  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:26.084635  358357 cri.go:89] found id: ""
	I1205 21:45:26.084673  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.084687  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:26.084696  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:26.084767  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:26.117417  358357 cri.go:89] found id: ""
	I1205 21:45:26.117455  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.117467  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:26.117475  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:26.117539  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:26.151857  358357 cri.go:89] found id: ""
	I1205 21:45:26.151893  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.151905  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:26.151918  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:26.151936  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:26.238876  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:26.238926  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:26.280970  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:26.281006  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:26.336027  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:26.336083  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:26.350619  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:26.350654  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:26.418836  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:28.919046  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:28.933916  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:28.934002  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:28.971698  358357 cri.go:89] found id: ""
	I1205 21:45:28.971728  358357 logs.go:282] 0 containers: []
	W1205 21:45:28.971737  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:28.971744  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:28.971807  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:29.007385  358357 cri.go:89] found id: ""
	I1205 21:45:29.007423  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.007435  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:29.007443  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:29.007509  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:29.041087  358357 cri.go:89] found id: ""
	I1205 21:45:29.041130  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.041143  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:29.041151  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:29.041222  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:29.076926  358357 cri.go:89] found id: ""
	I1205 21:45:29.076965  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.076977  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:29.076986  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:29.077064  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:29.116376  358357 cri.go:89] found id: ""
	I1205 21:45:29.116419  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.116433  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:29.116443  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:29.116523  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:29.152495  358357 cri.go:89] found id: ""
	I1205 21:45:29.152530  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.152543  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:29.152552  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:29.152639  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:29.187647  358357 cri.go:89] found id: ""
	I1205 21:45:29.187681  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.187695  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:29.187704  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:29.187775  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:29.220410  358357 cri.go:89] found id: ""
	I1205 21:45:29.220452  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.220469  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:29.220484  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:29.220513  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:29.287156  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:29.287184  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:29.287200  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:29.365592  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:29.365644  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:29.407876  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:29.407917  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:29.462241  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:29.462294  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:28.607611  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:30.608683  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:28.484925  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:30.485979  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:30.899449  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:32.900189  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:34.900501  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:31.976691  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:31.991087  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:31.991172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:32.025743  358357 cri.go:89] found id: ""
	I1205 21:45:32.025781  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.025793  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:32.025801  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:32.025870  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:32.061790  358357 cri.go:89] found id: ""
	I1205 21:45:32.061828  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.061838  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:32.061844  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:32.061929  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:32.095437  358357 cri.go:89] found id: ""
	I1205 21:45:32.095474  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.095486  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:32.095493  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:32.095553  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:32.132203  358357 cri.go:89] found id: ""
	I1205 21:45:32.132242  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.132255  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:32.132264  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:32.132325  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:32.168529  358357 cri.go:89] found id: ""
	I1205 21:45:32.168566  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.168582  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:32.168590  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:32.168661  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:32.204816  358357 cri.go:89] found id: ""
	I1205 21:45:32.204851  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.204860  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:32.204885  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:32.204949  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:32.241661  358357 cri.go:89] found id: ""
	I1205 21:45:32.241696  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.241706  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:32.241712  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:32.241768  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:32.275458  358357 cri.go:89] found id: ""
	I1205 21:45:32.275491  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.275500  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:32.275511  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:32.275524  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:32.329044  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:32.329098  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:32.343399  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:32.343432  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:32.420102  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:32.420135  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:32.420152  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:32.503061  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:32.503109  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:35.042457  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:35.056486  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:35.056564  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:35.091571  358357 cri.go:89] found id: ""
	I1205 21:45:35.091603  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.091613  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:35.091619  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:35.091686  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:35.130172  358357 cri.go:89] found id: ""
	I1205 21:45:35.130213  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.130225  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:35.130233  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:35.130303  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:35.165723  358357 cri.go:89] found id: ""
	I1205 21:45:35.165754  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.165763  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:35.165770  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:35.165836  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:35.203599  358357 cri.go:89] found id: ""
	I1205 21:45:35.203632  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.203646  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:35.203658  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:35.203721  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:35.237881  358357 cri.go:89] found id: ""
	I1205 21:45:35.237926  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.237938  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:35.237946  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:35.238015  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:35.276506  358357 cri.go:89] found id: ""
	I1205 21:45:35.276543  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.276555  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:35.276563  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:35.276632  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:35.309600  358357 cri.go:89] found id: ""
	I1205 21:45:35.309632  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.309644  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:35.309652  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:35.309723  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:35.343062  358357 cri.go:89] found id: ""
	I1205 21:45:35.343097  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.343110  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:35.343124  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:35.343146  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:35.398686  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:35.398724  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:35.412910  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:35.412945  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:35.479542  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:35.479570  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:35.479587  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:35.556709  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:35.556754  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:33.107324  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:35.108931  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:32.988514  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:35.485301  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:37.399616  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:39.400552  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:38.095347  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:38.110086  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:38.110161  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:38.149114  358357 cri.go:89] found id: ""
	I1205 21:45:38.149149  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.149162  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:38.149172  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:38.149250  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:38.184110  358357 cri.go:89] found id: ""
	I1205 21:45:38.184141  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.184151  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:38.184157  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:38.184213  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:38.219569  358357 cri.go:89] found id: ""
	I1205 21:45:38.219608  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.219620  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:38.219628  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:38.219703  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:38.253096  358357 cri.go:89] found id: ""
	I1205 21:45:38.253133  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.253158  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:38.253167  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:38.253259  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:38.291558  358357 cri.go:89] found id: ""
	I1205 21:45:38.291591  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.291601  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:38.291608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:38.291689  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:38.328236  358357 cri.go:89] found id: ""
	I1205 21:45:38.328269  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.328281  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:38.328288  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:38.328353  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:38.363263  358357 cri.go:89] found id: ""
	I1205 21:45:38.363295  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.363305  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:38.363311  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:38.363371  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:38.396544  358357 cri.go:89] found id: ""
	I1205 21:45:38.396577  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.396587  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:38.396598  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:38.396611  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:38.438187  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:38.438226  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:38.492047  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:38.492086  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:38.505080  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:38.505123  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:38.574293  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:38.574320  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:38.574343  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:37.608407  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:39.609266  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:42.107313  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:37.984499  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:40.484539  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:41.898538  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:43.900097  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:41.155780  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:41.170875  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:41.170959  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:41.206755  358357 cri.go:89] found id: ""
	I1205 21:45:41.206793  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.206807  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:41.206824  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:41.206882  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:41.251021  358357 cri.go:89] found id: ""
	I1205 21:45:41.251060  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.251074  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:41.251082  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:41.251144  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:41.286805  358357 cri.go:89] found id: ""
	I1205 21:45:41.286836  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.286845  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:41.286852  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:41.286910  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:41.319489  358357 cri.go:89] found id: ""
	I1205 21:45:41.319526  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.319540  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:41.319549  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:41.319620  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:41.352769  358357 cri.go:89] found id: ""
	I1205 21:45:41.352807  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.352817  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:41.352823  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:41.352883  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:41.386830  358357 cri.go:89] found id: ""
	I1205 21:45:41.386869  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.386881  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:41.386889  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:41.386961  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:41.424824  358357 cri.go:89] found id: ""
	I1205 21:45:41.424866  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.424882  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:41.424892  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:41.424957  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:41.460273  358357 cri.go:89] found id: ""
	I1205 21:45:41.460307  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.460316  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:41.460327  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:41.460341  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:41.539890  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:41.539951  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:41.579521  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:41.579570  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:41.630867  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:41.630917  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:41.644854  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:41.644892  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:41.719202  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:44.219965  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:44.234714  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:44.234824  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:44.269879  358357 cri.go:89] found id: ""
	I1205 21:45:44.269931  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.269945  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:44.269954  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:44.270023  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:44.302994  358357 cri.go:89] found id: ""
	I1205 21:45:44.303034  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.303047  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:44.303056  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:44.303126  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:44.337575  358357 cri.go:89] found id: ""
	I1205 21:45:44.337604  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.337613  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:44.337620  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:44.337674  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:44.374554  358357 cri.go:89] found id: ""
	I1205 21:45:44.374591  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.374600  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:44.374605  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:44.374671  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:44.409965  358357 cri.go:89] found id: ""
	I1205 21:45:44.410001  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.410013  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:44.410021  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:44.410090  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:44.446583  358357 cri.go:89] found id: ""
	I1205 21:45:44.446620  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.446633  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:44.446641  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:44.446705  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:44.481187  358357 cri.go:89] found id: ""
	I1205 21:45:44.481223  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.481239  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:44.481248  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:44.481315  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:44.515729  358357 cri.go:89] found id: ""
	I1205 21:45:44.515761  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.515770  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:44.515781  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:44.515799  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:44.567266  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:44.567314  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:44.581186  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:44.581219  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:44.655377  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:44.655404  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:44.655420  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:44.741789  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:44.741835  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:44.108015  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:46.109878  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:42.987144  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:45.484635  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:45.900943  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:48.399795  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:47.283721  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:47.296771  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:47.296839  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:47.330892  358357 cri.go:89] found id: ""
	I1205 21:45:47.330927  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.330941  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:47.330949  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:47.331015  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:47.362771  358357 cri.go:89] found id: ""
	I1205 21:45:47.362805  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.362818  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:47.362826  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:47.362898  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:47.397052  358357 cri.go:89] found id: ""
	I1205 21:45:47.397082  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.397092  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:47.397100  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:47.397172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:47.430155  358357 cri.go:89] found id: ""
	I1205 21:45:47.430184  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.430193  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:47.430199  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:47.430255  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:47.465183  358357 cri.go:89] found id: ""
	I1205 21:45:47.465230  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.465244  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:47.465252  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:47.465327  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:47.505432  358357 cri.go:89] found id: ""
	I1205 21:45:47.505467  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.505479  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:47.505487  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:47.505583  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:47.538813  358357 cri.go:89] found id: ""
	I1205 21:45:47.538841  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.538851  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:47.538859  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:47.538913  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:47.577554  358357 cri.go:89] found id: ""
	I1205 21:45:47.577589  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.577598  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:47.577610  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:47.577623  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:47.633652  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:47.633700  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:47.648242  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:47.648291  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:47.723335  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:47.723369  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:47.723387  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:47.806404  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:47.806454  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:50.348134  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:50.361273  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:50.361367  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:50.393942  358357 cri.go:89] found id: ""
	I1205 21:45:50.393972  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.393980  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:50.393986  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:50.394054  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:50.430835  358357 cri.go:89] found id: ""
	I1205 21:45:50.430873  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.430884  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:50.430892  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:50.430963  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:50.465245  358357 cri.go:89] found id: ""
	I1205 21:45:50.465303  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.465316  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:50.465326  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:50.465397  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:50.498370  358357 cri.go:89] found id: ""
	I1205 21:45:50.498396  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.498406  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:50.498414  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:50.498480  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:50.530194  358357 cri.go:89] found id: ""
	I1205 21:45:50.530233  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.530247  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:50.530262  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:50.530383  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:48.607163  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:50.608353  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:47.984724  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:50.483783  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:52.484838  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:50.400860  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:52.898957  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:54.399893  357912 pod_ready.go:82] duration metric: took 4m0.00693537s for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	E1205 21:45:54.399922  357912 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 21:45:54.399931  357912 pod_ready.go:39] duration metric: took 4m6.388856223s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:45:54.399958  357912 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:45:54.399994  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:54.400045  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:54.436650  357912 cri.go:89] found id: "079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:54.436679  357912 cri.go:89] found id: ""
	I1205 21:45:54.436690  357912 logs.go:282] 1 containers: [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828]
	I1205 21:45:54.436751  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.440795  357912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:54.440866  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:54.475714  357912 cri.go:89] found id: "035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:54.475739  357912 cri.go:89] found id: ""
	I1205 21:45:54.475749  357912 logs.go:282] 1 containers: [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7]
	I1205 21:45:54.475879  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.480165  357912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:54.480255  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:54.516427  357912 cri.go:89] found id: "d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:54.516459  357912 cri.go:89] found id: ""
	I1205 21:45:54.516468  357912 logs.go:282] 1 containers: [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6]
	I1205 21:45:54.516529  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.520486  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:54.520548  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:54.555687  357912 cri.go:89] found id: "c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:54.555719  357912 cri.go:89] found id: ""
	I1205 21:45:54.555727  357912 logs.go:282] 1 containers: [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5]
	I1205 21:45:54.555789  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.559827  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:54.559916  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:54.596640  357912 cri.go:89] found id: "963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:54.596665  357912 cri.go:89] found id: ""
	I1205 21:45:54.596675  357912 logs.go:282] 1 containers: [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d]
	I1205 21:45:54.596753  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.601144  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:54.601229  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:54.639374  357912 cri.go:89] found id: "807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:54.639408  357912 cri.go:89] found id: ""
	I1205 21:45:54.639419  357912 logs.go:282] 1 containers: [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a]
	I1205 21:45:54.639495  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.643665  357912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:54.643754  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:54.678252  357912 cri.go:89] found id: ""
	I1205 21:45:54.678286  357912 logs.go:282] 0 containers: []
	W1205 21:45:54.678297  357912 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:54.678306  357912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 21:45:54.678373  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 21:45:54.711874  357912 cri.go:89] found id: "7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:54.711908  357912 cri.go:89] found id: "37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:54.711915  357912 cri.go:89] found id: ""
	I1205 21:45:54.711925  357912 logs.go:282] 2 containers: [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa]
	I1205 21:45:54.711994  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.716164  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.720244  357912 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:54.720274  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:55.258307  357912 logs.go:123] Gathering logs for container status ...
	I1205 21:45:55.258372  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:55.300132  357912 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:55.300198  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:55.315703  357912 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:55.315745  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:45:50.567181  358357 cri.go:89] found id: ""
	I1205 21:45:50.567216  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.567229  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:50.567237  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:50.567329  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:50.600345  358357 cri.go:89] found id: ""
	I1205 21:45:50.600376  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.600385  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:50.600392  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:50.600446  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:50.635072  358357 cri.go:89] found id: ""
	I1205 21:45:50.635108  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.635121  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:50.635133  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:50.635146  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:50.702977  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:50.703001  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:50.703020  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:50.785033  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:50.785077  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:50.825173  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:50.825214  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:50.876664  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:50.876723  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:53.391161  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:53.405635  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:53.405713  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:53.440319  358357 cri.go:89] found id: ""
	I1205 21:45:53.440358  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.440371  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:53.440380  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:53.440446  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:53.480169  358357 cri.go:89] found id: ""
	I1205 21:45:53.480195  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.480204  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:53.480210  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:53.480355  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:53.515202  358357 cri.go:89] found id: ""
	I1205 21:45:53.515233  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.515315  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:53.515332  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:53.515401  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:53.552351  358357 cri.go:89] found id: ""
	I1205 21:45:53.552388  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.552402  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:53.552411  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:53.552481  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:53.590669  358357 cri.go:89] found id: ""
	I1205 21:45:53.590705  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.590717  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:53.590726  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:53.590791  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:53.627977  358357 cri.go:89] found id: ""
	I1205 21:45:53.628015  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.628029  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:53.628037  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:53.628112  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:53.662711  358357 cri.go:89] found id: ""
	I1205 21:45:53.662745  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.662761  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:53.662769  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:53.662839  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:53.696925  358357 cri.go:89] found id: ""
	I1205 21:45:53.696965  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.696976  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:53.696988  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:53.697012  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:53.750924  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:53.750970  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:53.763965  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:53.763997  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:53.832335  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:53.832361  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:53.832377  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:53.915961  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:53.916011  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:53.107436  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:55.107826  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:57.108330  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:56.456367  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:56.469503  358357 kubeadm.go:597] duration metric: took 4m2.564660353s to restartPrimaryControlPlane
	W1205 21:45:56.469630  358357 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 21:45:56.469672  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:45:56.934079  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:45:56.948092  358357 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:45:56.958166  358357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:45:56.967591  358357 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:45:56.967613  358357 kubeadm.go:157] found existing configuration files:
	
	I1205 21:45:56.967660  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:45:56.977085  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:45:56.977152  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:45:56.987395  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:45:56.996675  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:45:56.996764  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:45:57.010323  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:45:57.020441  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:45:57.020514  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:45:57.032114  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:45:57.042012  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:45:57.042095  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:45:57.051763  358357 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:45:57.126716  358357 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 21:45:57.126840  358357 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:45:57.265491  358357 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:45:57.265694  358357 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:45:57.265856  358357 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 21:45:57.450377  358357 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:45:54.486224  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:56.984442  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:57.452240  358357 out.go:235]   - Generating certificates and keys ...
	I1205 21:45:57.452361  358357 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:45:57.452458  358357 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:45:57.452625  358357 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:45:57.452712  358357 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:45:57.452824  358357 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:45:57.452913  358357 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:45:57.453084  358357 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:45:57.453179  358357 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:45:57.453276  358357 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:45:57.453343  358357 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:45:57.453377  358357 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:45:57.453430  358357 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:45:57.872211  358357 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:45:58.085006  358357 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:45:58.165194  358357 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:45:58.323597  358357 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:45:58.338715  358357 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:45:58.340504  358357 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:45:58.340604  358357 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:45:58.479241  358357 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:45:55.429307  357912 logs.go:123] Gathering logs for etcd [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7] ...
	I1205 21:45:55.429346  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:55.476044  357912 logs.go:123] Gathering logs for kube-proxy [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d] ...
	I1205 21:45:55.476085  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:55.512956  357912 logs.go:123] Gathering logs for kube-controller-manager [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a] ...
	I1205 21:45:55.513004  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:55.570534  357912 logs.go:123] Gathering logs for storage-provisioner [37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa] ...
	I1205 21:45:55.570583  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:55.608099  357912 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:55.608141  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:55.677021  357912 logs.go:123] Gathering logs for kube-apiserver [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828] ...
	I1205 21:45:55.677069  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:55.727298  357912 logs.go:123] Gathering logs for coredns [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6] ...
	I1205 21:45:55.727347  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:55.764637  357912 logs.go:123] Gathering logs for kube-scheduler [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5] ...
	I1205 21:45:55.764675  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:55.803471  357912 logs.go:123] Gathering logs for storage-provisioner [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4] ...
	I1205 21:45:55.803513  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:58.347406  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:58.362574  357912 api_server.go:72] duration metric: took 4m18.075855986s to wait for apiserver process to appear ...
	I1205 21:45:58.362609  357912 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:45:58.362658  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:58.362724  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:58.407526  357912 cri.go:89] found id: "079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:58.407559  357912 cri.go:89] found id: ""
	I1205 21:45:58.407571  357912 logs.go:282] 1 containers: [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828]
	I1205 21:45:58.407642  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.412133  357912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:58.412221  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:58.454243  357912 cri.go:89] found id: "035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:58.454280  357912 cri.go:89] found id: ""
	I1205 21:45:58.454292  357912 logs.go:282] 1 containers: [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7]
	I1205 21:45:58.454381  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.458950  357912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:58.459038  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:58.502502  357912 cri.go:89] found id: "d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:58.502527  357912 cri.go:89] found id: ""
	I1205 21:45:58.502535  357912 logs.go:282] 1 containers: [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6]
	I1205 21:45:58.502595  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.506926  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:58.507012  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:58.548550  357912 cri.go:89] found id: "c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:58.548587  357912 cri.go:89] found id: ""
	I1205 21:45:58.548600  357912 logs.go:282] 1 containers: [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5]
	I1205 21:45:58.548670  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.553797  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:58.553886  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:58.595353  357912 cri.go:89] found id: "963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:58.595389  357912 cri.go:89] found id: ""
	I1205 21:45:58.595401  357912 logs.go:282] 1 containers: [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d]
	I1205 21:45:58.595471  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.599759  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:58.599856  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:58.645942  357912 cri.go:89] found id: "807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:58.645979  357912 cri.go:89] found id: ""
	I1205 21:45:58.645991  357912 logs.go:282] 1 containers: [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a]
	I1205 21:45:58.646059  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.650416  357912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:58.650502  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:58.688459  357912 cri.go:89] found id: ""
	I1205 21:45:58.688491  357912 logs.go:282] 0 containers: []
	W1205 21:45:58.688504  357912 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:58.688520  357912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 21:45:58.688593  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 21:45:58.723421  357912 cri.go:89] found id: "7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:58.723454  357912 cri.go:89] found id: "37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:58.723461  357912 cri.go:89] found id: ""
	I1205 21:45:58.723471  357912 logs.go:282] 2 containers: [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa]
	I1205 21:45:58.723539  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.728441  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.732583  357912 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:58.732610  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:45:58.843724  357912 logs.go:123] Gathering logs for kube-apiserver [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828] ...
	I1205 21:45:58.843765  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:58.887836  357912 logs.go:123] Gathering logs for etcd [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7] ...
	I1205 21:45:58.887879  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:58.932909  357912 logs.go:123] Gathering logs for storage-provisioner [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4] ...
	I1205 21:45:58.932951  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:58.967559  357912 logs.go:123] Gathering logs for container status ...
	I1205 21:45:58.967613  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:59.006895  357912 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:59.006939  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:59.446512  357912 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:59.446573  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:59.518754  357912 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:59.518807  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:59.533621  357912 logs.go:123] Gathering logs for coredns [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6] ...
	I1205 21:45:59.533656  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:59.569589  357912 logs.go:123] Gathering logs for kube-scheduler [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5] ...
	I1205 21:45:59.569630  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:59.606973  357912 logs.go:123] Gathering logs for kube-proxy [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d] ...
	I1205 21:45:59.607028  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:59.651826  357912 logs.go:123] Gathering logs for kube-controller-manager [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a] ...
	I1205 21:45:59.651862  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:59.712309  357912 logs.go:123] Gathering logs for storage-provisioner [37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa] ...
	I1205 21:45:59.712353  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:58.480831  358357 out.go:235]   - Booting up control plane ...
	I1205 21:45:58.480991  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:45:58.495549  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:45:58.497073  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:45:58.498469  358357 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:45:58.501265  358357 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 21:45:59.112080  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:01.608016  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:58.985164  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:01.485724  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:02.247604  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:46:02.253579  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 200:
	ok
	I1205 21:46:02.254645  357912 api_server.go:141] control plane version: v1.31.2
	I1205 21:46:02.254674  357912 api_server.go:131] duration metric: took 3.892057076s to wait for apiserver health ...
	I1205 21:46:02.254685  357912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:46:02.254718  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:46:02.254784  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:46:02.292102  357912 cri.go:89] found id: "079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:46:02.292133  357912 cri.go:89] found id: ""
	I1205 21:46:02.292143  357912 logs.go:282] 1 containers: [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828]
	I1205 21:46:02.292210  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.297421  357912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:46:02.297522  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:46:02.333140  357912 cri.go:89] found id: "035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:46:02.333172  357912 cri.go:89] found id: ""
	I1205 21:46:02.333184  357912 logs.go:282] 1 containers: [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7]
	I1205 21:46:02.333258  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.337789  357912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:46:02.337870  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:46:02.374302  357912 cri.go:89] found id: "d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:46:02.374332  357912 cri.go:89] found id: ""
	I1205 21:46:02.374344  357912 logs.go:282] 1 containers: [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6]
	I1205 21:46:02.374411  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.378635  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:46:02.378704  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:46:02.415899  357912 cri.go:89] found id: "c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:46:02.415932  357912 cri.go:89] found id: ""
	I1205 21:46:02.415944  357912 logs.go:282] 1 containers: [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5]
	I1205 21:46:02.416010  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.421097  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:46:02.421180  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:46:02.457483  357912 cri.go:89] found id: "963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:46:02.457514  357912 cri.go:89] found id: ""
	I1205 21:46:02.457534  357912 logs.go:282] 1 containers: [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d]
	I1205 21:46:02.457606  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.462215  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:46:02.462307  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:46:02.499576  357912 cri.go:89] found id: "807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:46:02.499603  357912 cri.go:89] found id: ""
	I1205 21:46:02.499612  357912 logs.go:282] 1 containers: [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a]
	I1205 21:46:02.499681  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.504262  357912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:46:02.504341  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:46:02.539612  357912 cri.go:89] found id: ""
	I1205 21:46:02.539649  357912 logs.go:282] 0 containers: []
	W1205 21:46:02.539661  357912 logs.go:284] No container was found matching "kindnet"
	I1205 21:46:02.539668  357912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 21:46:02.539740  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 21:46:02.576436  357912 cri.go:89] found id: "7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:46:02.576464  357912 cri.go:89] found id: "37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:46:02.576468  357912 cri.go:89] found id: ""
	I1205 21:46:02.576477  357912 logs.go:282] 2 containers: [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa]
	I1205 21:46:02.576546  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.580650  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.584677  357912 logs.go:123] Gathering logs for kube-controller-manager [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a] ...
	I1205 21:46:02.584717  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:46:02.638712  357912 logs.go:123] Gathering logs for storage-provisioner [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4] ...
	I1205 21:46:02.638753  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:46:02.677464  357912 logs.go:123] Gathering logs for storage-provisioner [37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa] ...
	I1205 21:46:02.677501  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:46:02.718014  357912 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:46:02.718049  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:46:02.828314  357912 logs.go:123] Gathering logs for kube-apiserver [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828] ...
	I1205 21:46:02.828360  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:46:02.881584  357912 logs.go:123] Gathering logs for etcd [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7] ...
	I1205 21:46:02.881629  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:46:02.928082  357912 logs.go:123] Gathering logs for coredns [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6] ...
	I1205 21:46:02.928120  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:46:02.963962  357912 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:46:02.963997  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:46:03.347451  357912 logs.go:123] Gathering logs for container status ...
	I1205 21:46:03.347501  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:46:03.389942  357912 logs.go:123] Gathering logs for kubelet ...
	I1205 21:46:03.389991  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:46:03.459121  357912 logs.go:123] Gathering logs for dmesg ...
	I1205 21:46:03.459168  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:46:03.480556  357912 logs.go:123] Gathering logs for kube-scheduler [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5] ...
	I1205 21:46:03.480592  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:46:03.519661  357912 logs.go:123] Gathering logs for kube-proxy [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d] ...
	I1205 21:46:03.519699  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:46:06.063263  357912 system_pods.go:59] 8 kube-system pods found
	I1205 21:46:06.063309  357912 system_pods.go:61] "coredns-7c65d6cfc9-mll8z" [fcea0826-1093-43ce-87d0-26fb19447609] Running
	I1205 21:46:06.063317  357912 system_pods.go:61] "etcd-default-k8s-diff-port-751353" [dbade41d-2f53-45d5-aeb6-9b7df2565ef6] Running
	I1205 21:46:06.063327  357912 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751353" [b94221d1-ab02-4406-9f34-156813ddfd4b] Running
	I1205 21:46:06.063334  357912 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751353" [70669ec2-cddf-4ec9-a3d6-ee7cab8cd75c] Running
	I1205 21:46:06.063338  357912 system_pods.go:61] "kube-proxy-b4ws4" [d2620959-e3e4-4575-af26-243207a83495] Running
	I1205 21:46:06.063344  357912 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751353" [4a519b64-0ddd-425c-a8d6-de52e3508f80] Running
	I1205 21:46:06.063352  357912 system_pods.go:61] "metrics-server-6867b74b74-xb867" [6ac4cc31-ed56-44b9-9a83-76296436bc34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:46:06.063358  357912 system_pods.go:61] "storage-provisioner" [aabf9cc9-c416-4db2-97b0-23533dd76c28] Running
	I1205 21:46:06.063369  357912 system_pods.go:74] duration metric: took 3.808675994s to wait for pod list to return data ...
	I1205 21:46:06.063380  357912 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:46:06.066095  357912 default_sa.go:45] found service account: "default"
	I1205 21:46:06.066120  357912 default_sa.go:55] duration metric: took 2.733262ms for default service account to be created ...
	I1205 21:46:06.066128  357912 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:46:06.070476  357912 system_pods.go:86] 8 kube-system pods found
	I1205 21:46:06.070503  357912 system_pods.go:89] "coredns-7c65d6cfc9-mll8z" [fcea0826-1093-43ce-87d0-26fb19447609] Running
	I1205 21:46:06.070509  357912 system_pods.go:89] "etcd-default-k8s-diff-port-751353" [dbade41d-2f53-45d5-aeb6-9b7df2565ef6] Running
	I1205 21:46:06.070513  357912 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-751353" [b94221d1-ab02-4406-9f34-156813ddfd4b] Running
	I1205 21:46:06.070516  357912 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-751353" [70669ec2-cddf-4ec9-a3d6-ee7cab8cd75c] Running
	I1205 21:46:06.070520  357912 system_pods.go:89] "kube-proxy-b4ws4" [d2620959-e3e4-4575-af26-243207a83495] Running
	I1205 21:46:06.070523  357912 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-751353" [4a519b64-0ddd-425c-a8d6-de52e3508f80] Running
	I1205 21:46:06.070531  357912 system_pods.go:89] "metrics-server-6867b74b74-xb867" [6ac4cc31-ed56-44b9-9a83-76296436bc34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:46:06.070536  357912 system_pods.go:89] "storage-provisioner" [aabf9cc9-c416-4db2-97b0-23533dd76c28] Running
	I1205 21:46:06.070544  357912 system_pods.go:126] duration metric: took 4.410448ms to wait for k8s-apps to be running ...
	I1205 21:46:06.070553  357912 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:46:06.070614  357912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:46:06.085740  357912 system_svc.go:56] duration metric: took 15.17952ms WaitForService to wait for kubelet
	I1205 21:46:06.085771  357912 kubeadm.go:582] duration metric: took 4m25.799061755s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:46:06.085796  357912 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:46:06.088851  357912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:46:06.088873  357912 node_conditions.go:123] node cpu capacity is 2
	I1205 21:46:06.088887  357912 node_conditions.go:105] duration metric: took 3.087287ms to run NodePressure ...
	I1205 21:46:06.088900  357912 start.go:241] waiting for startup goroutines ...
	I1205 21:46:06.088906  357912 start.go:246] waiting for cluster config update ...
	I1205 21:46:06.088919  357912 start.go:255] writing updated cluster config ...
	I1205 21:46:06.089253  357912 ssh_runner.go:195] Run: rm -f paused
	I1205 21:46:06.141619  357912 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:46:06.143538  357912 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-751353" cluster and "default" namespace by default
	I1205 21:46:04.108628  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:06.108805  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:03.987070  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:06.484360  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:08.608534  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:11.107516  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:08.485291  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:10.984391  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:13.108040  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:15.607861  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:13.484442  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:15.484501  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:17.478619  357831 pod_ready.go:82] duration metric: took 4m0.00079651s for pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace to be "Ready" ...
	E1205 21:46:17.478648  357831 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 21:46:17.478669  357831 pod_ready.go:39] duration metric: took 4m12.054745084s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:46:17.478700  357831 kubeadm.go:597] duration metric: took 4m55.174067413s to restartPrimaryControlPlane
	W1205 21:46:17.478757  357831 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 21:46:17.478794  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:46:17.608486  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:20.107816  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:22.108413  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:24.608157  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:27.109329  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:29.608127  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:30.101360  357296 pod_ready.go:82] duration metric: took 4m0.000121506s for pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace to be "Ready" ...
	E1205 21:46:30.101395  357296 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 21:46:30.101417  357296 pod_ready.go:39] duration metric: took 4m9.523665884s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:46:30.101449  357296 kubeadm.go:597] duration metric: took 4m18.570527556s to restartPrimaryControlPlane
	W1205 21:46:30.101510  357296 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 21:46:30.101539  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:46:38.501720  358357 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 21:46:38.502250  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:46:38.502440  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:46:43.619373  357831 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.140547336s)
	I1205 21:46:43.619459  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:46:43.641806  357831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:46:43.655964  357831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:46:43.669647  357831 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:46:43.669670  357831 kubeadm.go:157] found existing configuration files:
	
	I1205 21:46:43.669718  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:46:43.681685  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:46:43.681774  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:46:43.700247  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:46:43.718376  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:46:43.718464  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:46:43.736153  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:46:43.746027  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:46:43.746101  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:46:43.756294  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:46:43.765644  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:46:43.765723  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:46:43.776011  357831 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:46:43.821666  357831 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 21:46:43.821773  357831 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:46:43.915091  357831 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:46:43.915226  357831 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:46:43.915356  357831 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 21:46:43.923305  357831 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:46:43.924984  357831 out.go:235]   - Generating certificates and keys ...
	I1205 21:46:43.925071  357831 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:46:43.925133  357831 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:46:43.925211  357831 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:46:43.925298  357831 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:46:43.925410  357831 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:46:43.925490  357831 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:46:43.925585  357831 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:46:43.925687  357831 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:46:43.925806  357831 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:46:43.925915  357831 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:46:43.925978  357831 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:46:43.926051  357831 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:46:44.035421  357831 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:46:44.451260  357831 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 21:46:44.816773  357831 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:46:44.923048  357831 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:46:45.045983  357831 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:46:45.046651  357831 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:46:45.049375  357831 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:46:43.502826  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:46:43.503045  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:46:45.051123  357831 out.go:235]   - Booting up control plane ...
	I1205 21:46:45.051270  357831 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:46:45.051407  357831 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:46:45.051498  357831 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:46:45.069011  357831 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:46:45.075630  357831 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:46:45.075703  357831 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:46:45.207048  357831 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 21:46:45.207215  357831 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 21:46:46.208858  357831 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001818315s
	I1205 21:46:46.208985  357831 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 21:46:50.711424  357831 kubeadm.go:310] [api-check] The API server is healthy after 4.502481614s
	I1205 21:46:50.725080  357831 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 21:46:50.745839  357831 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 21:46:50.774902  357831 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 21:46:50.775169  357831 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-500648 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 21:46:50.795250  357831 kubeadm.go:310] [bootstrap-token] Using token: o2vi7b.yhkmrcpvplzqpha9
	I1205 21:46:50.796742  357831 out.go:235]   - Configuring RBAC rules ...
	I1205 21:46:50.796960  357831 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 21:46:50.804445  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 21:46:50.818218  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 21:46:50.823638  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 21:46:50.827946  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 21:46:50.832291  357831 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 21:46:51.119777  357831 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 21:46:51.563750  357831 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 21:46:52.124884  357831 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 21:46:52.124922  357831 kubeadm.go:310] 
	I1205 21:46:52.125000  357831 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 21:46:52.125010  357831 kubeadm.go:310] 
	I1205 21:46:52.125089  357831 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 21:46:52.125099  357831 kubeadm.go:310] 
	I1205 21:46:52.125132  357831 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 21:46:52.125208  357831 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 21:46:52.125321  357831 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 21:46:52.125343  357831 kubeadm.go:310] 
	I1205 21:46:52.125447  357831 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 21:46:52.125475  357831 kubeadm.go:310] 
	I1205 21:46:52.125547  357831 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 21:46:52.125559  357831 kubeadm.go:310] 
	I1205 21:46:52.125641  357831 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 21:46:52.125734  357831 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 21:46:52.125806  357831 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 21:46:52.125814  357831 kubeadm.go:310] 
	I1205 21:46:52.125887  357831 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 21:46:52.126025  357831 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 21:46:52.126039  357831 kubeadm.go:310] 
	I1205 21:46:52.126132  357831 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token o2vi7b.yhkmrcpvplzqpha9 \
	I1205 21:46:52.126230  357831 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 \
	I1205 21:46:52.126254  357831 kubeadm.go:310] 	--control-plane 
	I1205 21:46:52.126269  357831 kubeadm.go:310] 
	I1205 21:46:52.126406  357831 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 21:46:52.126437  357831 kubeadm.go:310] 
	I1205 21:46:52.126524  357831 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token o2vi7b.yhkmrcpvplzqpha9 \
	I1205 21:46:52.126615  357831 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 
	I1205 21:46:52.127299  357831 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:46:52.127360  357831 cni.go:84] Creating CNI manager for ""
	I1205 21:46:52.127380  357831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:46:52.130084  357831 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:46:52.131504  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:46:52.142489  357831 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 21:46:52.165689  357831 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:46:52.165813  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:52.165817  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-500648 minikube.k8s.io/updated_at=2024_12_05T21_46_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=no-preload-500648 minikube.k8s.io/primary=true
	I1205 21:46:52.194084  357831 ops.go:34] apiserver oom_adj: -16
	I1205 21:46:52.342692  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:52.843802  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:53.503222  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:46:53.503418  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:46:53.342932  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:53.843712  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:54.343785  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:54.843090  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:55.342889  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:55.843250  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:56.343676  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:56.452001  357831 kubeadm.go:1113] duration metric: took 4.286277257s to wait for elevateKubeSystemPrivileges
	I1205 21:46:56.452048  357831 kubeadm.go:394] duration metric: took 5m34.195010212s to StartCluster
	I1205 21:46:56.452076  357831 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:46:56.452204  357831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:46:56.454793  357831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:46:56.455206  357831 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:46:56.455333  357831 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:46:56.455476  357831 addons.go:69] Setting storage-provisioner=true in profile "no-preload-500648"
	I1205 21:46:56.455480  357831 config.go:182] Loaded profile config "no-preload-500648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:46:56.455502  357831 addons.go:234] Setting addon storage-provisioner=true in "no-preload-500648"
	W1205 21:46:56.455514  357831 addons.go:243] addon storage-provisioner should already be in state true
	I1205 21:46:56.455528  357831 addons.go:69] Setting default-storageclass=true in profile "no-preload-500648"
	I1205 21:46:56.455559  357831 host.go:66] Checking if "no-preload-500648" exists ...
	I1205 21:46:56.455544  357831 addons.go:69] Setting metrics-server=true in profile "no-preload-500648"
	I1205 21:46:56.455585  357831 addons.go:234] Setting addon metrics-server=true in "no-preload-500648"
	W1205 21:46:56.455599  357831 addons.go:243] addon metrics-server should already be in state true
	I1205 21:46:56.455646  357831 host.go:66] Checking if "no-preload-500648" exists ...
	I1205 21:46:56.455564  357831 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-500648"
	I1205 21:46:56.456041  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.456085  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.456090  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.456129  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.456139  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.456201  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.456945  357831 out.go:177] * Verifying Kubernetes components...
	I1205 21:46:56.462035  357831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:46:56.474102  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35145
	I1205 21:46:56.474771  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.475414  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.475442  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.475459  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36489
	I1205 21:46:56.475974  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.476137  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.476569  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.476612  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.476693  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.476706  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.477058  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.477252  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.477388  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36469
	I1205 21:46:56.477924  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.478472  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.478498  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.478910  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.479488  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.479537  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.481716  357831 addons.go:234] Setting addon default-storageclass=true in "no-preload-500648"
	W1205 21:46:56.481735  357831 addons.go:243] addon default-storageclass should already be in state true
	I1205 21:46:56.481768  357831 host.go:66] Checking if "no-preload-500648" exists ...
	I1205 21:46:56.482186  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.482241  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.497613  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
	I1205 21:46:56.499026  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.500026  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.500053  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.501992  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.502774  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.503014  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37339
	I1205 21:46:56.503560  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.504199  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.504220  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.504720  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.504930  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.506107  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:46:56.506961  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:46:56.508481  357831 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:46:56.509688  357831 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 21:46:56.428849  357296 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.327265456s)
	I1205 21:46:56.428959  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:46:56.445569  357296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:46:56.458431  357296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:46:56.478171  357296 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:46:56.478202  357296 kubeadm.go:157] found existing configuration files:
	
	I1205 21:46:56.478252  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:46:56.492246  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:46:56.492317  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:46:56.511252  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:46:56.529865  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:46:56.529993  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:46:56.542465  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:46:56.554125  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:46:56.554201  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:46:56.564805  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:46:56.574418  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:46:56.574509  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:46:56.587684  357296 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:46:56.643896  357296 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 21:46:56.643994  357296 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:46:56.758721  357296 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:46:56.758878  357296 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:46:56.759002  357296 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 21:46:56.770017  357296 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:46:56.771897  357296 out.go:235]   - Generating certificates and keys ...
	I1205 21:46:56.772014  357296 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:46:56.772097  357296 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:46:56.772211  357296 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:46:56.772312  357296 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:46:56.772411  357296 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:46:56.772485  357296 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:46:56.772569  357296 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:46:56.772701  357296 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:46:56.772839  357296 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:46:56.772978  357296 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:46:56.773044  357296 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:46:56.773122  357296 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:46:57.097605  357296 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:46:57.252307  357296 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 21:46:56.510816  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41335
	I1205 21:46:56.511503  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.511959  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.511975  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.512788  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.513412  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.513449  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.514695  357831 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:46:56.514710  357831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 21:46:56.514728  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:46:56.515562  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 21:46:56.515580  357831 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 21:46:56.515606  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:46:56.519790  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.520365  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.521033  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:46:56.521059  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.521366  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:46:56.521709  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:46:56.522251  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:46:56.522340  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:46:56.522357  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.522563  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:46:56.523091  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:46:56.523374  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:46:56.523546  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:46:56.523751  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:46:56.535368  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I1205 21:46:56.535890  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.536613  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.536640  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.537046  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.537264  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.539328  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:46:56.539566  357831 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 21:46:56.539582  357831 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 21:46:56.539601  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:46:56.543910  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.544687  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:46:56.544721  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.544779  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:46:56.544991  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:46:56.545101  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:46:56.545227  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:46:56.703959  357831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:46:56.727549  357831 node_ready.go:35] waiting up to 6m0s for node "no-preload-500648" to be "Ready" ...
	I1205 21:46:56.782087  357831 node_ready.go:49] node "no-preload-500648" has status "Ready":"True"
	I1205 21:46:56.782124  357831 node_ready.go:38] duration metric: took 54.531096ms for node "no-preload-500648" to be "Ready" ...
	I1205 21:46:56.782138  357831 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:46:56.826592  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 21:46:56.826630  357831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 21:46:56.828646  357831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 21:46:56.829857  357831 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace to be "Ready" ...
	I1205 21:46:56.866720  357831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:46:56.903318  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 21:46:56.903355  357831 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 21:46:57.007535  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:46:57.007573  357831 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 21:46:57.100723  357831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:46:57.134239  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.134279  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.134710  357831 main.go:141] libmachine: (no-preload-500648) DBG | Closing plugin on server side
	I1205 21:46:57.134711  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.134770  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.134785  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.134793  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.135032  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.135053  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.146695  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.146730  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.147103  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.147154  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.625311  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.625353  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.625696  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.625755  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.625793  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.625805  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.625698  357831 main.go:141] libmachine: (no-preload-500648) DBG | Closing plugin on server side
	I1205 21:46:57.626115  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.626144  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.907526  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.907557  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.907895  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.907911  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.907920  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.907927  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.908170  357831 main.go:141] libmachine: (no-preload-500648) DBG | Closing plugin on server side
	I1205 21:46:57.908202  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.908235  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.908260  357831 addons.go:475] Verifying addon metrics-server=true in "no-preload-500648"
	I1205 21:46:57.909815  357831 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 21:46:57.605825  357296 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:46:57.683035  357296 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:46:57.977494  357296 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:46:57.977852  357296 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:46:57.980442  357296 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:46:57.982293  357296 out.go:235]   - Booting up control plane ...
	I1205 21:46:57.982435  357296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:46:57.982555  357296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:46:57.982745  357296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:46:58.002995  357296 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:46:58.009140  357296 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:46:58.009256  357296 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:46:58.138869  357296 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 21:46:58.139045  357296 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 21:46:58.639981  357296 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.388842ms
	I1205 21:46:58.640142  357296 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 21:46:57.911073  357831 addons.go:510] duration metric: took 1.455746374s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 21:46:58.838170  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:00.337951  357831 pod_ready.go:93] pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:00.337987  357831 pod_ready.go:82] duration metric: took 3.508095495s for pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:00.338002  357831 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:02.345422  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:03.641918  357296 kubeadm.go:310] [api-check] The API server is healthy after 5.001977261s
	I1205 21:47:03.660781  357296 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 21:47:03.675811  357296 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 21:47:03.729810  357296 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 21:47:03.730021  357296 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-425614 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 21:47:03.746963  357296 kubeadm.go:310] [bootstrap-token] Using token: b8c9g8.26tr6ftn8ovs2kwi
	I1205 21:47:03.748213  357296 out.go:235]   - Configuring RBAC rules ...
	I1205 21:47:03.748373  357296 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 21:47:03.755934  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 21:47:03.770479  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 21:47:03.775661  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 21:47:03.783490  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 21:47:03.789562  357296 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 21:47:04.049714  357296 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 21:47:04.486306  357296 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 21:47:05.053561  357296 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 21:47:05.053590  357296 kubeadm.go:310] 
	I1205 21:47:05.053708  357296 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 21:47:05.053738  357296 kubeadm.go:310] 
	I1205 21:47:05.053846  357296 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 21:47:05.053868  357296 kubeadm.go:310] 
	I1205 21:47:05.053915  357296 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 21:47:05.053997  357296 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 21:47:05.054068  357296 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 21:47:05.054078  357296 kubeadm.go:310] 
	I1205 21:47:05.054160  357296 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 21:47:05.054170  357296 kubeadm.go:310] 
	I1205 21:47:05.054239  357296 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 21:47:05.054248  357296 kubeadm.go:310] 
	I1205 21:47:05.054338  357296 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 21:47:05.054449  357296 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 21:47:05.054543  357296 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 21:47:05.054553  357296 kubeadm.go:310] 
	I1205 21:47:05.054660  357296 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 21:47:05.054796  357296 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 21:47:05.054822  357296 kubeadm.go:310] 
	I1205 21:47:05.054933  357296 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token b8c9g8.26tr6ftn8ovs2kwi \
	I1205 21:47:05.055054  357296 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 \
	I1205 21:47:05.055090  357296 kubeadm.go:310] 	--control-plane 
	I1205 21:47:05.055098  357296 kubeadm.go:310] 
	I1205 21:47:05.055194  357296 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 21:47:05.055206  357296 kubeadm.go:310] 
	I1205 21:47:05.055314  357296 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token b8c9g8.26tr6ftn8ovs2kwi \
	I1205 21:47:05.055451  357296 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 
	I1205 21:47:05.056406  357296 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:47:05.056455  357296 cni.go:84] Creating CNI manager for ""
	I1205 21:47:05.056466  357296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:47:05.058934  357296 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:47:05.060223  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:47:05.072177  357296 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
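	Note: the 496-byte 1-k8s.conflist written above is minikube's default bridge CNI configuration and is not reproduced in this log. It is normally a conflist chaining the standard "bridge" plugin (host-local IPAM) with the "portmap" plugin; this description is an assumption based on minikube's bridge CNI template, not on the file contents here. If needed, the file can be read back from the guest, for example:

		minikube ssh -p embed-certs-425614 -- sudo cat /etc/cni/net.d/1-k8s.conflist

	The profile name is taken from the surrounding log lines.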
	I1205 21:47:05.094496  357296 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:47:05.094587  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:05.094625  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-425614 minikube.k8s.io/updated_at=2024_12_05T21_47_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=embed-certs-425614 minikube.k8s.io/primary=true
	I1205 21:47:05.305636  357296 ops.go:34] apiserver oom_adj: -16
	I1205 21:47:05.305777  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:05.806175  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:06.306904  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:06.806069  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:07.306356  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:04.849777  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:07.345961  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:07.847289  357831 pod_ready.go:93] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.847323  357831 pod_ready.go:82] duration metric: took 7.509312906s for pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.847334  357831 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.853980  357831 pod_ready.go:93] pod "etcd-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.854016  357831 pod_ready.go:82] duration metric: took 6.672926ms for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.854030  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.861465  357831 pod_ready.go:93] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.861502  357831 pod_ready.go:82] duration metric: took 7.461726ms for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.861517  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.867007  357831 pod_ready.go:93] pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.867035  357831 pod_ready.go:82] duration metric: took 5.509386ms for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.867048  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-98xqk" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.872882  357831 pod_ready.go:93] pod "kube-proxy-98xqk" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.872917  357831 pod_ready.go:82] duration metric: took 5.859646ms for pod "kube-proxy-98xqk" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.872932  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:08.243619  357831 pod_ready.go:93] pod "kube-scheduler-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:08.243654  357831 pod_ready.go:82] duration metric: took 370.71203ms for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:08.243666  357831 pod_ready.go:39] duration metric: took 11.461510993s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:47:08.243744  357831 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:47:08.243826  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:47:08.260473  357831 api_server.go:72] duration metric: took 11.805209892s to wait for apiserver process to appear ...
	I1205 21:47:08.260511  357831 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:47:08.260538  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:47:08.264975  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 200:
	ok
	I1205 21:47:08.266178  357831 api_server.go:141] control plane version: v1.31.2
	I1205 21:47:08.266206  357831 api_server.go:131] duration metric: took 5.687994ms to wait for apiserver health ...
	I1205 21:47:08.266214  357831 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:47:08.446775  357831 system_pods.go:59] 9 kube-system pods found
	I1205 21:47:08.446811  357831 system_pods.go:61] "coredns-7c65d6cfc9-6gw87" [5551f12d-28e2-4abc-aa12-df5e94a50df9] Running
	I1205 21:47:08.446817  357831 system_pods.go:61] "coredns-7c65d6cfc9-tmd2t" [e3e98611-66c3-4647-8870-bff5ff6ec596] Running
	I1205 21:47:08.446821  357831 system_pods.go:61] "etcd-no-preload-500648" [74521d40-5021-4ced-b38c-526c57f76ef1] Running
	I1205 21:47:08.446824  357831 system_pods.go:61] "kube-apiserver-no-preload-500648" [c145b867-1112-495e-bbe4-a95582f41190] Running
	I1205 21:47:08.446828  357831 system_pods.go:61] "kube-controller-manager-no-preload-500648" [534c1c28-2a5c-411d-8d26-1636d92ed794] Running
	I1205 21:47:08.446831  357831 system_pods.go:61] "kube-proxy-98xqk" [4b383ba3-46c2-45df-9035-270593e44817] Running
	I1205 21:47:08.446834  357831 system_pods.go:61] "kube-scheduler-no-preload-500648" [7d088cd2-8ba3-4b3b-ab99-233ff13e2710] Running
	I1205 21:47:08.446841  357831 system_pods.go:61] "metrics-server-6867b74b74-ftmzl" [c541d531-1622-4528-af4c-f6147f47e8f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:08.446881  357831 system_pods.go:61] "storage-provisioner" [62bd3876-3f92-4cc1-9e07-860628e8a746] Running
	I1205 21:47:08.446887  357831 system_pods.go:74] duration metric: took 180.667886ms to wait for pod list to return data ...
	I1205 21:47:08.446895  357831 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:47:08.643352  357831 default_sa.go:45] found service account: "default"
	I1205 21:47:08.643389  357831 default_sa.go:55] duration metric: took 196.485646ms for default service account to be created ...
	I1205 21:47:08.643405  357831 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:47:08.847094  357831 system_pods.go:86] 9 kube-system pods found
	I1205 21:47:08.847129  357831 system_pods.go:89] "coredns-7c65d6cfc9-6gw87" [5551f12d-28e2-4abc-aa12-df5e94a50df9] Running
	I1205 21:47:08.847136  357831 system_pods.go:89] "coredns-7c65d6cfc9-tmd2t" [e3e98611-66c3-4647-8870-bff5ff6ec596] Running
	I1205 21:47:08.847140  357831 system_pods.go:89] "etcd-no-preload-500648" [74521d40-5021-4ced-b38c-526c57f76ef1] Running
	I1205 21:47:08.847144  357831 system_pods.go:89] "kube-apiserver-no-preload-500648" [c145b867-1112-495e-bbe4-a95582f41190] Running
	I1205 21:47:08.847147  357831 system_pods.go:89] "kube-controller-manager-no-preload-500648" [534c1c28-2a5c-411d-8d26-1636d92ed794] Running
	I1205 21:47:08.847150  357831 system_pods.go:89] "kube-proxy-98xqk" [4b383ba3-46c2-45df-9035-270593e44817] Running
	I1205 21:47:08.847153  357831 system_pods.go:89] "kube-scheduler-no-preload-500648" [7d088cd2-8ba3-4b3b-ab99-233ff13e2710] Running
	I1205 21:47:08.847162  357831 system_pods.go:89] "metrics-server-6867b74b74-ftmzl" [c541d531-1622-4528-af4c-f6147f47e8f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:08.847168  357831 system_pods.go:89] "storage-provisioner" [62bd3876-3f92-4cc1-9e07-860628e8a746] Running
	I1205 21:47:08.847181  357831 system_pods.go:126] duration metric: took 203.767291ms to wait for k8s-apps to be running ...
	I1205 21:47:08.847195  357831 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:47:08.847250  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:47:08.862597  357831 system_svc.go:56] duration metric: took 15.382518ms WaitForService to wait for kubelet
	I1205 21:47:08.862633  357831 kubeadm.go:582] duration metric: took 12.407380073s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:47:08.862656  357831 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:47:09.043731  357831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:47:09.043757  357831 node_conditions.go:123] node cpu capacity is 2
	I1205 21:47:09.043771  357831 node_conditions.go:105] duration metric: took 181.109771ms to run NodePressure ...
	I1205 21:47:09.043784  357831 start.go:241] waiting for startup goroutines ...
	I1205 21:47:09.043791  357831 start.go:246] waiting for cluster config update ...
	I1205 21:47:09.043800  357831 start.go:255] writing updated cluster config ...
	I1205 21:47:09.044059  357831 ssh_runner.go:195] Run: rm -f paused
	I1205 21:47:09.097126  357831 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:47:09.098929  357831 out.go:177] * Done! kubectl is now configured to use "no-preload-500648" cluster and "default" namespace by default
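	Note: this step is outside the automated test flow, but once the profile is reported ready the addons enabled above can be sanity-checked from the host with ordinary kubectl commands, for example:

		kubectl --context no-preload-500648 -n kube-system get deploy metrics-server
		kubectl --context no-preload-500648 get storageclass

	The context name comes from the "Done!" line above; the metrics-server deployment name is inferred from the pod names earlier in the log.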
	I1205 21:47:07.806545  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:08.306666  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:08.806027  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:09.306632  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:09.463654  357296 kubeadm.go:1113] duration metric: took 4.369155567s to wait for elevateKubeSystemPrivileges
	I1205 21:47:09.463693  357296 kubeadm.go:394] duration metric: took 4m57.985307568s to StartCluster
	I1205 21:47:09.463727  357296 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:47:09.463823  357296 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:47:09.465989  357296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:47:09.466324  357296 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:47:09.466538  357296 config.go:182] Loaded profile config "embed-certs-425614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:47:09.466462  357296 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:47:09.466593  357296 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-425614"
	I1205 21:47:09.466605  357296 addons.go:69] Setting default-storageclass=true in profile "embed-certs-425614"
	I1205 21:47:09.466623  357296 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-425614"
	I1205 21:47:09.466625  357296 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-425614"
	W1205 21:47:09.466632  357296 addons.go:243] addon storage-provisioner should already be in state true
	I1205 21:47:09.466670  357296 host.go:66] Checking if "embed-certs-425614" exists ...
	I1205 21:47:09.466598  357296 addons.go:69] Setting metrics-server=true in profile "embed-certs-425614"
	I1205 21:47:09.466700  357296 addons.go:234] Setting addon metrics-server=true in "embed-certs-425614"
	W1205 21:47:09.466713  357296 addons.go:243] addon metrics-server should already be in state true
	I1205 21:47:09.466754  357296 host.go:66] Checking if "embed-certs-425614" exists ...
	I1205 21:47:09.467117  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.467136  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.467168  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.467169  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.467193  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.467287  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.468249  357296 out.go:177] * Verifying Kubernetes components...
	I1205 21:47:09.471163  357296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:47:09.485298  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36079
	I1205 21:47:09.485497  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39781
	I1205 21:47:09.485948  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.486029  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.486534  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.486563  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.486657  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.486685  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.486742  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
	I1205 21:47:09.486978  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.487032  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.487232  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.487236  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.487624  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.487674  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.487789  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.487833  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.488214  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.488851  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.488896  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.491055  357296 addons.go:234] Setting addon default-storageclass=true in "embed-certs-425614"
	W1205 21:47:09.491080  357296 addons.go:243] addon default-storageclass should already be in state true
	I1205 21:47:09.491112  357296 host.go:66] Checking if "embed-certs-425614" exists ...
	I1205 21:47:09.491489  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.491536  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.505783  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42923
	I1205 21:47:09.506685  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.507389  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.507418  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.507849  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.508072  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.509039  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44837
	I1205 21:47:09.509662  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.510051  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:47:09.510539  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.510554  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.510945  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.511175  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.512088  357296 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 21:47:09.513011  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:47:09.513375  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 21:47:09.513394  357296 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 21:47:09.513411  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:47:09.514693  357296 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:47:09.516172  357296 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:47:09.516192  357296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 21:47:09.516216  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:47:09.516960  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.517462  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:47:09.517489  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.517621  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:47:09.517830  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45697
	I1205 21:47:09.518205  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:47:09.518478  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.519298  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.519323  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.519342  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:47:09.519547  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:47:09.520304  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.521019  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.521625  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.521698  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.522476  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:47:09.522492  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.522707  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:47:09.522891  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:47:09.523193  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:47:09.523744  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:47:09.540654  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41511
	I1205 21:47:09.541226  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.541763  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.541790  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.542269  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.542512  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.544396  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:47:09.544676  357296 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 21:47:09.544693  357296 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 21:47:09.544715  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:47:09.548238  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.548523  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:47:09.548562  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.548702  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:47:09.548931  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:47:09.549113  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:47:09.549291  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:47:09.668547  357296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:47:09.687925  357296 node_ready.go:35] waiting up to 6m0s for node "embed-certs-425614" to be "Ready" ...
	I1205 21:47:09.697641  357296 node_ready.go:49] node "embed-certs-425614" has status "Ready":"True"
	I1205 21:47:09.697666  357296 node_ready.go:38] duration metric: took 9.705064ms for node "embed-certs-425614" to be "Ready" ...
	I1205 21:47:09.697675  357296 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:47:09.704768  357296 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7sjzc" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:09.753311  357296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:47:09.793855  357296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 21:47:09.799918  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 21:47:09.799943  357296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 21:47:09.845109  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 21:47:09.845140  357296 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 21:47:09.910753  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:47:09.910784  357296 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 21:47:09.965476  357296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:47:10.269090  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269126  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.269096  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269235  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.269576  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.269640  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.269641  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.269620  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.269587  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.269745  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.269758  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269770  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.269664  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269860  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.270030  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.270047  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.270058  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.270064  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.270071  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.301524  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.301550  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.301895  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.301936  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.926349  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.926377  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.926716  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.926741  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.926752  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.926761  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.926768  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.927106  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.927155  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.927166  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.927180  357296 addons.go:475] Verifying addon metrics-server=true in "embed-certs-425614"
	I1205 21:47:10.929085  357296 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1205 21:47:10.930576  357296 addons.go:510] duration metric: took 1.464128267s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1205 21:47:11.713166  357296 pod_ready.go:93] pod "coredns-7c65d6cfc9-7sjzc" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:11.713198  357296 pod_ready.go:82] duration metric: took 2.008396953s for pod "coredns-7c65d6cfc9-7sjzc" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:11.713211  357296 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:13.503828  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:47:13.504090  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:47:13.720235  357296 pod_ready.go:103] pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:15.220057  357296 pod_ready.go:93] pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:15.220088  357296 pod_ready.go:82] duration metric: took 3.506868256s for pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.220102  357296 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.225450  357296 pod_ready.go:93] pod "etcd-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:15.225477  357296 pod_ready.go:82] duration metric: took 5.36753ms for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.225487  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.231162  357296 pod_ready.go:93] pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:15.231191  357296 pod_ready.go:82] duration metric: took 5.697176ms for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.231203  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.739452  357296 pod_ready.go:93] pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:16.739480  357296 pod_ready.go:82] duration metric: took 1.508268597s for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.739490  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k2zgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.745046  357296 pod_ready.go:93] pod "kube-proxy-k2zgx" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:16.745069  357296 pod_ready.go:82] duration metric: took 5.572779ms for pod "kube-proxy-k2zgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.745077  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:18.752726  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:19.252349  357296 pod_ready.go:93] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:19.252381  357296 pod_ready.go:82] duration metric: took 2.507297045s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:19.252391  357296 pod_ready.go:39] duration metric: took 9.554704391s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:47:19.252414  357296 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:47:19.252484  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:47:19.271589  357296 api_server.go:72] duration metric: took 9.805214037s to wait for apiserver process to appear ...
	I1205 21:47:19.271628  357296 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:47:19.271659  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:47:19.276411  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 200:
	ok
	I1205 21:47:19.277872  357296 api_server.go:141] control plane version: v1.31.2
	I1205 21:47:19.277926  357296 api_server.go:131] duration metric: took 6.2875ms to wait for apiserver health ...
	I1205 21:47:19.277941  357296 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:47:19.283899  357296 system_pods.go:59] 9 kube-system pods found
	I1205 21:47:19.283931  357296 system_pods.go:61] "coredns-7c65d6cfc9-7sjzc" [9688302a-e62f-46e6-8182-4639deb5ac5a] Running
	I1205 21:47:19.283937  357296 system_pods.go:61] "coredns-7c65d6cfc9-qfwx8" [d6411440-5d63-4ea4-b1ba-58337dd6bb10] Running
	I1205 21:47:19.283940  357296 system_pods.go:61] "etcd-embed-certs-425614" [2f0ed9d7-d48b-4d68-96bb-5e3f6de80967] Running
	I1205 21:47:19.283944  357296 system_pods.go:61] "kube-apiserver-embed-certs-425614" [86a3b6ce-6b70-4af9-bf4a-2615e7a45c3f] Running
	I1205 21:47:19.283947  357296 system_pods.go:61] "kube-controller-manager-embed-certs-425614" [589710e5-a8e3-48ed-a57a-1fbf0219359a] Running
	I1205 21:47:19.283952  357296 system_pods.go:61] "kube-proxy-k2zgx" [8e5c4695-0631-486d-9f2b-3529f6e808e9] Running
	I1205 21:47:19.283955  357296 system_pods.go:61] "kube-scheduler-embed-certs-425614" [dec1c4cb-9e21-42f0-9e03-0651fdfa35e9] Running
	I1205 21:47:19.283962  357296 system_pods.go:61] "metrics-server-6867b74b74-hghhs" [bc00b855-1cc8-45a1-92cb-b459ef0b40eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:19.283968  357296 system_pods.go:61] "storage-provisioner" [76565dbe-57b0-4d39-abb0-ca6787cd3740] Running
	I1205 21:47:19.283979  357296 system_pods.go:74] duration metric: took 6.030697ms to wait for pod list to return data ...
	I1205 21:47:19.283989  357296 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:47:19.287433  357296 default_sa.go:45] found service account: "default"
	I1205 21:47:19.287469  357296 default_sa.go:55] duration metric: took 3.461011ms for default service account to be created ...
	I1205 21:47:19.287482  357296 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:47:19.420448  357296 system_pods.go:86] 9 kube-system pods found
	I1205 21:47:19.420493  357296 system_pods.go:89] "coredns-7c65d6cfc9-7sjzc" [9688302a-e62f-46e6-8182-4639deb5ac5a] Running
	I1205 21:47:19.420503  357296 system_pods.go:89] "coredns-7c65d6cfc9-qfwx8" [d6411440-5d63-4ea4-b1ba-58337dd6bb10] Running
	I1205 21:47:19.420510  357296 system_pods.go:89] "etcd-embed-certs-425614" [2f0ed9d7-d48b-4d68-96bb-5e3f6de80967] Running
	I1205 21:47:19.420516  357296 system_pods.go:89] "kube-apiserver-embed-certs-425614" [86a3b6ce-6b70-4af9-bf4a-2615e7a45c3f] Running
	I1205 21:47:19.420531  357296 system_pods.go:89] "kube-controller-manager-embed-certs-425614" [589710e5-a8e3-48ed-a57a-1fbf0219359a] Running
	I1205 21:47:19.420536  357296 system_pods.go:89] "kube-proxy-k2zgx" [8e5c4695-0631-486d-9f2b-3529f6e808e9] Running
	I1205 21:47:19.420542  357296 system_pods.go:89] "kube-scheduler-embed-certs-425614" [dec1c4cb-9e21-42f0-9e03-0651fdfa35e9] Running
	I1205 21:47:19.420551  357296 system_pods.go:89] "metrics-server-6867b74b74-hghhs" [bc00b855-1cc8-45a1-92cb-b459ef0b40eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:19.420560  357296 system_pods.go:89] "storage-provisioner" [76565dbe-57b0-4d39-abb0-ca6787cd3740] Running
	I1205 21:47:19.420570  357296 system_pods.go:126] duration metric: took 133.080361ms to wait for k8s-apps to be running ...
	I1205 21:47:19.420581  357296 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:47:19.420640  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:47:19.436855  357296 system_svc.go:56] duration metric: took 16.264247ms WaitForService to wait for kubelet
	I1205 21:47:19.436889  357296 kubeadm.go:582] duration metric: took 9.970523712s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:47:19.436913  357296 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:47:19.617690  357296 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:47:19.617724  357296 node_conditions.go:123] node cpu capacity is 2
	I1205 21:47:19.617737  357296 node_conditions.go:105] duration metric: took 180.817811ms to run NodePressure ...
	I1205 21:47:19.617753  357296 start.go:241] waiting for startup goroutines ...
	I1205 21:47:19.617763  357296 start.go:246] waiting for cluster config update ...
	I1205 21:47:19.617782  357296 start.go:255] writing updated cluster config ...
	I1205 21:47:19.618105  357296 ssh_runner.go:195] Run: rm -f paused
	I1205 21:47:19.670657  357296 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:47:19.672596  357296 out.go:177] * Done! kubectl is now configured to use "embed-certs-425614" cluster and "default" namespace by default
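The readiness checks above (system pods Running, default service account present, kubelet active, node capacity/conditions) can be reproduced by hand against this cluster; a minimal sketch using only names taken from the log:

  kubectl --context embed-certs-425614 get pods -n kube-system
  kubectl --context embed-certs-425614 get serviceaccount default
  minikube ssh -p embed-certs-425614 "sudo systemctl is-active kubelet"
  kubectl --context embed-certs-425614 describe node embed-certs-425614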
	I1205 21:47:53.504952  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:47:53.505292  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:47:53.505331  358357 kubeadm.go:310] 
	I1205 21:47:53.505381  358357 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 21:47:53.505424  358357 kubeadm.go:310] 		timed out waiting for the condition
	I1205 21:47:53.505431  358357 kubeadm.go:310] 
	I1205 21:47:53.505493  358357 kubeadm.go:310] 	This error is likely caused by:
	I1205 21:47:53.505540  358357 kubeadm.go:310] 		- The kubelet is not running
	I1205 21:47:53.505687  358357 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 21:47:53.505696  358357 kubeadm.go:310] 
	I1205 21:47:53.505840  358357 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 21:47:53.505918  358357 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 21:47:53.505969  358357 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 21:47:53.505978  358357 kubeadm.go:310] 
	I1205 21:47:53.506113  358357 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 21:47:53.506224  358357 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 21:47:53.506234  358357 kubeadm.go:310] 
	I1205 21:47:53.506378  358357 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 21:47:53.506488  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 21:47:53.506579  358357 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 21:47:53.506669  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 21:47:53.506680  358357 kubeadm.go:310] 
	I1205 21:47:53.507133  358357 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:47:53.507293  358357 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 21:47:53.507399  358357 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1205 21:47:53.507583  358357 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
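The probe that keeps failing above is the kubelet's local healthz endpoint, and the preflight warning notes that the kubelet service is not enabled; both can be checked directly on the node before the retry. A sketch, with the URL and enable command taken from the messages above:

  sudo systemctl enable kubelet.service
  sudo systemctl status kubelet
  curl -sSL http://localhost:10248/healthz
  sudo journalctl -xeu kubelet | tail -n 50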
	
	I1205 21:47:53.507635  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:47:58.918917  358357 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.411249531s)
	I1205 21:47:58.919047  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:47:58.933824  358357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:47:58.943937  358357 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:47:58.943961  358357 kubeadm.go:157] found existing configuration files:
	
	I1205 21:47:58.944019  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:47:58.953302  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:47:58.953376  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:47:58.963401  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:47:58.973241  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:47:58.973342  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:47:58.982980  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:47:58.992301  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:47:58.992376  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:47:59.002794  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:47:59.012679  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:47:59.012749  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
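The stale-config cleanup above follows one pattern per file: if a kubeconfig under /etc/kubernetes does not reference https://control-plane.minikube.internal:8443, it is removed before kubeadm init runs again. A rough shell equivalent of those four Run: invocations (a sketch, not minikube's actual code):

  for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
    sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f || sudo rm -f /etc/kubernetes/$f
  done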
	I1205 21:47:59.023775  358357 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:47:59.094520  358357 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 21:47:59.094668  358357 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:47:59.233248  358357 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:47:59.233420  358357 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:47:59.233569  358357 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 21:47:59.418344  358357 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:47:59.420333  358357 out.go:235]   - Generating certificates and keys ...
	I1205 21:47:59.420467  358357 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:47:59.420553  358357 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:47:59.422458  358357 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:47:59.422606  358357 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:47:59.422717  358357 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:47:59.422802  358357 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:47:59.422889  358357 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:47:59.422998  358357 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:47:59.423099  358357 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:47:59.423222  358357 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:47:59.423283  358357 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:47:59.423376  358357 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:47:59.599862  358357 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:47:59.763783  358357 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:47:59.854070  358357 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:48:00.213384  358357 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:48:00.228512  358357 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:48:00.229454  358357 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:48:00.229505  358357 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:48:00.369826  358357 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:48:00.371919  358357 out.go:235]   - Booting up control plane ...
	I1205 21:48:00.372059  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:48:00.382814  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:48:00.384284  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:48:00.385894  358357 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:48:00.388267  358357 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 21:48:40.389474  358357 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 21:48:40.389611  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:48:40.389883  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:48:45.390223  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:48:45.390529  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:48:55.390550  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:48:55.390784  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:49:15.391410  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:49:15.391608  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:49:55.392061  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:49:55.392321  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:49:55.392332  358357 kubeadm.go:310] 
	I1205 21:49:55.392403  358357 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 21:49:55.392464  358357 kubeadm.go:310] 		timed out waiting for the condition
	I1205 21:49:55.392485  358357 kubeadm.go:310] 
	I1205 21:49:55.392538  358357 kubeadm.go:310] 	This error is likely caused by:
	I1205 21:49:55.392587  358357 kubeadm.go:310] 		- The kubelet is not running
	I1205 21:49:55.392729  358357 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 21:49:55.392761  358357 kubeadm.go:310] 
	I1205 21:49:55.392882  358357 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 21:49:55.392933  358357 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 21:49:55.393025  358357 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 21:49:55.393057  358357 kubeadm.go:310] 
	I1205 21:49:55.393186  358357 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 21:49:55.393293  358357 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 21:49:55.393303  358357 kubeadm.go:310] 
	I1205 21:49:55.393453  358357 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 21:49:55.393602  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 21:49:55.393728  358357 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 21:49:55.393827  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 21:49:55.393841  358357 kubeadm.go:310] 
	I1205 21:49:55.394194  358357 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:49:55.394317  358357 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 21:49:55.394473  358357 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1205 21:49:55.394527  358357 kubeadm.go:394] duration metric: took 8m1.54013905s to StartCluster
	I1205 21:49:55.394598  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:49:55.394662  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:49:55.433172  358357 cri.go:89] found id: ""
	I1205 21:49:55.433203  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.433212  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:49:55.433219  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:49:55.433279  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:49:55.468595  358357 cri.go:89] found id: ""
	I1205 21:49:55.468631  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.468644  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:49:55.468652  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:49:55.468747  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:49:55.505657  358357 cri.go:89] found id: ""
	I1205 21:49:55.505692  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.505701  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:49:55.505709  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:49:55.505776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:49:55.542189  358357 cri.go:89] found id: ""
	I1205 21:49:55.542221  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.542230  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:49:55.542236  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:49:55.542303  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:49:55.575752  358357 cri.go:89] found id: ""
	I1205 21:49:55.575796  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.575810  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:49:55.575818  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:49:55.575878  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:49:55.611845  358357 cri.go:89] found id: ""
	I1205 21:49:55.611884  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.611899  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:49:55.611912  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:49:55.611999  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:49:55.650475  358357 cri.go:89] found id: ""
	I1205 21:49:55.650511  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.650524  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:49:55.650533  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:49:55.650605  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:49:55.684770  358357 cri.go:89] found id: ""
	I1205 21:49:55.684801  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.684811  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:49:55.684823  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:49:55.684839  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:49:55.752292  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:49:55.752331  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:49:55.752351  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:49:55.869601  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:49:55.869647  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:49:55.909724  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:49:55.909761  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:49:55.959825  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:49:55.959865  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
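For reference, the log-gathering steps above map to these node-side commands, taken directly from the Run: lines:

  sudo journalctl -u kubelet -n 400
  sudo journalctl -u crio -n 400
  sudo crictl ps -a
  sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400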
	W1205 21:49:55.973692  358357 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1205 21:49:55.973759  358357 out.go:270] * 
	W1205 21:49:55.973866  358357 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 21:49:55.973884  358357 out.go:270] * 
	W1205 21:49:55.974814  358357 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 21:49:55.977939  358357 out.go:201] 
	W1205 21:49:55.979226  358357 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 21:49:55.979261  358357 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 21:49:55.979285  358357 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 21:49:55.980590  358357 out.go:201] 
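Acting on the suggestion above, a retry that sets the kubelet cgroup driver explicitly would look roughly like this; the failing cluster's profile name is not shown in this excerpt, so <profile> is a placeholder:

  minikube start -p <profile> --container-runtime=crio --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

The kubelet journal referenced in the suggestion can then be inspected with:

  minikube ssh -p <profile> "sudo journalctl -xeu kubelet"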
	
	
	==> CRI-O <==
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.826795444Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435781826771948,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=000ad7e7-4bcd-47ad-909c-a8190fe0395c name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.827224496Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6d52da7d-57ab-4704-8c19-37ac6fd8b35b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.827295182Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6d52da7d-57ab-4704-8c19-37ac6fd8b35b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.827586741Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb7935ff19768951bf15ec6f7ae569dddffdb88edb10c621ccb419c29779746f,PodSandboxId:d5e243e965acf7a4ec20ab5886c047333bcc92bec711ca3b53058975b60b584a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733435231080016079,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76565dbe-57b0-4d39-abb0-ca6787cd3740,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f3be37216209b507caf8f02909aac2eb33c0cd7051f9798c9e7d76a2a3e10c,PodSandboxId:fafdcfea82d69c87f7b4059293b70580e2f48abc77da003eb4b39ccddb3e9abf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435231044996749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qfwx8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6411440-5d63-4ea4-b1ba-58337dd6bb10,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78e33cb0841af14ba2bbaf16ba26cb9d3ecf6825955a4816313ab7daa623c61e,PodSandboxId:0be1e5092b64f3ae878c939dd0ff2f5a4bf79a881ed9a1087933d16f29dc4fbf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733435230468925238,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2zgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5
c4695-0631-486d-9f2b-3529f6e808e9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4922644ed14bab79471b547e8a1e0ba26c1c9beacab332b7e96cfde4145c1d0e,PodSandboxId:7ce18d6bd83c99d48133355c667e370c17e8cf84fbc239a37dbdff9d242a1a05,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435230865879468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7sjzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9688302a-e62f-46e6-8182-4639deb5ac
5a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25bc947c71adb96d313f34316890511c690419a686ced38b9be6cd33028e5b1f,PodSandboxId:aa4d76d81862e60315551edd830fac5517fcb517eb1766fb0e1532e5880ab882,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733435219199398936,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e653d90c677de6c4d7ba5653b9ccf764,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90f378c3660c9d99fe421beb7a403fcb8b709aa8b93115e17a977a31d0423705,PodSandboxId:4a099a777b0a13e1950c39f9e2ae6f2ef4fe07e112a310911f12c8951d0d4ab3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733435219164879792,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f5ae899a6b1660ab9bafc72059c48b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0af9adef57f334125b35cd694b87a1fe5e76704564e10bef0b6dc9d19525d4,PodSandboxId:18868d3172717f134a4a286b9312fb40438d1582fe8db408417e15aaa0de99c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733435219133900320,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a43b696cbdc0d06226089f47a7f1de,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0105f87b7ed061b072823f148117f5b3ca9b284b1a9815c2d5cf833cc959fffd,PodSandboxId:e9e8a7a7ebd9a970317da541cf4dca93da305a480ee96c825ac00eb4c5626323,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733435219137883676,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b7340b20b45b0ab01b5a6dc7d16505,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72477542a2b887669bbc8e9749525c5bf5e67ff8bd20e9cabfdc38818d16722c,PodSandboxId:57665c1ccb34c9f37cfcfdae1f0fae47cd7ca1b2214b1b66450d69dbad8a9f89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733434934000246035,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a43b696cbdc0d06226089f47a7f1de,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6d52da7d-57ab-4704-8c19-37ac6fd8b35b name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.866665669Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=faf78500-7d59-4a6f-a152-67ac74a95177 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.866796506Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=faf78500-7d59-4a6f-a152-67ac74a95177 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.868175354Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d1a2ecd4-65bc-421f-9642-7bba79cb48ed name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.868668389Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435781868640897,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d1a2ecd4-65bc-421f-9642-7bba79cb48ed name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.869321872Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2a7a8197-0d80-405c-a846-2961a7a88884 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.869401842Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2a7a8197-0d80-405c-a846-2961a7a88884 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.869722463Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb7935ff19768951bf15ec6f7ae569dddffdb88edb10c621ccb419c29779746f,PodSandboxId:d5e243e965acf7a4ec20ab5886c047333bcc92bec711ca3b53058975b60b584a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733435231080016079,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76565dbe-57b0-4d39-abb0-ca6787cd3740,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f3be37216209b507caf8f02909aac2eb33c0cd7051f9798c9e7d76a2a3e10c,PodSandboxId:fafdcfea82d69c87f7b4059293b70580e2f48abc77da003eb4b39ccddb3e9abf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435231044996749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qfwx8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6411440-5d63-4ea4-b1ba-58337dd6bb10,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78e33cb0841af14ba2bbaf16ba26cb9d3ecf6825955a4816313ab7daa623c61e,PodSandboxId:0be1e5092b64f3ae878c939dd0ff2f5a4bf79a881ed9a1087933d16f29dc4fbf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733435230468925238,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2zgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5
c4695-0631-486d-9f2b-3529f6e808e9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4922644ed14bab79471b547e8a1e0ba26c1c9beacab332b7e96cfde4145c1d0e,PodSandboxId:7ce18d6bd83c99d48133355c667e370c17e8cf84fbc239a37dbdff9d242a1a05,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435230865879468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7sjzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9688302a-e62f-46e6-8182-4639deb5ac
5a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25bc947c71adb96d313f34316890511c690419a686ced38b9be6cd33028e5b1f,PodSandboxId:aa4d76d81862e60315551edd830fac5517fcb517eb1766fb0e1532e5880ab882,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733435219199398936,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e653d90c677de6c4d7ba5653b9ccf764,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90f378c3660c9d99fe421beb7a403fcb8b709aa8b93115e17a977a31d0423705,PodSandboxId:4a099a777b0a13e1950c39f9e2ae6f2ef4fe07e112a310911f12c8951d0d4ab3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733435219164879792,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f5ae899a6b1660ab9bafc72059c48b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0af9adef57f334125b35cd694b87a1fe5e76704564e10bef0b6dc9d19525d4,PodSandboxId:18868d3172717f134a4a286b9312fb40438d1582fe8db408417e15aaa0de99c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733435219133900320,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a43b696cbdc0d06226089f47a7f1de,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0105f87b7ed061b072823f148117f5b3ca9b284b1a9815c2d5cf833cc959fffd,PodSandboxId:e9e8a7a7ebd9a970317da541cf4dca93da305a480ee96c825ac00eb4c5626323,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733435219137883676,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b7340b20b45b0ab01b5a6dc7d16505,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72477542a2b887669bbc8e9749525c5bf5e67ff8bd20e9cabfdc38818d16722c,PodSandboxId:57665c1ccb34c9f37cfcfdae1f0fae47cd7ca1b2214b1b66450d69dbad8a9f89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733434934000246035,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a43b696cbdc0d06226089f47a7f1de,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2a7a8197-0d80-405c-a846-2961a7a88884 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.914624969Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=58d7db5e-87e4-4e40-8995-559bca6a7b72 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.914737297Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=58d7db5e-87e4-4e40-8995-559bca6a7b72 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.916128087Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a54925c7-30be-408e-9c9b-04c03c5d2bad name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.916672796Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435781916646774,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a54925c7-30be-408e-9c9b-04c03c5d2bad name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.917462924Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f34a19d0-0511-45f4-aad2-3e8981bb0a06 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.917647297Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f34a19d0-0511-45f4-aad2-3e8981bb0a06 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.917954095Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb7935ff19768951bf15ec6f7ae569dddffdb88edb10c621ccb419c29779746f,PodSandboxId:d5e243e965acf7a4ec20ab5886c047333bcc92bec711ca3b53058975b60b584a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733435231080016079,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76565dbe-57b0-4d39-abb0-ca6787cd3740,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f3be37216209b507caf8f02909aac2eb33c0cd7051f9798c9e7d76a2a3e10c,PodSandboxId:fafdcfea82d69c87f7b4059293b70580e2f48abc77da003eb4b39ccddb3e9abf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435231044996749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qfwx8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6411440-5d63-4ea4-b1ba-58337dd6bb10,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78e33cb0841af14ba2bbaf16ba26cb9d3ecf6825955a4816313ab7daa623c61e,PodSandboxId:0be1e5092b64f3ae878c939dd0ff2f5a4bf79a881ed9a1087933d16f29dc4fbf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733435230468925238,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2zgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5
c4695-0631-486d-9f2b-3529f6e808e9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4922644ed14bab79471b547e8a1e0ba26c1c9beacab332b7e96cfde4145c1d0e,PodSandboxId:7ce18d6bd83c99d48133355c667e370c17e8cf84fbc239a37dbdff9d242a1a05,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435230865879468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7sjzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9688302a-e62f-46e6-8182-4639deb5ac
5a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25bc947c71adb96d313f34316890511c690419a686ced38b9be6cd33028e5b1f,PodSandboxId:aa4d76d81862e60315551edd830fac5517fcb517eb1766fb0e1532e5880ab882,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733435219199398936,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e653d90c677de6c4d7ba5653b9ccf764,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90f378c3660c9d99fe421beb7a403fcb8b709aa8b93115e17a977a31d0423705,PodSandboxId:4a099a777b0a13e1950c39f9e2ae6f2ef4fe07e112a310911f12c8951d0d4ab3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733435219164879792,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f5ae899a6b1660ab9bafc72059c48b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0af9adef57f334125b35cd694b87a1fe5e76704564e10bef0b6dc9d19525d4,PodSandboxId:18868d3172717f134a4a286b9312fb40438d1582fe8db408417e15aaa0de99c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733435219133900320,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a43b696cbdc0d06226089f47a7f1de,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0105f87b7ed061b072823f148117f5b3ca9b284b1a9815c2d5cf833cc959fffd,PodSandboxId:e9e8a7a7ebd9a970317da541cf4dca93da305a480ee96c825ac00eb4c5626323,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733435219137883676,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b7340b20b45b0ab01b5a6dc7d16505,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72477542a2b887669bbc8e9749525c5bf5e67ff8bd20e9cabfdc38818d16722c,PodSandboxId:57665c1ccb34c9f37cfcfdae1f0fae47cd7ca1b2214b1b66450d69dbad8a9f89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733434934000246035,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a43b696cbdc0d06226089f47a7f1de,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f34a19d0-0511-45f4-aad2-3e8981bb0a06 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.956199797Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6f175258-c035-440b-afd2-8e1375b37a1a name=/runtime.v1.RuntimeService/Version
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.956301806Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6f175258-c035-440b-afd2-8e1375b37a1a name=/runtime.v1.RuntimeService/Version
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.957467201Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=dcfb5973-9124-4a98-ab5a-053ddf2e875e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.957973358Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435781957946653,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=dcfb5973-9124-4a98-ab5a-053ddf2e875e name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.959998785Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=5f60047c-c876-4812-aa03-b86105b85246 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.960085087Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=5f60047c-c876-4812-aa03-b86105b85246 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:56:21 embed-certs-425614 crio[708]: time="2024-12-05 21:56:21.960340271Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb7935ff19768951bf15ec6f7ae569dddffdb88edb10c621ccb419c29779746f,PodSandboxId:d5e243e965acf7a4ec20ab5886c047333bcc92bec711ca3b53058975b60b584a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733435231080016079,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76565dbe-57b0-4d39-abb0-ca6787cd3740,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f3be37216209b507caf8f02909aac2eb33c0cd7051f9798c9e7d76a2a3e10c,PodSandboxId:fafdcfea82d69c87f7b4059293b70580e2f48abc77da003eb4b39ccddb3e9abf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435231044996749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qfwx8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6411440-5d63-4ea4-b1ba-58337dd6bb10,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78e33cb0841af14ba2bbaf16ba26cb9d3ecf6825955a4816313ab7daa623c61e,PodSandboxId:0be1e5092b64f3ae878c939dd0ff2f5a4bf79a881ed9a1087933d16f29dc4fbf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733435230468925238,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2zgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5
c4695-0631-486d-9f2b-3529f6e808e9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4922644ed14bab79471b547e8a1e0ba26c1c9beacab332b7e96cfde4145c1d0e,PodSandboxId:7ce18d6bd83c99d48133355c667e370c17e8cf84fbc239a37dbdff9d242a1a05,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435230865879468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7sjzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9688302a-e62f-46e6-8182-4639deb5ac
5a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25bc947c71adb96d313f34316890511c690419a686ced38b9be6cd33028e5b1f,PodSandboxId:aa4d76d81862e60315551edd830fac5517fcb517eb1766fb0e1532e5880ab882,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733435219199398936,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e653d90c677de6c4d7ba5653b9ccf764,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90f378c3660c9d99fe421beb7a403fcb8b709aa8b93115e17a977a31d0423705,PodSandboxId:4a099a777b0a13e1950c39f9e2ae6f2ef4fe07e112a310911f12c8951d0d4ab3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733435219164879792,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f5ae899a6b1660ab9bafc72059c48b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0af9adef57f334125b35cd694b87a1fe5e76704564e10bef0b6dc9d19525d4,PodSandboxId:18868d3172717f134a4a286b9312fb40438d1582fe8db408417e15aaa0de99c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733435219133900320,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a43b696cbdc0d06226089f47a7f1de,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0105f87b7ed061b072823f148117f5b3ca9b284b1a9815c2d5cf833cc959fffd,PodSandboxId:e9e8a7a7ebd9a970317da541cf4dca93da305a480ee96c825ac00eb4c5626323,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733435219137883676,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b7340b20b45b0ab01b5a6dc7d16505,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72477542a2b887669bbc8e9749525c5bf5e67ff8bd20e9cabfdc38818d16722c,PodSandboxId:57665c1ccb34c9f37cfcfdae1f0fae47cd7ca1b2214b1b66450d69dbad8a9f89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733434934000246035,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a43b696cbdc0d06226089f47a7f1de,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=5f60047c-c876-4812-aa03-b86105b85246 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bb7935ff19768       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   d5e243e965acf       storage-provisioner
	71f3be3721620       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   fafdcfea82d69       coredns-7c65d6cfc9-qfwx8
	4922644ed14ba       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   9 minutes ago       Running             coredns                   0                   7ce18d6bd83c9       coredns-7c65d6cfc9-7sjzc
	78e33cb0841af       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   9 minutes ago       Running             kube-proxy                0                   0be1e5092b64f       kube-proxy-k2zgx
	25bc947c71adb       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   9 minutes ago       Running             etcd                      2                   aa4d76d81862e       etcd-embed-certs-425614
	90f378c3660c9       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   9 minutes ago       Running             kube-scheduler            2                   4a099a777b0a1       kube-scheduler-embed-certs-425614
	0105f87b7ed06       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   9 minutes ago       Running             kube-controller-manager   2                   e9e8a7a7ebd9a       kube-controller-manager-embed-certs-425614
	2a0af9adef57f       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   9 minutes ago       Running             kube-apiserver            2                   18868d3172717       kube-apiserver-embed-certs-425614
	72477542a2b88       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   14 minutes ago      Exited              kube-apiserver            1                   57665c1ccb34c       kube-apiserver-embed-certs-425614
	
	
	==> coredns [4922644ed14bab79471b547e8a1e0ba26c1c9beacab332b7e96cfde4145c1d0e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [71f3be37216209b507caf8f02909aac2eb33c0cd7051f9798c9e7d76a2a3e10c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-425614
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-425614
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=embed-certs-425614
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T21_47_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 21:47:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-425614
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 21:56:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 21:52:20 +0000   Thu, 05 Dec 2024 21:47:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 21:52:20 +0000   Thu, 05 Dec 2024 21:47:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 21:52:20 +0000   Thu, 05 Dec 2024 21:47:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 21:52:20 +0000   Thu, 05 Dec 2024 21:47:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.8
	  Hostname:    embed-certs-425614
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e751443b3fc8433d85c1d5953930bbb4
	  System UUID:                e751443b-3fc8-433d-85c1-d5953930bbb4
	  Boot ID:                    647179da-dc18-4dc7-95ed-bd4273f33f8e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-7sjzc                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 coredns-7c65d6cfc9-qfwx8                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m13s
	  kube-system                 etcd-embed-certs-425614                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-embed-certs-425614             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-embed-certs-425614    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-k2zgx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m13s
	  kube-system                 kube-scheduler-embed-certs-425614             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-6867b74b74-hghhs               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m12s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m10s  kube-proxy       
	  Normal  Starting                 9m18s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m18s  kubelet          Node embed-certs-425614 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m18s  kubelet          Node embed-certs-425614 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s  kubelet          Node embed-certs-425614 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m14s  node-controller  Node embed-certs-425614 event: Registered Node embed-certs-425614 in Controller
	
	
	==> dmesg <==
	[  +0.041669] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.118267] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.097774] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.452311] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec 5 21:42] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.060768] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066578] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.237585] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.142392] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.319410] systemd-fstab-generator[698]: Ignoring "noauto" option for root device
	[  +4.285225] systemd-fstab-generator[790]: Ignoring "noauto" option for root device
	[  +0.081250] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.035937] systemd-fstab-generator[911]: Ignoring "noauto" option for root device
	[  +4.730681] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.304809] kauditd_printk_skb: 59 callbacks suppressed
	[Dec 5 21:46] kauditd_printk_skb: 31 callbacks suppressed
	[ +26.137600] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.638714] systemd-fstab-generator[2597]: Ignoring "noauto" option for root device
	[Dec 5 21:47] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.507985] systemd-fstab-generator[2916]: Ignoring "noauto" option for root device
	[  +5.462260] systemd-fstab-generator[3031]: Ignoring "noauto" option for root device
	[  +0.093422] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.154043] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [25bc947c71adb96d313f34316890511c690419a686ced38b9be6cd33028e5b1f] <==
	{"level":"info","ts":"2024-12-05T21:46:59.695521Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-05T21:46:59.695797Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"83b24b436960d93","initial-advertise-peer-urls":["https://192.168.72.8:2380"],"listen-peer-urls":["https://192.168.72.8:2380"],"advertise-client-urls":["https://192.168.72.8:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.8:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-05T21:46:59.695832Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-05T21:46:59.696307Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.8:2380"}
	{"level":"info","ts":"2024-12-05T21:46:59.696354Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.8:2380"}
	{"level":"info","ts":"2024-12-05T21:47:00.139629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83b24b436960d93 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-05T21:47:00.139680Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83b24b436960d93 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-05T21:47:00.139703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83b24b436960d93 received MsgPreVoteResp from 83b24b436960d93 at term 1"}
	{"level":"info","ts":"2024-12-05T21:47:00.139715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83b24b436960d93 became candidate at term 2"}
	{"level":"info","ts":"2024-12-05T21:47:00.139720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83b24b436960d93 received MsgVoteResp from 83b24b436960d93 at term 2"}
	{"level":"info","ts":"2024-12-05T21:47:00.139728Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83b24b436960d93 became leader at term 2"}
	{"level":"info","ts":"2024-12-05T21:47:00.139735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 83b24b436960d93 elected leader 83b24b436960d93 at term 2"}
	{"level":"info","ts":"2024-12-05T21:47:00.143348Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T21:47:00.145853Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"83b24b436960d93","local-member-attributes":"{Name:embed-certs-425614 ClientURLs:[https://192.168.72.8:2379]}","request-path":"/0/members/83b24b436960d93/attributes","cluster-id":"f6e6242805c6c4ee","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T21:47:00.146661Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f6e6242805c6c4ee","local-member-id":"83b24b436960d93","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T21:47:00.146747Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T21:47:00.146779Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T21:47:00.146791Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T21:47:00.147045Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T21:47:00.147764Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T21:47:00.148457Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.8:2379"}
	{"level":"info","ts":"2024-12-05T21:47:00.149124Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T21:47:00.149813Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T21:47:00.151007Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T21:47:00.151041Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 21:56:22 up 14 min,  0 users,  load average: 0.17, 0.16, 0.15
	Linux embed-certs-425614 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2a0af9adef57f334125b35cd694b87a1fe5e76704564e10bef0b6dc9d19525d4] <==
	W1205 21:52:02.667425       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:52:02.667744       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 21:52:02.668797       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:52:02.668827       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:53:02.669925       1 handler_proxy.go:99] no RequestInfo found in the context
	W1205 21:53:02.669924       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:53:02.670097       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1205 21:53:02.670129       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 21:53:02.671283       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:53:02.671362       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:55:02.672356       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:55:02.672425       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1205 21:55:02.672465       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:55:02.672515       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 21:55:02.673686       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:55:02.673749       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [72477542a2b887669bbc8e9749525c5bf5e67ff8bd20e9cabfdc38818d16722c] <==
	W1205 21:46:53.886503       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:53.902538       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:53.913401       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:53.928316       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:53.961743       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.025967       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.030621       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.077222       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.098367       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.104166       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.173181       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.186877       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.433184       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.467719       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.479538       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.491303       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.512749       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.549357       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.621658       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.667482       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.686908       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.726321       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.752648       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.851865       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:55.036630       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [0105f87b7ed061b072823f148117f5b3ca9b284b1a9815c2d5cf833cc959fffd] <==
	E1205 21:51:08.657506       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:51:09.116534       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:51:38.663015       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:51:39.126644       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:52:08.668743       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:52:09.137470       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 21:52:20.626301       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-425614"
	E1205 21:52:38.675309       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:52:39.144967       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 21:53:00.328106       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="83.706µs"
	E1205 21:53:08.680969       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:53:09.152297       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 21:53:12.329918       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="107.917µs"
	E1205 21:53:38.687194       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:53:39.159717       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:54:08.695411       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:54:09.167671       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:54:38.701756       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:54:39.175186       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:55:08.707820       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:55:09.184611       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:55:38.713782       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:55:39.191446       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:56:08.720413       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:56:09.198604       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [78e33cb0841af14ba2bbaf16ba26cb9d3ecf6825955a4816313ab7daa623c61e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 21:47:11.556250       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 21:47:11.565326       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.8"]
	E1205 21:47:11.565467       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 21:47:11.600466       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 21:47:11.600646       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 21:47:11.600693       1 server_linux.go:169] "Using iptables Proxier"
	I1205 21:47:11.610275       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 21:47:11.610666       1 server.go:483] "Version info" version="v1.31.2"
	I1205 21:47:11.611027       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 21:47:11.612589       1 config.go:199] "Starting service config controller"
	I1205 21:47:11.612664       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 21:47:11.612724       1 config.go:105] "Starting endpoint slice config controller"
	I1205 21:47:11.612750       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 21:47:11.613317       1 config.go:328] "Starting node config controller"
	I1205 21:47:11.614331       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 21:47:11.713168       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 21:47:11.713279       1 shared_informer.go:320] Caches are synced for service config
	I1205 21:47:11.714885       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [90f378c3660c9d99fe421beb7a403fcb8b709aa8b93115e17a977a31d0423705] <==
	W1205 21:47:01.752817       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 21:47:01.752843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 21:47:02.686867       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 21:47:02.686904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 21:47:02.797936       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 21:47:02.798003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 21:47:02.838885       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 21:47:02.838985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 21:47:02.855940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 21:47:02.856027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 21:47:02.883517       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 21:47:02.883627       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 21:47:02.896109       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 21:47:02.896230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 21:47:02.921764       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 21:47:02.921849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 21:47:02.950748       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 21:47:02.950803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 21:47:02.951821       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 21:47:02.951865       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1205 21:47:02.984709       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 21:47:02.984875       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1205 21:47:03.023796       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 21:47:03.024836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1205 21:47:04.735019       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 21:55:08 embed-certs-425614 kubelet[2923]: E1205 21:55:08.315011    2923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hghhs" podUID="bc00b855-1cc8-45a1-92cb-b459ef0b40eb"
	Dec 05 21:55:14 embed-certs-425614 kubelet[2923]: E1205 21:55:14.460229    2923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435714460044524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:55:14 embed-certs-425614 kubelet[2923]: E1205 21:55:14.460253    2923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435714460044524,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:55:23 embed-certs-425614 kubelet[2923]: E1205 21:55:23.313507    2923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hghhs" podUID="bc00b855-1cc8-45a1-92cb-b459ef0b40eb"
	Dec 05 21:55:24 embed-certs-425614 kubelet[2923]: E1205 21:55:24.464515    2923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435724464168476,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:55:24 embed-certs-425614 kubelet[2923]: E1205 21:55:24.464598    2923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435724464168476,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:55:34 embed-certs-425614 kubelet[2923]: E1205 21:55:34.466199    2923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435734465910226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:55:34 embed-certs-425614 kubelet[2923]: E1205 21:55:34.466226    2923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435734465910226,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:55:35 embed-certs-425614 kubelet[2923]: E1205 21:55:35.312822    2923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hghhs" podUID="bc00b855-1cc8-45a1-92cb-b459ef0b40eb"
	Dec 05 21:55:44 embed-certs-425614 kubelet[2923]: E1205 21:55:44.467918    2923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435744467511548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:55:44 embed-certs-425614 kubelet[2923]: E1205 21:55:44.467956    2923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435744467511548,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:55:46 embed-certs-425614 kubelet[2923]: E1205 21:55:46.314279    2923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hghhs" podUID="bc00b855-1cc8-45a1-92cb-b459ef0b40eb"
	Dec 05 21:55:54 embed-certs-425614 kubelet[2923]: E1205 21:55:54.469852    2923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435754468916880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:55:54 embed-certs-425614 kubelet[2923]: E1205 21:55:54.469913    2923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435754468916880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:56:01 embed-certs-425614 kubelet[2923]: E1205 21:56:01.313663    2923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hghhs" podUID="bc00b855-1cc8-45a1-92cb-b459ef0b40eb"
	Dec 05 21:56:04 embed-certs-425614 kubelet[2923]: E1205 21:56:04.336728    2923 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 21:56:04 embed-certs-425614 kubelet[2923]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 21:56:04 embed-certs-425614 kubelet[2923]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 21:56:04 embed-certs-425614 kubelet[2923]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 21:56:04 embed-certs-425614 kubelet[2923]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 21:56:04 embed-certs-425614 kubelet[2923]: E1205 21:56:04.472580    2923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435764472010298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:56:04 embed-certs-425614 kubelet[2923]: E1205 21:56:04.472650    2923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435764472010298,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:56:13 embed-certs-425614 kubelet[2923]: E1205 21:56:13.312864    2923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hghhs" podUID="bc00b855-1cc8-45a1-92cb-b459ef0b40eb"
	Dec 05 21:56:14 embed-certs-425614 kubelet[2923]: E1205 21:56:14.476681    2923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435774475388784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 21:56:14 embed-certs-425614 kubelet[2923]: E1205 21:56:14.477020    2923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435774475388784,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [bb7935ff19768951bf15ec6f7ae569dddffdb88edb10c621ccb419c29779746f] <==
	I1205 21:47:11.295822       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 21:47:11.334199       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 21:47:11.334440       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 21:47:11.371864       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 21:47:11.373668       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-425614_1cd02672-3aed-4fac-a4cc-aba9ed42fb94!
	I1205 21:47:11.376537       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"427b20b9-9f21-41d8-9d42-0a1360548170", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-425614_1cd02672-3aed-4fac-a4cc-aba9ed42fb94 became leader
	I1205 21:47:11.479882       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-425614_1cd02672-3aed-4fac-a4cc-aba9ed42fb94!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-425614 -n embed-certs-425614
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-425614 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-hghhs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-425614 describe pod metrics-server-6867b74b74-hghhs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-425614 describe pod metrics-server-6867b74b74-hghhs: exit status 1 (70.347852ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-hghhs" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-425614 describe pod metrics-server-6867b74b74-hghhs: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.53s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 21:50:09.868706  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 21:50:38.233211  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 21:50:47.608691  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 21:51:10.574263  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 21:51:29.760846  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 21:51:49.076165  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 21:52:10.676024  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 21:52:19.896363  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 21:52:32.944313  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:52:33.640577  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 21:52:52.828247  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 21:53:16.319371  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 21:53:42.962424  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 21:53:46.804757  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 21:53:56.011057  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 21:54:15.167201  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 21:54:52.152258  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 21:55:47.609287  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 21:56:29.761561  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
[identical warning repeated 19 more times while polling]
E1205 21:56:49.076946  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
[identical warning repeated 30 more times while polling]
E1205 21:57:19.896015  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
[identical warning repeated 12 more times while polling]
E1205 21:57:32.945345  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
[identical warning repeated 42 more times while polling]
E1205 21:58:16.320065  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
[identical warning repeated 29 more times while polling]
E1205 21:58:46.805516  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
[identical warning repeated 12 more times while polling]
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
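The polling above was refused at the TCP layer until the client rate limiter hit its deadline, i.e. the apiserver at 192.168.61.123:8443 never came back within the wait window rather than the pod merely being slow to become ready. A quick connectivity probe against the same endpoint separates those two cases; this is a manual-triage sketch, not part of the harness, and -k only skips certificate verification:

    $ curl -k --connect-timeout 5 https://192.168.61.123:8443/healthz
    # "connection refused" matches the warnings above (apiserver down);
    # any HTTP response at all (e.g. 401/403 or an "ok" body) would mean the
    # apiserver is reachable and the dashboard pod itself never became ready.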
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-601806 -n old-k8s-version-601806
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-601806 -n old-k8s-version-601806: exit status 2 (248.353691ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-601806" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
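The readiness check the harness gave up on can be repeated by hand once the apiserver is reachable again; a minimal sketch, assuming (as the report's other kubectl invocations do) that the kubectl context carries the same name as the minikube profile:

    $ out/minikube-linux-amd64 status -p old-k8s-version-601806
    $ kubectl --context old-k8s-version-601806 -n kubernetes-dashboard \
        get pods -l k8s-app=kubernetes-dashboard
    $ kubectl --context old-k8s-version-601806 -n kubernetes-dashboard \
        wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=540s

The 540s timeout mirrors the 9m0s window the test allowed before failing.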
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-601806 -n old-k8s-version-601806
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-601806 -n old-k8s-version-601806: exit status 2 (249.675993ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-601806 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-601806 logs -n 25: (1.6526114s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-279893 sudo cat                              | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:32 UTC | 05 Dec 24 21:33 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo cat                              | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo                                  | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo                                  | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo                                  | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo find                             | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo crio                             | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-279893                                       | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	| start   | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:34 UTC |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-425614            | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-425614                                  | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-500648             | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC | 05 Dec 24 21:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-500648                                   | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-751353  | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC | 05 Dec 24 21:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC |                     |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-425614                 | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-425614                                  | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC | 05 Dec 24 21:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-601806        | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-500648                  | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-751353       | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-500648                                   | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC | 05 Dec 24 21:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:37 UTC | 05 Dec 24 21:46 UTC |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-601806                              | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC | 05 Dec 24 21:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-601806             | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC | 05 Dec 24 21:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-601806                              | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 21:38:15
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 21:38:15.563725  358357 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:38:15.563882  358357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:38:15.563898  358357 out.go:358] Setting ErrFile to fd 2...
	I1205 21:38:15.563903  358357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:38:15.564128  358357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 21:38:15.564728  358357 out.go:352] Setting JSON to false
	I1205 21:38:15.565806  358357 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":15644,"bootTime":1733419052,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 21:38:15.565873  358357 start.go:139] virtualization: kvm guest
	I1205 21:38:15.568026  358357 out.go:177] * [old-k8s-version-601806] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 21:38:15.569552  358357 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 21:38:15.569581  358357 notify.go:220] Checking for updates...
	I1205 21:38:15.572033  358357 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 21:38:15.573317  358357 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:38:15.574664  358357 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:38:15.576173  358357 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 21:38:15.577543  358357 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 21:38:15.579554  358357 config.go:182] Loaded profile config "old-k8s-version-601806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 21:38:15.580169  358357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:38:15.580230  358357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:38:15.596741  358357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I1205 21:38:15.597295  358357 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:38:15.598015  358357 main.go:141] libmachine: Using API Version  1
	I1205 21:38:15.598046  358357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:38:15.598475  358357 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:38:15.598711  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:38:15.600576  358357 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1205 21:38:15.602043  358357 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 21:38:15.602381  358357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:38:15.602484  358357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:38:15.618162  358357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36049
	I1205 21:38:15.618929  358357 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:38:15.620894  358357 main.go:141] libmachine: Using API Version  1
	I1205 21:38:15.620922  358357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:38:15.621462  358357 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:38:15.621705  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:38:15.660038  358357 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 21:38:15.661273  358357 start.go:297] selected driver: kvm2
	I1205 21:38:15.661287  358357 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:38:15.661413  358357 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 21:38:15.662304  358357 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:38:15.662396  358357 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 21:38:15.678948  358357 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 21:38:15.679372  358357 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:38:15.679406  358357 cni.go:84] Creating CNI manager for ""
	I1205 21:38:15.679443  358357 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:38:15.679479  358357 start.go:340] cluster config:
	{Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default
APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:38:15.679592  358357 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:38:15.681409  358357 out.go:177] * Starting "old-k8s-version-601806" primary control-plane node in "old-k8s-version-601806" cluster
	I1205 21:38:12.362239  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:15.434192  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:15.682585  358357 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 21:38:15.682646  358357 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1205 21:38:15.682657  358357 cache.go:56] Caching tarball of preloaded images
	I1205 21:38:15.682742  358357 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 21:38:15.682752  358357 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1205 21:38:15.682873  358357 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/config.json ...
	I1205 21:38:15.683066  358357 start.go:360] acquireMachinesLock for old-k8s-version-601806: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:38:21.514200  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:24.586255  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:30.666205  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:33.738246  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:39.818259  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:42.890268  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:48.970246  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:52.042258  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:58.122192  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:01.194261  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:07.274293  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:10.346237  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:16.426260  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:19.498251  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:25.578215  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:28.650182  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:34.730233  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:37.802242  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:43.882204  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:46.954259  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:53.034221  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:56.106303  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:02.186236  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:05.258270  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:11.338291  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:14.410261  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:20.490214  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:23.562239  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:29.642246  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:32.714183  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:38.794265  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:41.866189  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:44.870871  357831 start.go:364] duration metric: took 3m51.861097835s to acquireMachinesLock for "no-preload-500648"
	I1205 21:40:44.870962  357831 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:40:44.870974  357831 fix.go:54] fixHost starting: 
	I1205 21:40:44.871374  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:40:44.871425  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:40:44.889484  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34557
	I1205 21:40:44.890105  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:40:44.890780  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:40:44.890815  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:40:44.891254  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:40:44.891517  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:40:44.891744  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:40:44.893857  357831 fix.go:112] recreateIfNeeded on no-preload-500648: state=Stopped err=<nil>
	I1205 21:40:44.893927  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	W1205 21:40:44.894116  357831 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:40:44.897039  357831 out.go:177] * Restarting existing kvm2 VM for "no-preload-500648" ...
	I1205 21:40:44.868152  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:40:44.868210  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:40:44.868588  357296 buildroot.go:166] provisioning hostname "embed-certs-425614"
	I1205 21:40:44.868618  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:40:44.868823  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:40:44.870659  357296 machine.go:96] duration metric: took 4m37.397267419s to provisionDockerMachine
	I1205 21:40:44.870718  357296 fix.go:56] duration metric: took 4m37.422503321s for fixHost
	I1205 21:40:44.870724  357296 start.go:83] releasing machines lock for "embed-certs-425614", held for 4m37.422523792s
	W1205 21:40:44.870750  357296 start.go:714] error starting host: provision: host is not running
	W1205 21:40:44.870880  357296 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1205 21:40:44.870891  357296 start.go:729] Will try again in 5 seconds ...
	I1205 21:40:44.898504  357831 main.go:141] libmachine: (no-preload-500648) Calling .Start
	I1205 21:40:44.898749  357831 main.go:141] libmachine: (no-preload-500648) Ensuring networks are active...
	I1205 21:40:44.899604  357831 main.go:141] libmachine: (no-preload-500648) Ensuring network default is active
	I1205 21:40:44.899998  357831 main.go:141] libmachine: (no-preload-500648) Ensuring network mk-no-preload-500648 is active
	I1205 21:40:44.900472  357831 main.go:141] libmachine: (no-preload-500648) Getting domain xml...
	I1205 21:40:44.901210  357831 main.go:141] libmachine: (no-preload-500648) Creating domain...
	I1205 21:40:46.138820  357831 main.go:141] libmachine: (no-preload-500648) Waiting to get IP...
	I1205 21:40:46.139714  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:46.140107  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:46.140214  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:46.140113  358875 retry.go:31] will retry after 297.599003ms: waiting for machine to come up
	I1205 21:40:46.439848  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:46.440360  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:46.440421  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:46.440242  358875 retry.go:31] will retry after 243.531701ms: waiting for machine to come up
	I1205 21:40:46.685793  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:46.686251  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:46.686282  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:46.686199  358875 retry.go:31] will retry after 395.19149ms: waiting for machine to come up
	I1205 21:40:47.082735  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:47.083192  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:47.083216  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:47.083150  358875 retry.go:31] will retry after 591.156988ms: waiting for machine to come up
	I1205 21:40:47.675935  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:47.676381  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:47.676414  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:47.676308  358875 retry.go:31] will retry after 706.616299ms: waiting for machine to come up
	I1205 21:40:49.872843  357296 start.go:360] acquireMachinesLock for embed-certs-425614: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:40:48.384278  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:48.384666  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:48.384696  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:48.384611  358875 retry.go:31] will retry after 859.724415ms: waiting for machine to come up
	I1205 21:40:49.245895  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:49.246294  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:49.246323  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:49.246239  358875 retry.go:31] will retry after 915.790977ms: waiting for machine to come up
	I1205 21:40:50.164042  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:50.164570  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:50.164600  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:50.164514  358875 retry.go:31] will retry after 1.283530276s: waiting for machine to come up
	I1205 21:40:51.450256  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:51.450664  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:51.450692  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:51.450595  358875 retry.go:31] will retry after 1.347371269s: waiting for machine to come up
	I1205 21:40:52.800263  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:52.800702  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:52.800732  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:52.800637  358875 retry.go:31] will retry after 1.982593955s: waiting for machine to come up
	I1205 21:40:54.785977  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:54.786644  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:54.786705  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:54.786525  358875 retry.go:31] will retry after 2.41669899s: waiting for machine to come up
	I1205 21:40:57.205989  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:57.206403  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:57.206428  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:57.206335  358875 retry.go:31] will retry after 2.992148692s: waiting for machine to come up
	I1205 21:41:00.200589  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:00.201093  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:41:00.201139  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:41:00.201028  358875 retry.go:31] will retry after 3.716252757s: waiting for machine to come up
	I1205 21:41:05.171227  357912 start.go:364] duration metric: took 4m4.735770407s to acquireMachinesLock for "default-k8s-diff-port-751353"
	I1205 21:41:05.171353  357912 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:41:05.171382  357912 fix.go:54] fixHost starting: 
	I1205 21:41:05.172206  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:05.172294  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:05.190413  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35791
	I1205 21:41:05.190911  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:05.191473  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:05.191497  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:05.191841  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:05.192052  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:05.192199  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:05.193839  357912 fix.go:112] recreateIfNeeded on default-k8s-diff-port-751353: state=Stopped err=<nil>
	I1205 21:41:05.193867  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	W1205 21:41:05.194042  357912 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:41:05.196358  357912 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-751353" ...
	I1205 21:41:05.197683  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Start
	I1205 21:41:05.197958  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Ensuring networks are active...
	I1205 21:41:05.198819  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Ensuring network default is active
	I1205 21:41:05.199225  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Ensuring network mk-default-k8s-diff-port-751353 is active
	I1205 21:41:05.199740  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Getting domain xml...
	I1205 21:41:05.200544  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Creating domain...
	I1205 21:41:03.922338  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.922889  357831 main.go:141] libmachine: (no-preload-500648) Found IP for machine: 192.168.50.141
	I1205 21:41:03.922911  357831 main.go:141] libmachine: (no-preload-500648) Reserving static IP address...
	I1205 21:41:03.922924  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has current primary IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.923476  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "no-preload-500648", mac: "52:54:00:98:f0:c5", ip: "192.168.50.141"} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:03.923500  357831 main.go:141] libmachine: (no-preload-500648) DBG | skip adding static IP to network mk-no-preload-500648 - found existing host DHCP lease matching {name: "no-preload-500648", mac: "52:54:00:98:f0:c5", ip: "192.168.50.141"}
	I1205 21:41:03.923514  357831 main.go:141] libmachine: (no-preload-500648) DBG | Getting to WaitForSSH function...
	I1205 21:41:03.923583  357831 main.go:141] libmachine: (no-preload-500648) Reserved static IP address: 192.168.50.141
	I1205 21:41:03.923617  357831 main.go:141] libmachine: (no-preload-500648) Waiting for SSH to be available...
	I1205 21:41:03.926008  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.926299  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:03.926327  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.926443  357831 main.go:141] libmachine: (no-preload-500648) DBG | Using SSH client type: external
	I1205 21:41:03.926467  357831 main.go:141] libmachine: (no-preload-500648) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa (-rw-------)
	I1205 21:41:03.926542  357831 main.go:141] libmachine: (no-preload-500648) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:41:03.926559  357831 main.go:141] libmachine: (no-preload-500648) DBG | About to run SSH command:
	I1205 21:41:03.926582  357831 main.go:141] libmachine: (no-preload-500648) DBG | exit 0
	I1205 21:41:04.054310  357831 main.go:141] libmachine: (no-preload-500648) DBG | SSH cmd err, output: <nil>: 
	I1205 21:41:04.054735  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetConfigRaw
	I1205 21:41:04.055421  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:04.058393  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.058823  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.058857  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.059115  357831 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/config.json ...
	I1205 21:41:04.059357  357831 machine.go:93] provisionDockerMachine start ...
	I1205 21:41:04.059381  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:04.059624  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.061812  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.062139  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.062169  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.062325  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.062530  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.062698  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.062811  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.062947  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.063206  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.063219  357831 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:41:04.174592  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:41:04.174635  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetMachineName
	I1205 21:41:04.174947  357831 buildroot.go:166] provisioning hostname "no-preload-500648"
	I1205 21:41:04.174982  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetMachineName
	I1205 21:41:04.175220  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.178267  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.178732  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.178766  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.178975  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.179191  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.179356  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.179518  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.179683  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.179864  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.179878  357831 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-500648 && echo "no-preload-500648" | sudo tee /etc/hostname
	I1205 21:41:04.304650  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-500648
	
	I1205 21:41:04.304688  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.307897  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.308212  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.308255  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.308441  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.308703  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.308864  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.308994  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.309273  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.309538  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.309570  357831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-500648' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-500648/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-500648' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:41:04.432111  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:41:04.432158  357831 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:41:04.432186  357831 buildroot.go:174] setting up certificates
	I1205 21:41:04.432198  357831 provision.go:84] configureAuth start
	I1205 21:41:04.432214  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetMachineName
	I1205 21:41:04.432569  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:04.435826  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.436298  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.436348  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.436535  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.439004  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.439384  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.439412  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.439632  357831 provision.go:143] copyHostCerts
	I1205 21:41:04.439708  357831 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:41:04.439736  357831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:41:04.439826  357831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:41:04.439951  357831 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:41:04.439963  357831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:41:04.440006  357831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:41:04.440090  357831 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:41:04.440100  357831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:41:04.440133  357831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:41:04.440206  357831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.no-preload-500648 san=[127.0.0.1 192.168.50.141 localhost minikube no-preload-500648]
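
The san=[...] list in the provision.go line above feeds directly into the server certificate that is later copied to /etc/docker/server.pem. Below is a minimal Go sketch of building such a certificate; it self-signs for brevity (the real flow signs with the shared CA key from ca-key.pem), and the names, IPs and 26280h lifetime are taken from the log rather than from minikube's source.

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// SANs mirror the provision.go log line above; minikube signs with the
    	// shared CA key instead of self-signing (simplified here).
    	dnsNames := []string{"localhost", "minikube", "no-preload-500648"}
    	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.141")}

    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-500648"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the profile
    		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		DNSNames:     dnsNames,
    		IPAddresses:  ips,
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }

The scp calls that follow in the log then push the resulting PEM files onto the guest.
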
	I1205 21:41:04.514253  357831 provision.go:177] copyRemoteCerts
	I1205 21:41:04.514330  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:41:04.514372  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.517413  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.517811  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.517845  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.518067  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.518361  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.518597  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.518773  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:04.611530  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:41:04.637201  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 21:41:04.661934  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:41:04.686618  357831 provision.go:87] duration metric: took 254.404192ms to configureAuth
	I1205 21:41:04.686654  357831 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:41:04.686834  357831 config.go:182] Loaded profile config "no-preload-500648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:41:04.686921  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.690232  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.690677  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.690709  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.690907  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.691145  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.691456  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.691605  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.691811  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.692003  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.692020  357831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:41:04.922195  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:41:04.922228  357831 machine.go:96] duration metric: took 862.853823ms to provisionDockerMachine
	I1205 21:41:04.922245  357831 start.go:293] postStartSetup for "no-preload-500648" (driver="kvm2")
	I1205 21:41:04.922275  357831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:41:04.922296  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:04.922662  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:41:04.922698  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.925928  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.926441  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.926474  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.926628  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.926810  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.926928  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.927024  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:05.013131  357831 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:41:05.017518  357831 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:41:05.017552  357831 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:41:05.017635  357831 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:41:05.017713  357831 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:41:05.017814  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:41:05.027935  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:05.052403  357831 start.go:296] duration metric: took 130.117347ms for postStartSetup
	I1205 21:41:05.052469  357831 fix.go:56] duration metric: took 20.181495969s for fixHost
	I1205 21:41:05.052493  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:05.055902  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.056329  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.056381  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.056574  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:05.056832  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.056993  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.057144  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:05.057327  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:05.057534  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:05.057548  357831 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:41:05.171012  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434865.146406477
	
	I1205 21:41:05.171041  357831 fix.go:216] guest clock: 1733434865.146406477
	I1205 21:41:05.171051  357831 fix.go:229] Guest: 2024-12-05 21:41:05.146406477 +0000 UTC Remote: 2024-12-05 21:41:05.052473548 +0000 UTC m=+252.199777630 (delta=93.932929ms)
	I1205 21:41:05.171075  357831 fix.go:200] guest clock delta is within tolerance: 93.932929ms
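
The fix.go lines above compare the guest clock against the host and accept the machine because the ~94ms delta is within the allowed tolerance. A minimal Go sketch of that comparison follows; withinTolerance is a hypothetical helper and the 2s tolerance is an assumption, not a value read from minikube.

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance mirrors the fix.go check logged above: the guest clock is
    // accepted if |guest - remote| stays under the tolerance.
    func withinTolerance(guest, remote time.Time, tolerance time.Duration) bool {
    	delta := guest.Sub(remote)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta <= tolerance
    }

    func main() {
    	// Timestamps taken from the log lines above (UTC, nanosecond precision).
    	remote := time.Date(2024, 12, 5, 21, 41, 5, 52473548, time.UTC)
    	guest := time.Date(2024, 12, 5, 21, 41, 5, 146406477, time.UTC)
    	fmt.Println(withinTolerance(guest, remote, 2*time.Second)) // true: delta is about 93.9ms
    }
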
	I1205 21:41:05.171087  357831 start.go:83] releasing machines lock for "no-preload-500648", held for 20.300173371s
	I1205 21:41:05.171115  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.171462  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:05.174267  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.174716  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.174747  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.174893  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.175500  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.175738  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.175856  357831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:41:05.175910  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:05.176016  357831 ssh_runner.go:195] Run: cat /version.json
	I1205 21:41:05.176053  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:05.179122  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179281  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179567  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.179595  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179620  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.179637  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179785  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:05.179924  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:05.180016  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.180163  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.180167  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:05.180365  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:05.180376  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:05.180564  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:05.286502  357831 ssh_runner.go:195] Run: systemctl --version
	I1205 21:41:05.292793  357831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:41:05.436742  357831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:41:05.442389  357831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:41:05.442473  357831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:41:05.460161  357831 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:41:05.460198  357831 start.go:495] detecting cgroup driver to use...
	I1205 21:41:05.460287  357831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:41:05.476989  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:41:05.490676  357831 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:41:05.490747  357831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:41:05.504437  357831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:41:05.518314  357831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:41:05.649582  357831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:41:05.831575  357831 docker.go:233] disabling docker service ...
	I1205 21:41:05.831650  357831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:41:05.851482  357831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:41:05.865266  357831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:41:05.981194  357831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:41:06.107386  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:41:06.125290  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:41:06.143817  357831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:41:06.143919  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.154167  357831 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:41:06.154259  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.165640  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.177412  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.190668  357831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:41:06.201712  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.213455  357831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.232565  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.243746  357831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:41:06.253809  357831 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:41:06.253878  357831 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:41:06.267573  357831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
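
The sequence above is a fallback: the sysctl probe fails because br_netfilter is not loaded yet, so the module is loaded explicitly and IPv4 forwarding is switched on. A rough Go equivalent of that fallback is sketched below; it assumes a Linux host with passwordless sudo and is an illustration, not minikube's code.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Probe the bridge netfilter sysctl first; if it is missing, load
    	// br_netfilter, then enable IPv4 forwarding. Command names match the
    	// log lines above.
    	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
    		fmt.Println("sysctl probe failed, loading br_netfilter:", err)
    		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
    			fmt.Println("modprobe br_netfilter failed:", err)
    		}
    	}
    	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
    		fmt.Println("enabling ip_forward failed:", err)
    	}
    }
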
	I1205 21:41:06.278706  357831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:06.408370  357831 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:41:06.511878  357831 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:41:06.511959  357831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:41:06.519295  357831 start.go:563] Will wait 60s for crictl version
	I1205 21:41:06.519366  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.523477  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:41:06.562056  357831 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:41:06.562151  357831 ssh_runner.go:195] Run: crio --version
	I1205 21:41:06.595493  357831 ssh_runner.go:195] Run: crio --version
	I1205 21:41:06.630320  357831 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 21:41:06.631796  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:06.634988  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:06.635416  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:06.635453  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:06.635693  357831 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1205 21:41:06.639948  357831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:06.653650  357831 kubeadm.go:883] updating cluster {Name:no-preload-500648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-500648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:41:06.653798  357831 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:41:06.653869  357831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:06.695865  357831 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 21:41:06.695900  357831 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 21:41:06.695957  357831 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:06.695970  357831 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:06.696005  357831 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.696049  357831 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1205 21:41:06.696060  357831 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:06.696087  357831 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.696061  357831 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.696462  357831 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.697982  357831 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.698019  357831 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:06.698016  357831 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.697992  357831 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.698111  357831 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.698133  357831 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:06.698286  357831 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1205 21:41:06.698501  357831 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:06.856605  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.856650  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.869847  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.872242  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.874561  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:06.907303  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:06.920063  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1205 21:41:06.925542  357831 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1205 21:41:06.925592  357831 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.925656  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.959677  357831 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1205 21:41:06.959738  357831 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.959799  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.984175  357831 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1205 21:41:06.984219  357831 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.984267  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.995251  357831 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1205 21:41:06.995393  357831 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.995547  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:07.017878  357831 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1205 21:41:07.017952  357831 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.018014  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:07.027087  357831 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1205 21:41:07.027151  357831 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.027206  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:07.138510  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:07.138629  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.138509  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:07.138696  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.138577  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:07.138579  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:07.260832  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:07.269638  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:07.269766  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.269837  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:07.276535  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.276611  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:07.344944  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:07.369612  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:07.410660  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:07.410709  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.410815  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:07.410817  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.463332  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1205 21:41:07.463470  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1205 21:41:07.491657  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1205 21:41:07.491795  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 21:41:07.531121  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1205 21:41:07.531150  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1205 21:41:07.531256  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1205 21:41:07.531270  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 21:41:07.531292  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1205 21:41:07.531341  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1205 21:41:07.531342  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 21:41:07.531258  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 21:41:07.531400  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1205 21:41:07.531416  357831 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1205 21:41:07.531452  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1205 21:41:07.531419  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1205 21:41:07.543194  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1205 21:41:07.543221  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1205 21:41:07.543329  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1205 21:41:07.545197  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
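
Each stat -c "%s %y" call above checks whether a cached tarball already exists on the guest with matching metadata, which is why the copies are then skipped. Below is a simplified Go sketch of that decision; needsCopy is a hypothetical helper, the example only compares sizes (the real check also looks at the modification time), and the path and stat output in main are placeholders, not values from this run.

    package main

    import (
    	"fmt"
    	"os"
    	"strconv"
    	"strings"
    )

    // needsCopy reports whether the local cache file should be re-pushed because
    // the remote `stat -c "%s %y"` output does not match it.
    func needsCopy(localPath, remoteStatOutput string) (bool, error) {
    	fi, err := os.Stat(localPath)
    	if err != nil {
    		return false, err // no local cache file to compare against
    	}
    	fields := strings.Fields(remoteStatOutput)
    	if len(fields) == 0 {
    		return true, nil // nothing on the guest yet: copy
    	}
    	remoteSize, err := strconv.ParseInt(fields[0], 10, 64)
    	if err != nil {
    		return true, nil // unparseable stat output: copy to be safe
    	}
    	return remoteSize != fi.Size(), nil
    }

    func main() {
    	// Placeholder path and stat output purely for illustration.
    	doCopy, err := needsCopy(
    		"/home/jenkins/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0",
    		"123456789 2024-01-01 00:00:00.000000000 +0000",
    	)
    	fmt.Println(doCopy, err)
    }
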
	I1205 21:41:07.599581  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:06.512338  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting to get IP...
	I1205 21:41:06.513323  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.513795  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.513870  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:06.513764  359021 retry.go:31] will retry after 193.323182ms: waiting for machine to come up
	I1205 21:41:06.709218  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.709633  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.709667  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:06.709597  359021 retry.go:31] will retry after 359.664637ms: waiting for machine to come up
	I1205 21:41:07.071234  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.071649  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.071677  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:07.071621  359021 retry.go:31] will retry after 315.296814ms: waiting for machine to come up
	I1205 21:41:07.388219  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.388755  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.388788  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:07.388697  359021 retry.go:31] will retry after 607.823337ms: waiting for machine to come up
	I1205 21:41:07.998529  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.998987  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.999021  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:07.998924  359021 retry.go:31] will retry after 603.533135ms: waiting for machine to come up
	I1205 21:41:08.603895  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:08.604547  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:08.604592  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:08.604458  359021 retry.go:31] will retry after 584.642321ms: waiting for machine to come up
	I1205 21:41:09.190331  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:09.190835  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:09.190866  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:09.190778  359021 retry.go:31] will retry after 848.646132ms: waiting for machine to come up
	I1205 21:41:10.041037  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:10.041702  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:10.041734  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:10.041632  359021 retry.go:31] will retry after 1.229215485s: waiting for machine to come up
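
The retry.go lines above poll libvirt for a DHCP lease with a growing, slightly randomised back-off until the VM reports an IP address. A minimal Go sketch of that pattern follows; waitForIP and the 192.0.2.10 address are illustrative placeholders, not minikube internals.

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // waitForIP polls a lookup function with a randomised, growing back-off until
    // it succeeds or the attempt budget is exhausted.
    func waitForIP(lookup func() (string, error), attempts int) (string, error) {
    	backoff := 200 * time.Millisecond
    	for i := 0; i < attempts; i++ {
    		if ip, err := lookup(); err == nil {
    			return ip, nil
    		}
    		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
    		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
    		time.Sleep(wait)
    		backoff *= 2
    	}
    	return "", errors.New("machine never reported an IP address")
    }

    func main() {
    	calls := 0
    	ip, err := waitForIP(func() (string, error) {
    		calls++
    		if calls < 3 {
    			return "", errors.New("unable to find current IP address")
    		}
    		return "192.0.2.10", nil // placeholder address
    	}, 10)
    	fmt.Println(ip, err)
    }
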
	I1205 21:41:11.124436  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.592950613s)
	I1205 21:41:11.124474  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1205 21:41:11.124504  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 21:41:11.124501  357831 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.524878217s)
	I1205 21:41:11.124562  357831 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 21:41:11.124586  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 21:41:11.124617  357831 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:11.124667  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:11.272549  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:11.273204  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:11.273239  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:11.273141  359021 retry.go:31] will retry after 1.721028781s: waiting for machine to come up
	I1205 21:41:12.996546  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:12.996988  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:12.997015  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:12.996932  359021 retry.go:31] will retry after 1.620428313s: waiting for machine to come up
	I1205 21:41:14.619426  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:14.619986  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:14.620021  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:14.619928  359021 retry.go:31] will retry after 1.936504566s: waiting for machine to come up
	I1205 21:41:13.485236  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.36061811s)
	I1205 21:41:13.485285  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1205 21:41:13.485298  357831 ssh_runner.go:235] Completed: which crictl: (2.360608199s)
	I1205 21:41:13.485314  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 21:41:13.485383  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:13.485450  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 21:41:15.556836  357831 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.071414459s)
	I1205 21:41:15.556906  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.071416348s)
	I1205 21:41:15.556935  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:15.556939  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1205 21:41:15.557031  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 21:41:15.557069  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 21:41:15.595094  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:17.533984  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.97688139s)
	I1205 21:41:17.534026  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1205 21:41:17.534061  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 21:41:17.534168  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 21:41:17.534059  357831 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.938925021s)
	I1205 21:41:17.534239  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 21:41:17.534355  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 21:41:16.559037  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:16.559676  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:16.559711  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:16.559616  359021 retry.go:31] will retry after 2.748634113s: waiting for machine to come up
	I1205 21:41:19.309762  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:19.310292  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:19.310325  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:19.310235  359021 retry.go:31] will retry after 4.490589015s: waiting for machine to come up
	I1205 21:41:18.991714  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.45750646s)
	I1205 21:41:18.991760  357831 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.457382547s)
	I1205 21:41:18.991769  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1205 21:41:18.991788  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1205 21:41:18.991796  357831 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 21:41:18.991871  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1205 21:41:19.652114  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 21:41:19.652153  357831 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1205 21:41:19.652207  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1205 21:41:21.430659  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.778424474s)
	I1205 21:41:21.430699  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1205 21:41:21.430728  357831 cache_images.go:123] Successfully loaded all cached images
	I1205 21:41:21.430737  357831 cache_images.go:92] duration metric: took 14.734820486s to LoadCachedImages
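
LoadCachedImages above finished by streaming each tarball under /var/lib/minikube/images into CRI-O's image store with sudo podman load -i. A rough Go sketch of that loop, run locally instead of over SSH, is below; it assumes podman and sudo are available and takes the image list from the log lines above.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"path/filepath"
    )

    func main() {
    	// Tarball names as they appear under /var/lib/minikube/images in the log.
    	images := []string{
    		"etcd_3.5.15-0",
    		"kube-apiserver_v1.31.2",
    		"kube-proxy_v1.31.2",
    		"kube-controller-manager_v1.31.2",
    		"kube-scheduler_v1.31.2",
    		"storage-provisioner_v5",
    		"coredns_v1.11.3",
    	}
    	for _, img := range images {
    		tarball := filepath.Join("/var/lib/minikube/images", img)
    		out, err := exec.Command("sudo", "podman", "load", "-i", tarball).CombinedOutput()
    		if err != nil {
    			fmt.Printf("loading %s failed: %v\n%s", img, err, out)
    			continue
    		}
    		fmt.Printf("loaded %s\n", img)
    	}
    }
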
	I1205 21:41:21.430748  357831 kubeadm.go:934] updating node { 192.168.50.141 8443 v1.31.2 crio true true} ...
	I1205 21:41:21.430896  357831 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-500648 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-500648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:41:21.430974  357831 ssh_runner.go:195] Run: crio config
	I1205 21:41:21.485189  357831 cni.go:84] Creating CNI manager for ""
	I1205 21:41:21.485211  357831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:21.485222  357831 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:41:21.485252  357831 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.141 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-500648 NodeName:no-preload-500648 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:41:21.485440  357831 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-500648"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.141"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.141"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:41:21.485525  357831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:41:21.497109  357831 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:41:21.497191  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:41:21.506887  357831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1205 21:41:21.524456  357831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:41:21.541166  357831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1205 21:41:21.560513  357831 ssh_runner.go:195] Run: grep 192.168.50.141	control-plane.minikube.internal$ /etc/hosts
	I1205 21:41:21.564597  357831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.141	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:21.576227  357831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:21.695424  357831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:21.712683  357831 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648 for IP: 192.168.50.141
	I1205 21:41:21.712711  357831 certs.go:194] generating shared ca certs ...
	I1205 21:41:21.712735  357831 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:21.712951  357831 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:41:21.713005  357831 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:41:21.713019  357831 certs.go:256] generating profile certs ...
	I1205 21:41:21.713143  357831 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/client.key
	I1205 21:41:21.713264  357831 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/apiserver.key.832a65b0
	I1205 21:41:21.713335  357831 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/proxy-client.key
	I1205 21:41:21.713643  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:41:21.713708  357831 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:41:21.713729  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:41:21.713774  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:41:21.713820  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:41:21.713856  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:41:21.713961  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:21.714852  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:41:21.770708  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:41:21.813676  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:41:21.869550  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:41:21.898056  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 21:41:21.924076  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 21:41:21.950399  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:41:21.976765  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 21:41:22.003346  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:41:22.032363  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:41:22.071805  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:41:22.096470  357831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:41:22.113380  357831 ssh_runner.go:195] Run: openssl version
	I1205 21:41:22.119084  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:41:22.129657  357831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:41:22.134070  357831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:41:22.134139  357831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:41:22.139838  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:41:22.150575  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:41:22.161366  357831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:41:22.165685  357831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:41:22.165753  357831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:41:22.171788  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:41:22.182582  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:41:22.193460  357831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:22.197852  357831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:22.197934  357831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:22.203616  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:41:22.215612  357831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:41:22.220715  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:41:22.226952  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:41:22.233017  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:41:22.239118  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:41:22.245106  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:41:22.251085  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
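	The `-checkend 86400` probes above confirm that each control-plane certificate is still valid for at least another 24 hours before the restart proceeds. A minimal Go sketch of the same check, using a hypothetical helper and one of the cert paths from the log (not minikube's actual implementation):

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

// checkend reports whether the PEM-encoded certificate at path is still
// valid for at least the given duration (openssl's -checkend semantics).
func checkend(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block found in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	// One of the several certs the log checks under /var/lib/minikube/certs.
	ok, err := checkend("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("valid for the next 24h:", ok)
}
```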
	I1205 21:41:22.257047  357831 kubeadm.go:392] StartCluster: {Name:no-preload-500648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-500648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:41:22.257152  357831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:41:22.257201  357831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:22.294003  357831 cri.go:89] found id: ""
	I1205 21:41:22.294119  357831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:41:22.304604  357831 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:41:22.304627  357831 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:41:22.304690  357831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:41:22.314398  357831 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:41:22.315469  357831 kubeconfig.go:125] found "no-preload-500648" server: "https://192.168.50.141:8443"
	I1205 21:41:22.317845  357831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:41:22.327468  357831 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.141
	I1205 21:41:22.327516  357831 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:41:22.327546  357831 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:41:22.327623  357831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:22.360852  357831 cri.go:89] found id: ""
	I1205 21:41:22.360955  357831 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:41:22.378555  357831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:41:22.388502  357831 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:41:22.388526  357831 kubeadm.go:157] found existing configuration files:
	
	I1205 21:41:22.388614  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:41:22.397598  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:41:22.397664  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:41:22.407664  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:41:22.417114  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:41:22.417192  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:41:22.427221  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:41:22.436656  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:41:22.436731  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:41:22.446571  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:41:22.456048  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:41:22.456120  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:41:22.466146  357831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:41:22.476563  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:22.582506  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:25.151918  358357 start.go:364] duration metric: took 3m9.46879842s to acquireMachinesLock for "old-k8s-version-601806"
	I1205 21:41:25.151996  358357 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:41:25.152009  358357 fix.go:54] fixHost starting: 
	I1205 21:41:25.152489  358357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:25.152557  358357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:25.172080  358357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36071
	I1205 21:41:25.172722  358357 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:25.173396  358357 main.go:141] libmachine: Using API Version  1
	I1205 21:41:25.173426  358357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:25.173791  358357 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:25.174049  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:25.174226  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetState
	I1205 21:41:25.176109  358357 fix.go:112] recreateIfNeeded on old-k8s-version-601806: state=Stopped err=<nil>
	I1205 21:41:25.176156  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	W1205 21:41:25.176374  358357 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:41:25.178317  358357 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-601806" ...
	I1205 21:41:23.803088  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.803582  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has current primary IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.803605  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Found IP for machine: 192.168.39.106
	I1205 21:41:23.803619  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Reserving static IP address...
	I1205 21:41:23.804049  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-751353", mac: "52:54:00:9a:bc:70", ip: "192.168.39.106"} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.804083  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Reserved static IP address: 192.168.39.106
	I1205 21:41:23.804103  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | skip adding static IP to network mk-default-k8s-diff-port-751353 - found existing host DHCP lease matching {name: "default-k8s-diff-port-751353", mac: "52:54:00:9a:bc:70", ip: "192.168.39.106"}
	I1205 21:41:23.804129  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Getting to WaitForSSH function...
	I1205 21:41:23.804158  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for SSH to be available...
	I1205 21:41:23.806941  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.807341  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.807372  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.807500  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Using SSH client type: external
	I1205 21:41:23.807527  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa (-rw-------)
	I1205 21:41:23.807597  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:41:23.807626  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | About to run SSH command:
	I1205 21:41:23.807645  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | exit 0
	I1205 21:41:23.938988  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | SSH cmd err, output: <nil>: 
	I1205 21:41:23.939382  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetConfigRaw
	I1205 21:41:23.940370  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:23.943944  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.944399  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.944433  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.944788  357912 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/config.json ...
	I1205 21:41:23.945040  357912 machine.go:93] provisionDockerMachine start ...
	I1205 21:41:23.945065  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:23.945331  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:23.948166  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.948598  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.948633  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.948777  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:23.948980  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:23.949138  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:23.949265  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:23.949425  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:23.949655  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:23.949669  357912 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:41:24.062400  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:41:24.062440  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetMachineName
	I1205 21:41:24.062712  357912 buildroot.go:166] provisioning hostname "default-k8s-diff-port-751353"
	I1205 21:41:24.062742  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetMachineName
	I1205 21:41:24.062947  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.065557  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.066077  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.066109  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.066235  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.066415  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.066571  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.066751  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.066932  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:24.067122  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:24.067134  357912 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-751353 && echo "default-k8s-diff-port-751353" | sudo tee /etc/hostname
	I1205 21:41:24.190609  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-751353
	
	I1205 21:41:24.190662  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.193538  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.193946  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.193985  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.194231  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.194443  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.194660  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.194909  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.195186  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:24.195396  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:24.195417  357912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-751353' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-751353/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-751353' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:41:24.310725  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:41:24.310770  357912 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:41:24.310812  357912 buildroot.go:174] setting up certificates
	I1205 21:41:24.310829  357912 provision.go:84] configureAuth start
	I1205 21:41:24.310839  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetMachineName
	I1205 21:41:24.311138  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:24.314161  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.314528  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.314552  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.314722  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.316953  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.317283  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.317324  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.317483  357912 provision.go:143] copyHostCerts
	I1205 21:41:24.317548  357912 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:41:24.317571  357912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:41:24.317629  357912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:41:24.317723  357912 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:41:24.317732  357912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:41:24.317753  357912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:41:24.317872  357912 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:41:24.317883  357912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:41:24.317933  357912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:41:24.318001  357912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-751353 san=[127.0.0.1 192.168.39.106 default-k8s-diff-port-751353 localhost minikube]
	I1205 21:41:24.483065  357912 provision.go:177] copyRemoteCerts
	I1205 21:41:24.483137  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:41:24.483175  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.486663  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.487074  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.487112  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.487277  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.487508  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.487726  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.487899  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:24.572469  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:41:24.597375  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1205 21:41:24.622122  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:41:24.649143  357912 provision.go:87] duration metric: took 338.295707ms to configureAuth
	I1205 21:41:24.649188  357912 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:41:24.649464  357912 config.go:182] Loaded profile config "default-k8s-diff-port-751353": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:41:24.649609  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.652646  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.653051  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.653101  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.653259  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.653492  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.653689  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.653841  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.654054  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:24.654213  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:24.654235  357912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:41:24.893672  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:41:24.893703  357912 machine.go:96] duration metric: took 948.646561ms to provisionDockerMachine
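	provisionDockerMachine drives the steps above (querying `hostname`, setting it, patching /etc/hosts, copying TLS material) over SSH using the machine's private key. A rough, self-contained sketch of running one such command with golang.org/x/crypto/ssh, reusing the address and key path from the log but otherwise illustrative only:

```go
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH executes a single command on the guest, roughly what each
// "ssh_runner.go:195] Run: ..." line in the log corresponds to.
func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // matches StrictHostKeyChecking=no above
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	out, err := runOverSSH(
		"192.168.39.106:22", "docker",
		"/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa",
		"hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}
```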
	I1205 21:41:24.893719  357912 start.go:293] postStartSetup for "default-k8s-diff-port-751353" (driver="kvm2")
	I1205 21:41:24.893733  357912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:41:24.893755  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:24.894145  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:41:24.894185  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.897565  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.897988  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.898022  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.898262  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.898579  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.898840  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.899066  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:24.986299  357912 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:41:24.991211  357912 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:41:24.991251  357912 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:41:24.991341  357912 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:41:24.991456  357912 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:41:24.991601  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:41:25.002264  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:25.031129  357912 start.go:296] duration metric: took 137.388294ms for postStartSetup
	I1205 21:41:25.031184  357912 fix.go:56] duration metric: took 19.859807882s for fixHost
	I1205 21:41:25.031214  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:25.034339  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.034678  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.034715  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.035027  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:25.035309  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.035501  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.035655  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:25.035858  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:25.036066  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:25.036081  357912 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:41:25.151697  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434885.125327326
	
	I1205 21:41:25.151729  357912 fix.go:216] guest clock: 1733434885.125327326
	I1205 21:41:25.151741  357912 fix.go:229] Guest: 2024-12-05 21:41:25.125327326 +0000 UTC Remote: 2024-12-05 21:41:25.03119011 +0000 UTC m=+264.754619927 (delta=94.137216ms)
	I1205 21:41:25.151796  357912 fix.go:200] guest clock delta is within tolerance: 94.137216ms
	I1205 21:41:25.151807  357912 start.go:83] releasing machines lock for "default-k8s-diff-port-751353", held for 19.980496597s
	I1205 21:41:25.151845  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.152105  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:25.155285  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.155698  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.155735  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.155871  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.156424  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.156613  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.156747  357912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:41:25.156796  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:25.156844  357912 ssh_runner.go:195] Run: cat /version.json
	I1205 21:41:25.156876  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:25.159945  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160382  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160439  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.160464  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160692  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.160722  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160728  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:25.160943  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:25.160957  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.161100  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.161218  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:25.161341  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:25.161370  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:25.161473  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:25.244449  357912 ssh_runner.go:195] Run: systemctl --version
	I1205 21:41:25.271151  357912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:41:25.179884  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .Start
	I1205 21:41:25.180144  358357 main.go:141] libmachine: (old-k8s-version-601806) Ensuring networks are active...
	I1205 21:41:25.181095  358357 main.go:141] libmachine: (old-k8s-version-601806) Ensuring network default is active
	I1205 21:41:25.181522  358357 main.go:141] libmachine: (old-k8s-version-601806) Ensuring network mk-old-k8s-version-601806 is active
	I1205 21:41:25.181972  358357 main.go:141] libmachine: (old-k8s-version-601806) Getting domain xml...
	I1205 21:41:25.182848  358357 main.go:141] libmachine: (old-k8s-version-601806) Creating domain...
	I1205 21:41:25.428417  357912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:41:25.436849  357912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:41:25.436929  357912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:41:25.457952  357912 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:41:25.457989  357912 start.go:495] detecting cgroup driver to use...
	I1205 21:41:25.458073  357912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:41:25.478406  357912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:41:25.497547  357912 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:41:25.497636  357912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:41:25.516564  357912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:41:25.535753  357912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:41:25.692182  357912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:41:25.880739  357912 docker.go:233] disabling docker service ...
	I1205 21:41:25.880812  357912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:41:25.896490  357912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:41:25.911107  357912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:41:26.048384  357912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:41:26.186026  357912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:41:26.200922  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:41:26.221768  357912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:41:26.221848  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.232550  357912 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:41:26.232665  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.243173  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.254233  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.264888  357912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:41:26.275876  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.286642  357912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.311188  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.322696  357912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:41:26.332006  357912 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:41:26.332075  357912 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:41:26.345881  357912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:41:26.362014  357912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:26.487972  357912 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:41:26.584162  357912 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:41:26.584275  357912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:41:26.589290  357912 start.go:563] Will wait 60s for crictl version
	I1205 21:41:26.589379  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:41:26.593337  357912 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:41:26.629326  357912 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:41:26.629455  357912 ssh_runner.go:195] Run: crio --version
	I1205 21:41:26.656684  357912 ssh_runner.go:195] Run: crio --version
	I1205 21:41:26.685571  357912 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
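
The lines above show the CRI-O runtime being prepared: the drop-in /etc/crio/crio.conf.d/02-crio.conf is edited with sed (pause image, cgroupfs cgroup manager, conmon cgroup, default sysctls), systemd is reloaded and crio restarted. The sketch below is only an illustration of those shell steps driven from Go, not minikube's actual implementation; it assumes local root via sudo instead of the ssh_runner used in the log.

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    // run executes a command and surfaces its combined output on failure.
    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        if err != nil {
            log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
        }
    }

    func main() {
        conf := "/etc/crio/crio.conf.d/02-crio.conf"

        // Point CRI-O at the pause image and the cgroupfs cgroup manager,
        // mirroring the sed edits that appear in the log above.
        run("sudo", "sed", "-i",
            `s|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|`, conf)
        run("sudo", "sed", "-i",
            `s|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|`, conf)

        // Pick up the new configuration.
        run("sudo", "systemctl", "daemon-reload")
        run("sudo", "systemctl", "restart", "crio")

        fmt.Println("crio reconfigured and restarted")
    }
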
	I1205 21:41:23.536422  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:23.749946  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:23.804210  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:23.887538  357831 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:41:23.887671  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:24.387809  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:24.887821  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:24.905947  357831 api_server.go:72] duration metric: took 1.018402152s to wait for apiserver process to appear ...
	I1205 21:41:24.905979  357831 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:41:24.906008  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:24.906658  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": dial tcp 192.168.50.141:8443: connect: connection refused
	I1205 21:41:25.406416  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:26.687438  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:26.690614  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:26.691032  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:26.691070  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:26.691314  357912 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 21:41:26.695524  357912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:26.708289  357912 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-751353 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-751353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:41:26.708409  357912 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:41:26.708474  357912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:26.757258  357912 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 21:41:26.757363  357912 ssh_runner.go:195] Run: which lz4
	I1205 21:41:26.762809  357912 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:41:26.767369  357912 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:41:26.767411  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 21:41:28.161289  357912 crio.go:462] duration metric: took 1.398584393s to copy over tarball
	I1205 21:41:28.161397  357912 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
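
No preloaded images were found on the node, so the ~392 MB preload tarball is copied over and unpacked into /var with tar's lz4 filter, preserving extended attributes. A minimal sketch of that extraction step, assuming the archive already sits at /preloaded.tar.lz4 and lz4 is installed; the wrapper is illustrative, not minikube code.

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Unpack the preloaded container images into /var, preserving
        // extended attributes, with lz4 as the decompression filter
        // (the same flags that appear in the log above).
        cmd := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extraction failed: %v\n%s", err, out)
        }
    }
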
	I1205 21:41:26.542343  358357 main.go:141] libmachine: (old-k8s-version-601806) Waiting to get IP...
	I1205 21:41:26.543246  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:26.543692  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:26.543765  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:26.543663  359172 retry.go:31] will retry after 193.087452ms: waiting for machine to come up
	I1205 21:41:26.738243  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:26.738682  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:26.738713  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:26.738634  359172 retry.go:31] will retry after 347.304831ms: waiting for machine to come up
	I1205 21:41:27.088372  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:27.088982  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:27.089018  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:27.088880  359172 retry.go:31] will retry after 416.785806ms: waiting for machine to come up
	I1205 21:41:27.507765  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:27.508291  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:27.508320  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:27.508250  359172 retry.go:31] will retry after 407.585006ms: waiting for machine to come up
	I1205 21:41:27.918225  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:27.918900  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:27.918930  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:27.918844  359172 retry.go:31] will retry after 612.014901ms: waiting for machine to come up
	I1205 21:41:28.532179  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:28.532625  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:28.532658  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:28.532561  359172 retry.go:31] will retry after 784.813224ms: waiting for machine to come up
	I1205 21:41:29.318697  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:29.319199  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:29.319234  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:29.319136  359172 retry.go:31] will retry after 827.384433ms: waiting for machine to come up
	I1205 21:41:30.148284  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:30.148684  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:30.148711  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:30.148642  359172 retry.go:31] will retry after 1.314535235s: waiting for machine to come up
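
While that happens, the old-k8s-version machine is still waiting for a DHCP lease; the driver polls for the IP and sleeps a little longer after each miss ("will retry after ..."). A generic sketch of that retry-with-growing-delay pattern follows; lookupIP is a placeholder for the real lease query, and the delay growth is illustrative rather than the exact schedule used by retry.go.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // lookupIP stands in for querying the VM network's DHCP leases;
    // it is a placeholder, not the real libmachine call.
    func lookupIP() (string, error) {
        return "", errors.New("no lease yet")
    }

    // waitForIP polls lookupIP, waiting a bit longer between attempts,
    // in the spirit of the retry messages in the log above.
    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
            time.Sleep(delay)
            if delay < 5*time.Second {
                delay = delay * 3 / 2 // grow the delay, roughly like the log
            }
        }
        return "", errors.New("timed out waiting for machine IP")
    }

    func main() {
        if _, err := waitForIP(3 * time.Second); err != nil {
            fmt.Println(err)
        }
    }
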
	I1205 21:41:30.406823  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:30.406896  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:30.321824  357912 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16037347s)
	I1205 21:41:30.321868  357912 crio.go:469] duration metric: took 2.160535841s to extract the tarball
	I1205 21:41:30.321879  357912 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 21:41:30.358990  357912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:30.401957  357912 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 21:41:30.401988  357912 cache_images.go:84] Images are preloaded, skipping loading
	I1205 21:41:30.402000  357912 kubeadm.go:934] updating node { 192.168.39.106 8444 v1.31.2 crio true true} ...
	I1205 21:41:30.402143  357912 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-751353 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-751353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:41:30.402242  357912 ssh_runner.go:195] Run: crio config
	I1205 21:41:30.452788  357912 cni.go:84] Creating CNI manager for ""
	I1205 21:41:30.452819  357912 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:30.452832  357912 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:41:30.452864  357912 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.106 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-751353 NodeName:default-k8s-diff-port-751353 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:41:30.453016  357912 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.106
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-751353"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.106"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.106"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:41:30.453081  357912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:41:30.463027  357912 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:41:30.463098  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:41:30.472345  357912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 21:41:30.489050  357912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:41:30.505872  357912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1205 21:41:30.523157  357912 ssh_runner.go:195] Run: grep 192.168.39.106	control-plane.minikube.internal$ /etc/hosts
	I1205 21:41:30.527012  357912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
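
The two commands above make /etc/hosts idempotent for control-plane.minikube.internal: grep checks whether the entry exists, and if not, the file is rewritten without any stale line for that host and the new mapping is appended. A small Go sketch of the same idea, writing to a scratch file (hosts.test is a stand-in so the example does not touch the real /etc/hosts); the IP and hostname come from the log, everything else is illustrative.

    package main

    import (
        "log"
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites hostsPath so that exactly one line maps
    // host to ip, dropping any previous line for the same host
    // (the `grep -v` + echo trick from the log above).
    func ensureHostsEntry(hostsPath, ip, host string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        content := strings.TrimRight(string(data), "\n")
        var kept []string
        for _, line := range strings.Split(content, "\n") {
            if strings.HasSuffix(strings.TrimRight(line, " \t"), "\t"+host) {
                continue // drop the stale entry
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        tmp := hostsPath + ".tmp"
        if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            return err
        }
        return os.Rename(tmp, hostsPath) // the real /etc/hosts would need root
    }

    func main() {
        _ = os.WriteFile("hosts.test", []byte("127.0.0.1\tlocalhost\n"), 0644)
        if err := ensureHostsEntry("hosts.test", "192.168.39.106", "control-plane.minikube.internal"); err != nil {
            log.Fatal(err)
        }
    }
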
	I1205 21:41:30.538965  357912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:30.668866  357912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:30.686150  357912 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353 for IP: 192.168.39.106
	I1205 21:41:30.686187  357912 certs.go:194] generating shared ca certs ...
	I1205 21:41:30.686218  357912 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:30.686416  357912 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:41:30.686483  357912 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:41:30.686499  357912 certs.go:256] generating profile certs ...
	I1205 21:41:30.686629  357912 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/client.key
	I1205 21:41:30.686701  357912 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/apiserver.key.ec661d8c
	I1205 21:41:30.686738  357912 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/proxy-client.key
	I1205 21:41:30.686861  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:41:30.686890  357912 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:41:30.686898  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:41:30.686921  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:41:30.686942  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:41:30.686979  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:41:30.687017  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:30.687858  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:41:30.732722  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:41:30.762557  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:41:30.797976  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:41:30.825854  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1205 21:41:30.863220  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 21:41:30.887018  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:41:30.913503  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 21:41:30.940557  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:41:30.965468  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:41:30.991147  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:41:31.016782  357912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:41:31.036286  357912 ssh_runner.go:195] Run: openssl version
	I1205 21:41:31.042388  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:41:31.053011  357912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:31.057796  357912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:31.057880  357912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:31.064075  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:41:31.076633  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:41:31.089138  357912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:41:31.093653  357912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:41:31.093733  357912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:41:31.099403  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:41:31.111902  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:41:31.122743  357912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:41:31.127551  357912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:41:31.127666  357912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:41:31.133373  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:41:31.143934  357912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:41:31.148739  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:41:31.154995  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:41:31.161288  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:41:31.167555  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:41:31.173476  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:41:31.179371  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
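
Each of the `openssl x509 -noout -checkend 86400` runs above simply asks whether a certificate expires within the next 24 hours. The same check can be done directly with Go's crypto/x509, as a minimal sketch; the path below is one of the certificates named in the log and the 24-hour window matches the 86400-second argument.

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "errors"
        "fmt"
        "log"
        "os"
        "time"
    )

    // expiresWithin reports whether the PEM certificate at path expires
    // within the given window (the equivalent of `openssl x509 -checkend`).
    func expiresWithin(path string, window time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, errors.New("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(window).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("expires within 24h:", soon)
    }
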
	I1205 21:41:31.185238  357912 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-751353 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-751353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:41:31.185381  357912 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:41:31.185440  357912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:31.221359  357912 cri.go:89] found id: ""
	I1205 21:41:31.221448  357912 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:41:31.231975  357912 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:41:31.231997  357912 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:41:31.232043  357912 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:41:31.241662  357912 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:41:31.242685  357912 kubeconfig.go:125] found "default-k8s-diff-port-751353" server: "https://192.168.39.106:8444"
	I1205 21:41:31.244889  357912 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:41:31.254747  357912 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.106
	I1205 21:41:31.254798  357912 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:41:31.254815  357912 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:41:31.254884  357912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:31.291980  357912 cri.go:89] found id: ""
	I1205 21:41:31.292075  357912 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:41:31.312332  357912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:41:31.322240  357912 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:41:31.322267  357912 kubeadm.go:157] found existing configuration files:
	
	I1205 21:41:31.322323  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1205 21:41:31.331374  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:41:31.331462  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:41:31.340916  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1205 21:41:31.350121  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:41:31.350209  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:41:31.361302  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1205 21:41:31.372251  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:41:31.372316  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:41:31.383250  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1205 21:41:31.393771  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:41:31.393830  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:41:31.404949  357912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:41:31.416349  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:31.518522  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:32.687862  357912 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.169290848s)
	I1205 21:41:32.687902  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:32.918041  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:33.001916  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:33.088916  357912 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:41:33.089029  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:33.589452  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:34.089830  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:34.589399  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:34.606029  357912 api_server.go:72] duration metric: took 1.517086306s to wait for apiserver process to appear ...
	I1205 21:41:34.606071  357912 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:41:34.606100  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:31.465575  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:31.466129  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:31.466149  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:31.466051  359172 retry.go:31] will retry after 1.375463745s: waiting for machine to come up
	I1205 21:41:32.843149  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:32.843640  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:32.843672  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:32.843577  359172 retry.go:31] will retry after 1.414652744s: waiting for machine to come up
	I1205 21:41:34.259549  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:34.260076  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:34.260106  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:34.260026  359172 retry.go:31] will retry after 2.845213342s: waiting for machine to come up
	I1205 21:41:35.408016  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:35.408069  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:37.262251  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:41:37.262290  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:41:37.262311  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:37.319344  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:41:37.319389  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:41:37.606930  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:37.611927  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:41:37.611962  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:41:38.106614  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:38.111641  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:41:38.111677  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:41:38.606218  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:38.613131  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 200:
	ok
	I1205 21:41:38.628002  357912 api_server.go:141] control plane version: v1.31.2
	I1205 21:41:38.628040  357912 api_server.go:131] duration metric: took 4.021961685s to wait for apiserver health ...
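
The 403/500/200 sequence above is the usual shape of an apiserver coming up: anonymous requests are rejected first, then post-start hooks finish and /healthz finally answers 200. A minimal poller in the same spirit, skipping TLS verification the way an unauthenticated probe would; the URL is the one from the log and the timing values are arbitrary.

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it answers 200 OK or the deadline passes.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The apiserver cert is signed by the cluster CA, which this
                // bare probe does not trust, so skip verification here.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Println("healthz not ready yet:", resp.Status)
            } else {
                fmt.Println("healthz not reachable yet:", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %v", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.39.106:8444/healthz", time.Minute); err != nil {
            fmt.Println(err)
        }
    }
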
	I1205 21:41:38.628050  357912 cni.go:84] Creating CNI manager for ""
	I1205 21:41:38.628057  357912 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:38.630126  357912 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:41:38.631655  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:41:38.645320  357912 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 21:41:38.668869  357912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:41:38.680453  357912 system_pods.go:59] 8 kube-system pods found
	I1205 21:41:38.680493  357912 system_pods.go:61] "coredns-7c65d6cfc9-mll8z" [fcea0826-1093-43ce-87d0-26fb19447609] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 21:41:38.680501  357912 system_pods.go:61] "etcd-default-k8s-diff-port-751353" [dbade41d-2f53-45d5-aeb6-9b7df2565ef6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 21:41:38.680509  357912 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751353" [b94221d1-ab02-4406-9f34-156813ddfd4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 21:41:38.680516  357912 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751353" [70669ec2-cddf-4ec9-a3d6-ee7cab8cd75c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 21:41:38.680521  357912 system_pods.go:61] "kube-proxy-b4ws4" [d2620959-e3e4-4575-af26-243207a83495] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 21:41:38.680526  357912 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751353" [4a519b64-0ddd-425c-a8d6-de52e3508f80] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 21:41:38.680537  357912 system_pods.go:61] "metrics-server-6867b74b74-xb867" [6ac4cc31-ed56-44b9-9a83-76296436bc34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:41:38.680541  357912 system_pods.go:61] "storage-provisioner" [aabf9cc9-c416-4db2-97b0-23533dd76c28] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 21:41:38.680549  357912 system_pods.go:74] duration metric: took 11.655012ms to wait for pod list to return data ...
	I1205 21:41:38.680557  357912 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:41:38.685260  357912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:41:38.685290  357912 node_conditions.go:123] node cpu capacity is 2
	I1205 21:41:38.685302  357912 node_conditions.go:105] duration metric: took 4.740612ms to run NodePressure ...
	I1205 21:41:38.685335  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:38.997715  357912 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 21:41:39.003388  357912 kubeadm.go:739] kubelet initialised
	I1205 21:41:39.003422  357912 kubeadm.go:740] duration metric: took 5.675839ms waiting for restarted kubelet to initialise ...
	I1205 21:41:39.003435  357912 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:41:39.008779  357912 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.015438  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.015469  357912 pod_ready.go:82] duration metric: took 6.659336ms for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.015480  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.015487  357912 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.022944  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.022979  357912 pod_ready.go:82] duration metric: took 7.480121ms for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.022992  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.023000  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.030021  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.030060  357912 pod_ready.go:82] duration metric: took 7.051363ms for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.030077  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.030087  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.074051  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.074103  357912 pod_ready.go:82] duration metric: took 44.006019ms for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.074130  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.074142  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.472623  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-proxy-b4ws4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.472654  357912 pod_ready.go:82] duration metric: took 398.499259ms for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.472665  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-proxy-b4ws4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.472673  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.873821  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.873863  357912 pod_ready.go:82] duration metric: took 401.179066ms for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.873887  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.873914  357912 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:40.272289  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:40.272322  357912 pod_ready.go:82] duration metric: took 398.392874ms for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:40.272338  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:40.272349  357912 pod_ready.go:39] duration metric: took 1.268896186s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
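The per-pod readiness polling recorded above can be reproduced against the same cluster with plain kubectl. A hedged sketch, assuming the same context name and the 4m0s window used in the log:

	# list the system-critical pods minikube is polling
	kubectl --context default-k8s-diff-port-751353 -n kube-system get pods -o wide

	# block until the node itself reports Ready, which is what the skipped checks above are waiting on
	kubectl --context default-k8s-diff-port-751353 wait --for=condition=Ready \
	  node/default-k8s-diff-port-751353 --timeout=4m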
	I1205 21:41:40.272381  357912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:41:40.284524  357912 ops.go:34] apiserver oom_adj: -16
	I1205 21:41:40.284549  357912 kubeadm.go:597] duration metric: took 9.052545962s to restartPrimaryControlPlane
	I1205 21:41:40.284576  357912 kubeadm.go:394] duration metric: took 9.09933298s to StartCluster
	I1205 21:41:40.284597  357912 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:40.284680  357912 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:41:40.286372  357912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:40.286676  357912 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.106 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:41:40.286766  357912 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:41:40.286905  357912 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-751353"
	I1205 21:41:40.286928  357912 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-751353"
	I1205 21:41:40.286933  357912 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-751353"
	I1205 21:41:40.286985  357912 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-751353"
	I1205 21:41:40.286986  357912 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-751353"
	I1205 21:41:40.287022  357912 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-751353"
	W1205 21:41:40.286939  357912 addons.go:243] addon storage-provisioner should already be in state true
	W1205 21:41:40.287039  357912 addons.go:243] addon metrics-server should already be in state true
	I1205 21:41:40.287110  357912 host.go:66] Checking if "default-k8s-diff-port-751353" exists ...
	I1205 21:41:40.286937  357912 config.go:182] Loaded profile config "default-k8s-diff-port-751353": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:41:40.287215  357912 host.go:66] Checking if "default-k8s-diff-port-751353" exists ...
	I1205 21:41:40.287507  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.287571  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.287640  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.287577  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.287688  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.287824  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.288418  357912 out.go:177] * Verifying Kubernetes components...
	I1205 21:41:40.289707  357912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:40.304423  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45233
	I1205 21:41:40.304453  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I1205 21:41:40.304433  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38023
	I1205 21:41:40.304933  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.305518  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.305712  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.305741  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.306151  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.306169  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.306548  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.306829  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.307143  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.307153  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.307800  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.307824  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.308518  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.308565  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.308987  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.309564  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.309596  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.311352  357912 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-751353"
	W1205 21:41:40.311374  357912 addons.go:243] addon default-storageclass should already be in state true
	I1205 21:41:40.311408  357912 host.go:66] Checking if "default-k8s-diff-port-751353" exists ...
	I1205 21:41:40.311880  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.311929  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.325059  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36109
	I1205 21:41:40.325663  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.326356  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.326400  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.326752  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.326942  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.327767  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I1205 21:41:40.328173  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.328657  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.328678  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.328768  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:40.328984  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.329370  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.329409  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.329811  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I1205 21:41:40.330230  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.330631  357912 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 21:41:40.330708  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.330726  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.331052  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.331216  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.332202  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 21:41:40.332226  357912 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 21:41:40.332260  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:40.333642  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:40.335436  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.335614  357912 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:37.107579  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:37.108121  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:37.108153  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:37.108064  359172 retry.go:31] will retry after 2.969209087s: waiting for machine to come up
	I1205 21:41:40.079008  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:40.079546  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:40.079631  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:40.079495  359172 retry.go:31] will retry after 4.062877726s: waiting for machine to come up
	I1205 21:41:40.335902  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:40.335936  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.336055  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:40.336244  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:40.336387  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:40.336516  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:40.337155  357912 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:41:40.337173  357912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 21:41:40.337195  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:40.339861  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.340258  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:40.340291  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.340556  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:40.340737  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:40.340888  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:40.341009  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:40.353260  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42177
	I1205 21:41:40.353780  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.354465  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.354495  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.354914  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.355181  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.357128  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:40.357445  357912 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 21:41:40.357466  357912 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 21:41:40.357487  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:40.360926  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.361410  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:40.361436  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.361753  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:40.361968  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:40.362143  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:40.362304  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:40.489718  357912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:40.506486  357912 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-751353" to be "Ready" ...
	I1205 21:41:40.575280  357912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:41:40.594938  357912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 21:41:40.709917  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 21:41:40.709953  357912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 21:41:40.766042  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 21:41:40.766076  357912 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 21:41:40.841338  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:41:40.841371  357912 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 21:41:40.890122  357912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:41:41.864084  357912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.269106426s)
	I1205 21:41:41.864153  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864168  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864080  357912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.288748728s)
	I1205 21:41:41.864273  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864294  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864544  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.864563  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.864592  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.864614  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.864614  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Closing plugin on server side
	I1205 21:41:41.864623  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864641  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864682  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864714  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864909  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.864929  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.865021  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Closing plugin on server side
	I1205 21:41:41.865050  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.865073  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.873134  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.873158  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.873488  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.873517  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.896304  357912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.006129117s)
	I1205 21:41:41.896383  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.896401  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.896726  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.896749  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.896760  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.896770  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.897064  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.897084  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.897097  357912 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-751353"
	I1205 21:41:41.899809  357912 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1205 21:41:40.409151  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:40.409197  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:41.901166  357912 addons.go:510] duration metric: took 1.61441521s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1205 21:41:42.512064  357912 node_ready.go:53] node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:45.011050  357912 node_ready.go:53] node "default-k8s-diff-port-751353" has status "Ready":"False"
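For reference, the addon roll-out summarized above (storage-provisioner, default-storageclass, metrics-server) can also be driven from the minikube CLI rather than the test harness; a sketch assuming the profile name taken from this log:

	minikube addons enable metrics-server -p default-k8s-diff-port-751353
	minikube addons list -p default-k8s-diff-port-751353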
	I1205 21:41:44.147162  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.147843  358357 main.go:141] libmachine: (old-k8s-version-601806) Found IP for machine: 192.168.61.123
	I1205 21:41:44.147874  358357 main.go:141] libmachine: (old-k8s-version-601806) Reserving static IP address...
	I1205 21:41:44.147892  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has current primary IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.148399  358357 main.go:141] libmachine: (old-k8s-version-601806) Reserved static IP address: 192.168.61.123
	I1205 21:41:44.148443  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "old-k8s-version-601806", mac: "52:54:00:11:1e:c8", ip: "192.168.61.123"} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.148458  358357 main.go:141] libmachine: (old-k8s-version-601806) Waiting for SSH to be available...
	I1205 21:41:44.148487  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | skip adding static IP to network mk-old-k8s-version-601806 - found existing host DHCP lease matching {name: "old-k8s-version-601806", mac: "52:54:00:11:1e:c8", ip: "192.168.61.123"}
	I1205 21:41:44.148519  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | Getting to WaitForSSH function...
	I1205 21:41:44.151017  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.151372  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.151406  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.151544  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | Using SSH client type: external
	I1205 21:41:44.151575  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa (-rw-------)
	I1205 21:41:44.151611  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:41:44.151629  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | About to run SSH command:
	I1205 21:41:44.151656  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | exit 0
	I1205 21:41:44.282019  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | SSH cmd err, output: <nil>: 
	I1205 21:41:44.282419  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetConfigRaw
	I1205 21:41:44.283146  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:44.285924  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.286335  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.286365  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.286633  358357 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/config.json ...
	I1205 21:41:44.286844  358357 machine.go:93] provisionDockerMachine start ...
	I1205 21:41:44.286865  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:44.287119  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.289692  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.290060  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.290090  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.290192  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.290392  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.290567  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.290726  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.290904  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:44.291168  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:44.291183  358357 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:41:44.410444  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:41:44.410483  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:41:44.410769  358357 buildroot.go:166] provisioning hostname "old-k8s-version-601806"
	I1205 21:41:44.410800  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:41:44.410975  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.414019  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.414402  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.414437  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.414618  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.414822  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.415001  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.415139  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.415384  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:44.415620  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:44.415639  358357 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-601806 && echo "old-k8s-version-601806" | sudo tee /etc/hostname
	I1205 21:41:44.544783  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-601806
	
	I1205 21:41:44.544820  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.547980  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.548253  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.548284  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.548548  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.548806  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.549015  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.549199  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.549363  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:44.549596  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:44.549625  358357 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-601806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-601806/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-601806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:41:44.675051  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:41:44.675089  358357 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:41:44.675133  358357 buildroot.go:174] setting up certificates
	I1205 21:41:44.675147  358357 provision.go:84] configureAuth start
	I1205 21:41:44.675161  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:41:44.675484  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:44.678325  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.678651  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.678670  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.678845  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.681024  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.681380  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.681419  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.681555  358357 provision.go:143] copyHostCerts
	I1205 21:41:44.681614  358357 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:41:44.681635  358357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:41:44.681692  358357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:41:44.681807  358357 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:41:44.681818  358357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:41:44.681840  358357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:41:44.681895  358357 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:41:44.681923  358357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:41:44.681950  358357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:41:44.682008  358357 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-601806 san=[127.0.0.1 192.168.61.123 localhost minikube old-k8s-version-601806]
	I1205 21:41:44.920345  358357 provision.go:177] copyRemoteCerts
	I1205 21:41:44.920412  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:41:44.920445  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.923237  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.923573  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.923617  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.923858  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.924082  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.924266  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.924408  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.013123  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:41:45.037220  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 21:41:45.061460  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:41:45.086412  358357 provision.go:87] duration metric: took 411.247612ms to configureAuth
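The server certificate generated during configureAuth above lists the SANs 127.0.0.1, 192.168.61.123, localhost, minikube and old-k8s-version-601806. One way to confirm them from the host, using the machine path that appears in this log:

	openssl x509 -in /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem \
	  -noout -text | grep -A1 'Subject Alternative Name'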
	I1205 21:41:45.086449  358357 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:41:45.086670  358357 config.go:182] Loaded profile config "old-k8s-version-601806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 21:41:45.086772  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.089593  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.090011  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.090044  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.090279  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.090515  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.090695  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.090845  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.091119  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:45.091338  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:45.091355  358357 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:41:45.320779  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:41:45.320809  358357 machine.go:96] duration metric: took 1.033951427s to provisionDockerMachine
	I1205 21:41:45.320822  358357 start.go:293] postStartSetup for "old-k8s-version-601806" (driver="kvm2")
	I1205 21:41:45.320833  358357 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:41:45.320864  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.321259  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:41:45.321295  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.324521  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.324898  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.324926  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.325061  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.325278  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.325449  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.325608  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.413576  358357 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:41:45.418099  358357 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:41:45.418129  358357 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:41:45.418192  358357 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:41:45.418313  358357 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:41:45.418436  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:41:45.428537  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:45.453505  358357 start.go:296] duration metric: took 132.665138ms for postStartSetup
	I1205 21:41:45.453578  358357 fix.go:56] duration metric: took 20.301569608s for fixHost
	I1205 21:41:45.453610  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.456671  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.457095  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.457119  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.457317  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.457534  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.457723  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.457851  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.458100  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:45.458291  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:45.458303  358357 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:41:45.574874  357296 start.go:364] duration metric: took 55.701965725s to acquireMachinesLock for "embed-certs-425614"
	I1205 21:41:45.574934  357296 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:41:45.574944  357296 fix.go:54] fixHost starting: 
	I1205 21:41:45.575470  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:45.575532  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:45.593184  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39281
	I1205 21:41:45.593628  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:45.594222  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:41:45.594249  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:45.594599  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:45.594797  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:41:45.594945  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:41:45.596532  357296 fix.go:112] recreateIfNeeded on embed-certs-425614: state=Stopped err=<nil>
	I1205 21:41:45.596560  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	W1205 21:41:45.596698  357296 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:41:45.598630  357296 out.go:177] * Restarting existing kvm2 VM for "embed-certs-425614" ...
	I1205 21:41:45.574677  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434905.556875765
	
	I1205 21:41:45.574707  358357 fix.go:216] guest clock: 1733434905.556875765
	I1205 21:41:45.574720  358357 fix.go:229] Guest: 2024-12-05 21:41:45.556875765 +0000 UTC Remote: 2024-12-05 21:41:45.453584649 +0000 UTC m=+209.931227837 (delta=103.291116ms)
	I1205 21:41:45.574744  358357 fix.go:200] guest clock delta is within tolerance: 103.291116ms
	I1205 21:41:45.574749  358357 start.go:83] releasing machines lock for "old-k8s-version-601806", held for 20.422787607s
	I1205 21:41:45.574777  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.575102  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:45.578097  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.578534  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.578565  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.578786  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.579457  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.579662  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.579786  358357 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:41:45.579845  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.579919  358357 ssh_runner.go:195] Run: cat /version.json
	I1205 21:41:45.579944  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.582811  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.582951  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.583117  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.583153  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.583388  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.583409  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.583436  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.583601  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.583609  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.583801  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.583868  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.583990  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.584026  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.584185  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.667101  358357 ssh_runner.go:195] Run: systemctl --version
	I1205 21:41:45.694059  358357 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:41:45.843409  358357 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:41:45.849628  358357 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:41:45.849714  358357 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:41:45.867490  358357 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:41:45.867526  358357 start.go:495] detecting cgroup driver to use...
	I1205 21:41:45.867613  358357 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:41:45.887817  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:41:45.902760  358357 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:41:45.902837  358357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:41:45.921492  358357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:41:45.938236  358357 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:41:46.094034  358357 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:41:46.313078  358357 docker.go:233] disabling docker service ...
	I1205 21:41:46.313159  358357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:41:46.330094  358357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:41:46.348887  358357 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:41:46.539033  358357 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:41:46.664752  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:41:46.681892  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:41:46.703802  358357 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 21:41:46.703907  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.716808  358357 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:41:46.716869  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.728088  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.739606  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.750998  358357 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:41:46.763097  358357 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:41:46.773657  358357 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:41:46.773720  358357 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:41:46.787789  358357 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:41:46.799018  358357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:46.920247  358357 ssh_runner.go:195] Run: sudo systemctl restart crio
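The run above points crictl at the CRI-O socket, rewrites the CRI-O drop-in (pause image, cgroup manager, conmon cgroup), and then restarts the runtime. A minimal shell sketch of the end state those edits aim for follows; it is illustrative only, since minikube edits the existing 02-crio.conf in place with sed rather than replacing it, and the [crio.image]/[crio.runtime] section headers are an assumption (the log only shows the individual keys):

    # crictl defaults, as written by the tee above
    printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml

    # equivalent end state of /etc/crio/crio.conf.d/02-crio.conf after the sed edits
    printf '%s\n' '[crio.image]' 'pause_image = "registry.k8s.io/pause:3.2"' \
      '[crio.runtime]' 'cgroup_manager = "cgroupfs"' 'conmon_cgroup = "pod"' |
      sudo tee /etc/crio/crio.conf.d/02-crio.conf >/dev/null
    sudo systemctl restart crio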
	I1205 21:41:47.024151  358357 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:41:47.024236  358357 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:41:47.029240  358357 start.go:563] Will wait 60s for crictl version
	I1205 21:41:47.029326  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:47.033665  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:41:47.072480  358357 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:41:47.072588  358357 ssh_runner.go:195] Run: crio --version
	I1205 21:41:47.110829  358357 ssh_runner.go:195] Run: crio --version
	I1205 21:41:47.141698  358357 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1205 21:41:45.600135  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Start
	I1205 21:41:45.600390  357296 main.go:141] libmachine: (embed-certs-425614) Ensuring networks are active...
	I1205 21:41:45.601186  357296 main.go:141] libmachine: (embed-certs-425614) Ensuring network default is active
	I1205 21:41:45.601636  357296 main.go:141] libmachine: (embed-certs-425614) Ensuring network mk-embed-certs-425614 is active
	I1205 21:41:45.602188  357296 main.go:141] libmachine: (embed-certs-425614) Getting domain xml...
	I1205 21:41:45.603057  357296 main.go:141] libmachine: (embed-certs-425614) Creating domain...
	I1205 21:41:47.045240  357296 main.go:141] libmachine: (embed-certs-425614) Waiting to get IP...
	I1205 21:41:47.046477  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.047047  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.047150  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.047040  359359 retry.go:31] will retry after 219.743522ms: waiting for machine to come up
	I1205 21:41:47.268762  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.269407  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.269442  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.269336  359359 retry.go:31] will retry after 242.318322ms: waiting for machine to come up
	I1205 21:41:45.410351  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:45.410420  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:45.616395  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": read tcp 192.168.50.1:48034->192.168.50.141:8443: read: connection reset by peer
	I1205 21:41:45.906800  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:45.907594  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": dial tcp 192.168.50.141:8443: connect: connection refused
	I1205 21:41:46.407096  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:47.011671  357912 node_ready.go:53] node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:48.011005  357912 node_ready.go:49] node "default-k8s-diff-port-751353" has status "Ready":"True"
	I1205 21:41:48.011040  357912 node_ready.go:38] duration metric: took 7.504506203s for node "default-k8s-diff-port-751353" to be "Ready" ...
	I1205 21:41:48.011060  357912 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:41:48.021950  357912 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:48.038141  357912 pod_ready.go:93] pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:48.038176  357912 pod_ready.go:82] duration metric: took 16.187757ms for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:48.038191  357912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:50.046001  357912 pod_ready.go:103] pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"False"
	I1205 21:41:47.143015  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:47.146059  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:47.146503  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:47.146536  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:47.146811  358357 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1205 21:41:47.151654  358357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
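The /etc/hosts rewrite above is idempotent: it filters out any previous host.minikube.internal line, appends the current gateway mapping, and copies the result back over /etc/hosts. Restated with comments (the /tmp/hosts.$$ name here is illustrative, the log uses /tmp/h.$$):

    # 1) drop any stale host.minikube.internal entry
    # 2) append the current gateway IP, tab-separated as in the log above
    # 3) copy the merged file back over /etc/hosts
    {
      grep -v $'\thost.minikube.internal$' /etc/hosts
      printf '192.168.61.1\thost.minikube.internal\n'
    } > "/tmp/hosts.$$"
    sudo cp "/tmp/hosts.$$" /etc/hosts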
	I1205 21:41:47.164839  358357 kubeadm.go:883] updating cluster {Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:41:47.165019  358357 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 21:41:47.165090  358357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:47.213546  358357 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 21:41:47.213640  358357 ssh_runner.go:195] Run: which lz4
	I1205 21:41:47.219695  358357 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:41:47.224752  358357 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:41:47.224801  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1205 21:41:48.787144  358357 crio.go:462] duration metric: took 1.567500675s to copy over tarball
	I1205 21:41:48.787253  358357 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 21:41:47.514192  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.514819  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.514860  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.514767  359359 retry.go:31] will retry after 467.274164ms: waiting for machine to come up
	I1205 21:41:47.983367  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.983985  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.984015  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.983919  359359 retry.go:31] will retry after 577.298405ms: waiting for machine to come up
	I1205 21:41:48.562668  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:48.563230  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:48.563278  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:48.563175  359359 retry.go:31] will retry after 707.838313ms: waiting for machine to come up
	I1205 21:41:49.273409  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:49.273943  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:49.273977  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:49.273863  359359 retry.go:31] will retry after 908.711328ms: waiting for machine to come up
	I1205 21:41:50.183875  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:50.184278  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:50.184310  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:50.184225  359359 retry.go:31] will retry after 941.803441ms: waiting for machine to come up
	I1205 21:41:51.127915  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:51.128486  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:51.128549  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:51.128467  359359 retry.go:31] will retry after 1.289932898s: waiting for machine to come up
	I1205 21:41:51.407970  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:51.408037  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:52.046717  357912 pod_ready.go:103] pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"False"
	I1205 21:41:54.367409  357912 pod_ready.go:93] pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.367441  357912 pod_ready.go:82] duration metric: took 6.32924141s for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.367457  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.373495  357912 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.373546  357912 pod_ready.go:82] duration metric: took 6.066723ms for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.373565  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.380982  357912 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.381010  357912 pod_ready.go:82] duration metric: took 7.434049ms for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.381024  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.387297  357912 pod_ready.go:93] pod "kube-proxy-b4ws4" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.387321  357912 pod_ready.go:82] duration metric: took 6.290388ms for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.387331  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.392902  357912 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.392931  357912 pod_ready.go:82] duration metric: took 5.593155ms for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.392942  357912 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:51.832182  358357 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.044870872s)
	I1205 21:41:51.832229  358357 crio.go:469] duration metric: took 3.045045829s to extract the tarball
	I1205 21:41:51.832241  358357 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 21:41:51.876863  358357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:51.916280  358357 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 21:41:51.916312  358357 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 21:41:51.916448  358357 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:51.916490  358357 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1205 21:41:51.916520  358357 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:51.916416  358357 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:51.916539  358357 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1205 21:41:51.916422  358357 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:51.916534  358357 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:51.916415  358357 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:51.918641  358357 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:51.918657  358357 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:51.918673  358357 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:51.918675  358357 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:51.918648  358357 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:51.918699  358357 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 21:41:51.918648  358357 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1205 21:41:51.918649  358357 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.084598  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.085487  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.085575  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.089387  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.097316  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.097466  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.143119  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1205 21:41:52.188847  358357 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1205 21:41:52.188903  358357 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.188964  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.249950  358357 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1205 21:41:52.249988  358357 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1205 21:41:52.250006  358357 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.250026  358357 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.250065  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.250070  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.250110  358357 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1205 21:41:52.250142  358357 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.250181  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.264329  358357 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1205 21:41:52.264458  358357 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.264384  358357 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1205 21:41:52.264539  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.264575  358357 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.264634  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.276286  358357 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1205 21:41:52.276339  358357 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 21:41:52.276369  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.276378  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.276383  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.276499  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.276544  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.277043  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.277127  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.383827  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.385512  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.385513  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:41:52.404747  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.413164  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.413203  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.413257  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.502227  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.551456  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:41:52.551634  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.551659  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.596670  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.596746  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.596677  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.649281  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1205 21:41:52.726027  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:41:52.726093  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1205 21:41:52.726149  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1205 21:41:52.726173  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1205 21:41:52.726266  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1205 21:41:52.726300  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1205 21:41:52.759125  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 21:41:52.856925  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:53.004246  358357 cache_images.go:92] duration metric: took 1.087915709s to LoadCachedImages
	W1205 21:41:53.004349  358357 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1205 21:41:53.004364  358357 kubeadm.go:934] updating node { 192.168.61.123 8443 v1.20.0 crio true true} ...
	I1205 21:41:53.004516  358357 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-601806 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:41:53.004596  358357 ssh_runner.go:195] Run: crio config
	I1205 21:41:53.053135  358357 cni.go:84] Creating CNI manager for ""
	I1205 21:41:53.053159  358357 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:53.053174  358357 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:41:53.053208  358357 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-601806 NodeName:old-k8s-version-601806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 21:41:53.053385  358357 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-601806"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:41:53.053465  358357 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1205 21:41:53.064225  358357 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:41:53.064320  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:41:53.074565  358357 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 21:41:53.091812  358357 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:41:53.111455  358357 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1205 21:41:53.131057  358357 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I1205 21:41:53.135026  358357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:53.148476  358357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:53.289114  358357 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:53.309855  358357 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806 for IP: 192.168.61.123
	I1205 21:41:53.309886  358357 certs.go:194] generating shared ca certs ...
	I1205 21:41:53.309923  358357 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:53.310122  358357 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:41:53.310176  358357 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:41:53.310202  358357 certs.go:256] generating profile certs ...
	I1205 21:41:53.310390  358357 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/client.key
	I1205 21:41:53.310485  358357 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.key.a6e43dea
	I1205 21:41:53.310568  358357 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.key
	I1205 21:41:53.310814  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:41:53.310866  358357 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:41:53.310880  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:41:53.310912  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:41:53.310960  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:41:53.311000  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:41:53.311072  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:53.312161  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:41:53.353059  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:41:53.386512  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:41:53.423583  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:41:53.463250  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 21:41:53.494884  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 21:41:53.529876  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:41:53.579695  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 21:41:53.606144  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:41:53.631256  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:41:53.656184  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:41:53.680842  358357 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:41:53.700705  358357 ssh_runner.go:195] Run: openssl version
	I1205 21:41:53.707800  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:41:53.719776  358357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:41:53.724558  358357 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:41:53.724630  358357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:41:53.731088  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:41:53.742620  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:41:53.754961  358357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:53.759594  358357 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:53.759669  358357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:53.765536  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:41:53.776756  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:41:53.789117  358357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:41:53.793629  358357 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:41:53.793707  358357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:41:53.799394  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
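The cert installation steps above follow OpenSSL's lookup convention: CA certificates are found through symlinks in /etc/ssl/certs named after the certificate's subject hash, which is exactly what the openssl x509 -hash calls compute (for example, b5213941 for minikubeCA.pem earlier in the log). A small sketch of that pattern; the .0 suffix assumes no hash collision:

    # compute the subject hash OpenSSL uses for CA lookup, then link the cert under that name
    cert=/usr/share/ca-certificates/minikubeCA.pem
    h=$(openssl x509 -hash -noout -in "$cert")
    sudo ln -fs "$cert" "/etc/ssl/certs/${h}.0"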
	I1205 21:41:53.810660  358357 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:41:53.815344  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:41:53.821418  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:41:53.827800  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:41:53.834376  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:41:53.840645  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:41:53.847470  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
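The openssl x509 -checkend 86400 runs above verify that each control-plane certificate remains valid for at least the next 86400 seconds (24 hours); the command exits non-zero if the certificate will have expired by then, which is what tells minikube whether certs need regeneration. For example:

    # exits 0 if the certificate is still valid 24h from now, non-zero otherwise
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 &&
      echo "still valid for at least 24h" ||
      echo "expires (or already expired) within 24h"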
	I1205 21:41:53.854401  358357 kubeadm.go:392] StartCluster: {Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:41:53.854504  358357 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:41:53.854569  358357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:53.893993  358357 cri.go:89] found id: ""
	I1205 21:41:53.894081  358357 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:41:53.904808  358357 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:41:53.904829  358357 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:41:53.904876  358357 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:41:53.915573  358357 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:41:53.916624  358357 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-601806" does not appear in /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:41:53.917310  358357 kubeconfig.go:62] /home/jenkins/minikube-integration/20053-293485/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-601806" cluster setting kubeconfig missing "old-k8s-version-601806" context setting]
	I1205 21:41:53.918211  358357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:53.978448  358357 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:41:53.989629  358357 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.123
	I1205 21:41:53.989674  358357 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:41:53.989707  358357 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:41:53.989791  358357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:54.027722  358357 cri.go:89] found id: ""
	I1205 21:41:54.027816  358357 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:41:54.045095  358357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:41:54.058119  358357 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:41:54.058145  358357 kubeadm.go:157] found existing configuration files:
	
	I1205 21:41:54.058211  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:41:54.070466  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:41:54.070563  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:41:54.081555  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:41:54.093332  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:41:54.093415  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:41:54.103877  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:41:54.114047  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:41:54.114117  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:41:54.126566  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:41:54.138673  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:41:54.138767  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:41:54.149449  358357 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:41:54.162818  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:54.294483  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:54.983905  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:55.218496  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:55.340478  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:55.440382  358357 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:41:55.440495  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:52.419705  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:52.420193  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:52.420230  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:52.420115  359359 retry.go:31] will retry after 1.684643705s: waiting for machine to come up
	I1205 21:41:54.106187  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:54.106714  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:54.106754  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:54.106660  359359 retry.go:31] will retry after 1.531754159s: waiting for machine to come up
	I1205 21:41:55.639991  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:55.640467  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:55.640503  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:55.640401  359359 retry.go:31] will retry after 2.722460669s: waiting for machine to come up
	I1205 21:41:56.409347  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:56.409397  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:56.399969  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:41:58.900439  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:41:55.941513  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:56.440634  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:56.941451  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:57.440602  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:57.940778  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:58.441396  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:58.941148  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:59.441320  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:59.941573  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:00.441005  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:58.366356  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:58.366849  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:58.366874  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:58.366805  359359 retry.go:31] will retry after 2.312099452s: waiting for machine to come up
	I1205 21:42:00.680417  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:00.680953  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:42:00.680977  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:42:00.680904  359359 retry.go:31] will retry after 3.145457312s: waiting for machine to come up
	I1205 21:42:01.410313  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:42:01.410382  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.204308  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:03.204353  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:03.204374  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.246513  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:03.246569  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:03.406787  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.411529  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:03.411571  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:03.907108  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.911621  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:03.911669  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:04.407111  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:04.416185  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:04.416225  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:04.906151  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:04.913432  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 200:
	ok
	I1205 21:42:04.923422  357831 api_server.go:141] control plane version: v1.31.2
	I1205 21:42:04.923466  357831 api_server.go:131] duration metric: took 40.017479306s to wait for apiserver health ...
	I1205 21:42:04.923479  357831 cni.go:84] Creating CNI manager for ""
	I1205 21:42:04.923488  357831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:42:04.925861  357831 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:42:01.399834  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:03.399888  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:00.941505  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:01.441014  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:01.940938  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:02.440702  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:02.940749  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:03.441519  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:03.941098  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:04.440754  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:04.941260  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:05.441179  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:03.830452  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.830997  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has current primary IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.831031  357296 main.go:141] libmachine: (embed-certs-425614) Found IP for machine: 192.168.72.8
	I1205 21:42:03.831046  357296 main.go:141] libmachine: (embed-certs-425614) Reserving static IP address...
	I1205 21:42:03.831505  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "embed-certs-425614", mac: "52:54:00:d8:bb:db", ip: "192.168.72.8"} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.831534  357296 main.go:141] libmachine: (embed-certs-425614) Reserved static IP address: 192.168.72.8
	I1205 21:42:03.831552  357296 main.go:141] libmachine: (embed-certs-425614) DBG | skip adding static IP to network mk-embed-certs-425614 - found existing host DHCP lease matching {name: "embed-certs-425614", mac: "52:54:00:d8:bb:db", ip: "192.168.72.8"}
	I1205 21:42:03.831566  357296 main.go:141] libmachine: (embed-certs-425614) Waiting for SSH to be available...
	I1205 21:42:03.831574  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Getting to WaitForSSH function...
	I1205 21:42:03.833969  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.834352  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.834388  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.834532  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Using SSH client type: external
	I1205 21:42:03.834550  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa (-rw-------)
	I1205 21:42:03.834569  357296 main.go:141] libmachine: (embed-certs-425614) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.8 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:42:03.834587  357296 main.go:141] libmachine: (embed-certs-425614) DBG | About to run SSH command:
	I1205 21:42:03.834598  357296 main.go:141] libmachine: (embed-certs-425614) DBG | exit 0
	I1205 21:42:03.962943  357296 main.go:141] libmachine: (embed-certs-425614) DBG | SSH cmd err, output: <nil>: 
	I1205 21:42:03.963457  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetConfigRaw
	I1205 21:42:03.964327  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:03.967583  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.968035  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.968069  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.968471  357296 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/config.json ...
	I1205 21:42:03.968788  357296 machine.go:93] provisionDockerMachine start ...
	I1205 21:42:03.968820  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:03.969139  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:03.972165  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.972515  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.972545  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.972636  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:03.972845  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:03.973079  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:03.973321  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:03.973541  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:03.973743  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:03.973756  357296 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:42:04.086658  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:42:04.086701  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:42:04.087004  357296 buildroot.go:166] provisioning hostname "embed-certs-425614"
	I1205 21:42:04.087040  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:42:04.087297  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.090622  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.091119  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.091157  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.091374  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.091647  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.091854  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.092065  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.092302  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:04.092559  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:04.092590  357296 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-425614 && echo "embed-certs-425614" | sudo tee /etc/hostname
	I1205 21:42:04.222630  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-425614
	
	I1205 21:42:04.222668  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.225969  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.226469  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.226507  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.226742  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.226966  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.227230  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.227436  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.227672  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:04.227862  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:04.227878  357296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-425614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-425614/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-425614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:42:04.351706  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:42:04.351775  357296 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:42:04.351853  357296 buildroot.go:174] setting up certificates
	I1205 21:42:04.351869  357296 provision.go:84] configureAuth start
	I1205 21:42:04.351894  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:42:04.352249  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:04.355753  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.356188  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.356232  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.356460  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.359365  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.359864  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.359911  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.360105  357296 provision.go:143] copyHostCerts
	I1205 21:42:04.360181  357296 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:42:04.360209  357296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:42:04.360287  357296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:42:04.360424  357296 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:42:04.360437  357296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:42:04.360470  357296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:42:04.360554  357296 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:42:04.360564  357296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:42:04.360592  357296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:42:04.360668  357296 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.embed-certs-425614 san=[127.0.0.1 192.168.72.8 embed-certs-425614 localhost minikube]
	I1205 21:42:04.632816  357296 provision.go:177] copyRemoteCerts
	I1205 21:42:04.632901  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:42:04.632942  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.636150  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.636618  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.636654  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.636828  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.637044  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.637271  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.637464  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:04.724883  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:42:04.754994  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 21:42:04.783996  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 21:42:04.810963  357296 provision.go:87] duration metric: took 459.073427ms to configureAuth
	I1205 21:42:04.811003  357296 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:42:04.811279  357296 config.go:182] Loaded profile config "embed-certs-425614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:42:04.811384  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.814420  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.814863  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.814996  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.815102  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.815346  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.815586  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.815767  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.815972  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:04.816238  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:04.816287  357296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:42:05.064456  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:42:05.064490  357296 machine.go:96] duration metric: took 1.095680989s to provisionDockerMachine
	I1205 21:42:05.064509  357296 start.go:293] postStartSetup for "embed-certs-425614" (driver="kvm2")
	I1205 21:42:05.064521  357296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:42:05.064560  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.064956  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:42:05.064997  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.068175  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.068618  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.068657  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.068994  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.069241  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.069449  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.069602  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:05.157732  357296 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:42:05.162706  357296 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:42:05.162752  357296 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:42:05.162845  357296 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:42:05.162920  357296 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:42:05.163016  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:42:05.179784  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:42:05.207166  357296 start.go:296] duration metric: took 142.636794ms for postStartSetup
	I1205 21:42:05.207223  357296 fix.go:56] duration metric: took 19.632279138s for fixHost
	I1205 21:42:05.207253  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.210923  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.211426  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.211463  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.211657  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.211896  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.212114  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.212282  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.212467  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:05.212723  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:05.212739  357296 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:42:05.327710  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434925.280377877
	
	I1205 21:42:05.327737  357296 fix.go:216] guest clock: 1733434925.280377877
	I1205 21:42:05.327749  357296 fix.go:229] Guest: 2024-12-05 21:42:05.280377877 +0000 UTC Remote: 2024-12-05 21:42:05.207229035 +0000 UTC m=+357.921750384 (delta=73.148842ms)
	I1205 21:42:05.327795  357296 fix.go:200] guest clock delta is within tolerance: 73.148842ms
	I1205 21:42:05.327803  357296 start.go:83] releasing machines lock for "embed-certs-425614", held for 19.752893913s
	I1205 21:42:05.327826  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.328184  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:05.331359  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.331686  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.331722  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.331953  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.332650  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.332870  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.332999  357296 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:42:05.333104  357296 ssh_runner.go:195] Run: cat /version.json
	I1205 21:42:05.333112  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.333137  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.336283  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.336576  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.336749  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.336784  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.336987  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.337074  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.337123  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.337206  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.337228  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.337457  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.337475  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.337669  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.337668  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:05.337806  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:05.443865  357296 ssh_runner.go:195] Run: systemctl --version
	I1205 21:42:05.450866  357296 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:42:05.596799  357296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:42:05.603700  357296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:42:05.603781  357296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:42:05.619488  357296 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:42:05.619521  357296 start.go:495] detecting cgroup driver to use...
	I1205 21:42:05.619622  357296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:42:05.639018  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:42:05.655878  357296 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:42:05.655942  357296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:42:05.671883  357296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:42:05.691645  357296 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:42:05.804200  357296 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:42:05.997573  357296 docker.go:233] disabling docker service ...
	I1205 21:42:05.997702  357296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:42:06.014153  357296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:42:06.031828  357296 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:42:06.179266  357296 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:42:06.318806  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:42:06.332681  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:42:06.353528  357296 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:42:06.353615  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.365381  357296 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:42:06.365472  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.377020  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.389325  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.402399  357296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:42:06.414106  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.425792  357296 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.445787  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.457203  357296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:42:06.467275  357296 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:42:06.467356  357296 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:42:06.481056  357296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:42:06.492188  357296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:42:06.634433  357296 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:42:06.727916  357296 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:42:06.728007  357296 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:42:06.732581  357296 start.go:563] Will wait 60s for crictl version
	I1205 21:42:06.732645  357296 ssh_runner.go:195] Run: which crictl
	I1205 21:42:06.736545  357296 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:42:06.775945  357296 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:42:06.776069  357296 ssh_runner.go:195] Run: crio --version
	I1205 21:42:06.808556  357296 ssh_runner.go:195] Run: crio --version
	I1205 21:42:06.844968  357296 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 21:42:06.846380  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:06.849873  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:06.850366  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:06.850410  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:06.850664  357296 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 21:42:06.855593  357296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:42:06.869323  357296 kubeadm.go:883] updating cluster {Name:embed-certs-425614 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-425614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:42:06.869513  357296 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:42:06.869598  357296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:42:06.906593  357296 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 21:42:06.906667  357296 ssh_runner.go:195] Run: which lz4
	I1205 21:42:06.910838  357296 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:42:06.915077  357296 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:42:06.915129  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
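At this point the 357296 run has established that the guest has no preloaded CRI-O images (the stat of /preloaded.tar.lz4 failed) and is copying the cached lz4 tarball over before unpacking it with the tar invocation that appears a little further down. A minimal local sketch of that check-then-extract flow is below; the hasImage/extractPreload helpers and local (non-SSH) execution are illustrative assumptions, not minikube's actual implementation.

package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
    "strings"
)

// hasImage reports whether crictl already lists the given image reference,
// by shelling out to `crictl images --output json` as in the log above.
func hasImage(ref string) (bool, error) {
    out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
    if err != nil {
        return false, err
    }
    var resp struct {
        Images []struct {
            RepoTags []string `json:"repoTags"`
        } `json:"images"`
    }
    if err := json.Unmarshal(out, &resp); err != nil {
        return false, err
    }
    for _, img := range resp.Images {
        for _, tag := range img.RepoTags {
            if strings.Contains(tag, ref) {
                return true, nil
            }
        }
    }
    return false, nil
}

// extractPreload unpacks an lz4-compressed preload tarball under /var,
// mirroring the tar command shown later in this log.
func extractPreload(tarball string) error {
    return exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
        "-I", "lz4", "-C", "/var", "-xf", tarball).Run()
}

func main() {
    ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.31.2")
    if err != nil {
        fmt.Println("crictl check failed:", err)
        return
    }
    if !ok {
        fmt.Println("images not preloaded, extracting tarball")
        if err := extractPreload("/preloaded.tar.lz4"); err != nil {
            fmt.Println("extract failed:", err)
        }
    }
}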
	I1205 21:42:04.927426  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:42:04.941208  357831 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 21:42:04.968170  357831 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:42:04.998847  357831 system_pods.go:59] 8 kube-system pods found
	I1205 21:42:04.998907  357831 system_pods.go:61] "coredns-7c65d6cfc9-k89d7" [8a72b3cc-863a-4a51-8592-f090d7de58cb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 21:42:04.998920  357831 system_pods.go:61] "etcd-no-preload-500648" [cafdfe7b-d749-4f0b-9ce1-4045e0dba5e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 21:42:04.998933  357831 system_pods.go:61] "kube-apiserver-no-preload-500648" [882b20c9-56f1-41e7-80a2-7781b05f021f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 21:42:04.998942  357831 system_pods.go:61] "kube-controller-manager-no-preload-500648" [d8746bd6-a884-4497-be4a-f88b4776cc19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 21:42:04.998952  357831 system_pods.go:61] "kube-proxy-tbcmd" [ef507fa3-fe13-47b2-909e-15a4d0544716] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 21:42:04.998958  357831 system_pods.go:61] "kube-scheduler-no-preload-500648" [6713250e-00ac-48db-ad2f-39b1867c00f3] Running
	I1205 21:42:04.998968  357831 system_pods.go:61] "metrics-server-6867b74b74-7xm6l" [0d8a7353-2449-4143-962e-fc837e598f56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:42:04.998979  357831 system_pods.go:61] "storage-provisioner" [a0d29dee-08f6-43f8-9d02-6bda96fe0c85] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 21:42:04.998988  357831 system_pods.go:74] duration metric: took 30.786075ms to wait for pod list to return data ...
	I1205 21:42:04.999002  357831 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:42:05.005560  357831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:42:05.005611  357831 node_conditions.go:123] node cpu capacity is 2
	I1205 21:42:05.005630  357831 node_conditions.go:105] duration metric: took 6.621222ms to run NodePressure ...
	I1205 21:42:05.005659  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:05.417060  357831 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 21:42:05.423873  357831 kubeadm.go:739] kubelet initialised
	I1205 21:42:05.423903  357831 kubeadm.go:740] duration metric: took 6.807257ms waiting for restarted kubelet to initialise ...
	I1205 21:42:05.423914  357831 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:42:05.429965  357831 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:07.440042  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:05.400253  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:07.401405  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:09.901336  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:05.941258  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:06.440780  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:06.940790  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:07.441097  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:07.941334  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:08.440670  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:08.941230  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:09.441317  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:09.941664  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:10.440620  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
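The interleaved 358357 run is simply polling, roughly every 500ms, for a kube-apiserver process with `sudo pgrep -xnf kube-apiserver.*minikube.*`. A small sketch of that kind of poll loop follows; the function name and the assumption that a non-zero pgrep exit just means "not yet" are illustrative.

package main

import (
    "fmt"
    "os/exec"
    "time"
)

// waitForAPIServerProcess polls pgrep until a kube-apiserver process matching
// the minikube pattern appears or the timeout expires.
func waitForAPIServerProcess(timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        // pgrep exits 0 when at least one process matches, non-zero otherwise.
        if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
            return nil
        }
        time.Sleep(500 * time.Millisecond)
    }
    return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
    if err := waitForAPIServerProcess(2 * time.Minute); err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println("kube-apiserver process is up")
}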
	I1205 21:42:08.325757  357296 crio.go:462] duration metric: took 1.41497545s to copy over tarball
	I1205 21:42:08.325937  357296 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 21:42:10.566636  357296 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.240649211s)
	I1205 21:42:10.566679  357296 crio.go:469] duration metric: took 2.240881092s to extract the tarball
	I1205 21:42:10.566690  357296 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 21:42:10.604048  357296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:42:10.648218  357296 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 21:42:10.648245  357296 cache_images.go:84] Images are preloaded, skipping loading
	I1205 21:42:10.648254  357296 kubeadm.go:934] updating node { 192.168.72.8 8443 v1.31.2 crio true true} ...
	I1205 21:42:10.648380  357296 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-425614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-425614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:42:10.648472  357296 ssh_runner.go:195] Run: crio config
	I1205 21:42:10.694426  357296 cni.go:84] Creating CNI manager for ""
	I1205 21:42:10.694457  357296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:42:10.694470  357296 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:42:10.694494  357296 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.8 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-425614 NodeName:embed-certs-425614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.8 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:42:10.694626  357296 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.8
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-425614"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.8"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.8"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:42:10.694700  357296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:42:10.707043  357296 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:42:10.707116  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:42:10.717088  357296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1205 21:42:10.735095  357296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:42:10.753994  357296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I1205 21:42:10.771832  357296 ssh_runner.go:195] Run: grep 192.168.72.8	control-plane.minikube.internal$ /etc/hosts
	I1205 21:42:10.776949  357296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.8	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
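Both host.minikube.internal (earlier) and control-plane.minikube.internal (here) are pinned the same way: grep /etc/hosts for the exact "ip<TAB>host" line and, on a miss, rewrite the file with any stale entry stripped and the fresh one appended. A sketch of that replace-or-append idiom, run through bash exactly as the logged one-liner does, is below; the ensureHostsEntry name and local (non-SSH) execution are assumptions.

package main

import (
    "fmt"
    "os/exec"
)

// ensureHostsEntry removes any stale line for host from /etc/hosts and appends
// "<ip>\t<host>", using the same bash replace-or-append one-liner the log shows.
func ensureHostsEntry(ip, host string) error {
    // If the exact "<ip>\t<host>" line is already present, leave the file alone.
    check := fmt.Sprintf("grep -q '%s\t%s$' /etc/hosts", ip, host)
    if err := exec.Command("/bin/bash", "-c", check).Run(); err == nil {
        return nil
    }
    script := fmt.Sprintf(
        "{ grep -v $'\\t%s$' /etc/hosts; echo \"%s\t%s\"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts",
        host, ip, host)
    return exec.Command("/bin/bash", "-c", script).Run()
}

func main() {
    if err := ensureHostsEntry("192.168.72.8", "control-plane.minikube.internal"); err != nil {
        fmt.Println("failed to pin hosts entry:", err)
    }
}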
	I1205 21:42:10.789761  357296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:42:10.937235  357296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:42:10.959030  357296 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614 for IP: 192.168.72.8
	I1205 21:42:10.959073  357296 certs.go:194] generating shared ca certs ...
	I1205 21:42:10.959107  357296 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:42:10.959307  357296 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:42:10.959366  357296 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:42:10.959378  357296 certs.go:256] generating profile certs ...
	I1205 21:42:10.959508  357296 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/client.key
	I1205 21:42:10.959581  357296 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/apiserver.key.a8dcad40
	I1205 21:42:10.959631  357296 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/proxy-client.key
	I1205 21:42:10.959747  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:42:10.959807  357296 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:42:10.959822  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:42:10.959855  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:42:10.959889  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:42:10.959924  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:42:10.959977  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:42:10.960886  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:42:10.999249  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:42:11.035379  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:42:11.069796  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:42:11.103144  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1205 21:42:11.144531  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 21:42:11.183637  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:42:11.208780  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 21:42:11.237378  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:42:11.262182  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:42:11.287003  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:42:11.311375  357296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:42:11.330529  357296 ssh_runner.go:195] Run: openssl version
	I1205 21:42:11.336346  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:42:11.347306  357296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:42:11.352107  357296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:42:11.352179  357296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:42:11.357939  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:42:11.369013  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:42:11.380244  357296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:42:11.384671  357296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:42:11.384747  357296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:42:11.390330  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:42:11.402029  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:42:11.413047  357296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:42:11.417617  357296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:42:11.417707  357296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:42:11.423562  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:42:11.434978  357296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:42:11.439887  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:42:11.446653  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:42:11.453390  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:42:11.460104  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:42:11.466281  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:42:11.472205  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
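The series of `openssl x509 ... -checkend 86400` calls above verifies that each control-plane certificate remains valid for at least another 24 hours before the restart proceeds. The same condition can be expressed directly with Go's crypto/x509; this is only an equivalent sketch, not the code the test harness runs.

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// i.e. the same condition `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
    data, err := os.ReadFile(path)
    if err != nil {
        return false, err
    }
    block, _ := pem.Decode(data)
    if block == nil {
        return false, fmt.Errorf("no PEM block in %s", path)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        return false, err
    }
    return cert.NotAfter.Before(time.Now().Add(d)), nil
}

func main() {
    paths := []string{
        "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
        "/var/lib/minikube/certs/etcd/server.crt",
        "/var/lib/minikube/certs/front-proxy-client.crt",
    }
    for _, p := range paths {
        soon, err := expiresWithin(p, 24*time.Hour)
        if err != nil {
            fmt.Println(p, "error:", err)
            continue
        }
        fmt.Printf("%s expires within 24h: %v\n", p, soon)
    }
}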
	I1205 21:42:11.478395  357296 kubeadm.go:392] StartCluster: {Name:embed-certs-425614 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-425614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:42:11.478534  357296 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:42:11.478604  357296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:42:11.519447  357296 cri.go:89] found id: ""
	I1205 21:42:11.519540  357296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:42:11.530882  357296 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:42:11.530915  357296 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:42:11.530967  357296 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:42:11.541349  357296 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:42:11.542457  357296 kubeconfig.go:125] found "embed-certs-425614" server: "https://192.168.72.8:8443"
	I1205 21:42:11.544588  357296 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:42:11.555107  357296 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.8
	I1205 21:42:11.555149  357296 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:42:11.555164  357296 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:42:11.555214  357296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:42:11.592787  357296 cri.go:89] found id: ""
	I1205 21:42:11.592880  357296 ssh_runner.go:195] Run: sudo systemctl stop kubelet
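The "stopping kube-system containers" step lists all CRI containers labelled with the kube-system namespace (none were found here) and would stop them before kubelet itself is stopped. A rough sketch of that list-then-stop step, with the stopKubeSystemContainers name and local execution as assumptions:

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// stopKubeSystemContainers lists all CRI containers labelled with the
// kube-system namespace and stops them via crictl, as in the log above.
func stopKubeSystemContainers() error {
    out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
        "--label", "io.kubernetes.pod.namespace=kube-system").Output()
    if err != nil {
        return err
    }
    ids := strings.Fields(string(out))
    if len(ids) == 0 {
        fmt.Println("no kube-system containers found")
        return nil
    }
    args := append([]string{"crictl", "stop"}, ids...)
    return exec.Command("sudo", args...).Run()
}

func main() {
    if err := stopKubeSystemContainers(); err != nil {
        fmt.Println("failed to stop kube-system containers:", err)
    }
}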
	I1205 21:42:11.609965  357296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:42:11.623705  357296 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:42:11.623730  357296 kubeadm.go:157] found existing configuration files:
	
	I1205 21:42:11.623784  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:42:11.634267  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:42:11.634344  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:42:11.645579  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:42:11.655845  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:42:11.655932  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:42:11.667367  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:42:11.677450  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:42:11.677541  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:42:11.688484  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:42:11.698581  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:42:11.698665  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:42:11.709332  357296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:42:11.724079  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:11.850526  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:09.436733  357831 pod_ready.go:93] pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:09.436771  357831 pod_ready.go:82] duration metric: took 4.006772842s for pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:09.436787  357831 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:09.442948  357831 pod_ready.go:93] pod "etcd-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:09.442975  357831 pod_ready.go:82] duration metric: took 6.180027ms for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:09.442985  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:11.454117  357831 pod_ready.go:103] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:12.400229  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:14.401251  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:10.940676  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:11.441446  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:11.941429  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:12.441431  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:12.940947  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:13.441378  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:13.940664  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.441436  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.941528  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:15.441617  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:12.676884  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:13.049350  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:13.104083  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
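With the stale kubeconfig files removed, the restart re-runs the individual `kubeadm init phase` steps (certs, kubeconfig, kubelet-start, control-plane, etcd) against the staged /var/tmp/minikube/kubeadm.yaml, each with the versioned binaries directory prepended to PATH. A condensed sketch of driving that phase sequence is below; the runKubeadmPhase helper and the fixed PATH value (standing in for the logged "$PATH" expansion) are assumptions.

package main

import (
    "fmt"
    "os/exec"
)

// runKubeadmPhase runs one `kubeadm init phase ...` against the staged config,
// with the versioned binaries directory put first on PATH, in the spirit of
// the commands shown in the log.
func runKubeadmPhase(phase ...string) error {
    args := append([]string{"init", "phase"}, phase...)
    args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
    full := append([]string{"env",
        "PATH=/var/lib/minikube/binaries/v1.31.2:/usr/bin:/bin", "kubeadm"}, args...)
    out, err := exec.Command("sudo", full...).CombinedOutput()
    if err != nil {
        return fmt.Errorf("kubeadm %v failed: %v\n%s", phase, err, out)
    }
    return nil
}

func main() {
    for _, phase := range [][]string{
        {"certs", "all"},
        {"kubeconfig", "all"},
        {"kubelet-start"},
        {"control-plane", "all"},
        {"etcd", "local"},
    } {
        if err := runKubeadmPhase(phase...); err != nil {
            fmt.Println(err)
            return
        }
    }
}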
	I1205 21:42:13.151758  357296 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:42:13.151871  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:13.653003  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.152424  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.241811  357296 api_server.go:72] duration metric: took 1.09005484s to wait for apiserver process to appear ...
	I1205 21:42:14.241841  357296 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:42:14.241865  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:14.242492  357296 api_server.go:269] stopped: https://192.168.72.8:8443/healthz: Get "https://192.168.72.8:8443/healthz": dial tcp 192.168.72.8:8443: connect: connection refused
	I1205 21:42:14.742031  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:16.675226  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:16.675262  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:16.675277  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:16.689093  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:16.689130  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:16.742350  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:16.780046  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:16.780094  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:17.242752  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:17.248221  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:17.248293  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:13.807623  357831 pod_ready.go:103] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:13.955657  357831 pod_ready.go:93] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:13.955696  357831 pod_ready.go:82] duration metric: took 4.512701293s for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:13.955710  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:15.964035  357831 pod_ready.go:103] pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:17.464364  357831 pod_ready.go:93] pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:17.464400  357831 pod_ready.go:82] duration metric: took 3.508681036s for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.464416  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tbcmd" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.471083  357831 pod_ready.go:93] pod "kube-proxy-tbcmd" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:17.471112  357831 pod_ready.go:82] duration metric: took 6.68764ms for pod "kube-proxy-tbcmd" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.471127  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.477759  357831 pod_ready.go:93] pod "kube-scheduler-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:17.477792  357831 pod_ready.go:82] duration metric: took 6.655537ms for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.477805  357831 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace to be "Ready" ...
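Meanwhile the 357831 run walks the system-critical pods (coredns, etcd, the apiserver, controller-manager, kube-proxy, scheduler, metrics-server) and waits up to 4m0s for each to report the Ready condition. A condensed client-go sketch of that wait follows; the kubeconfig path, the podReady helper, and the single hard-coded pod name are assumptions for illustration only.

package main

import (
    "context"
    "fmt"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
    for _, cond := range pod.Status.Conditions {
        if cond.Type == corev1.PodReady {
            return cond.Status == corev1.ConditionTrue
        }
    }
    return false
}

func main() {
    // Kubeconfig path is an assumption for the sketch.
    cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    if err != nil {
        panic(err)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    name, ns := "kube-proxy-tbcmd", "kube-system"
    deadline := time.Now().Add(4 * time.Minute)
    for time.Now().Before(deadline) {
        pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err == nil && podReady(pod) {
            fmt.Printf("pod %q is Ready\n", name)
            return
        }
        time.Sleep(2 * time.Second)
    }
    fmt.Printf("pod %q never became Ready\n", name)
}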
	I1205 21:42:17.742750  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:17.750907  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:17.750945  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:18.242675  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:18.247883  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:18.247913  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:18.742494  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:18.748060  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:18.748095  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:19.242753  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:19.247456  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:19.247493  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:19.742029  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:19.747799  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:19.747830  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:20.242351  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:20.248627  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 200:
	ok
	I1205 21:42:20.257222  357296 api_server.go:141] control plane version: v1.31.2
	I1205 21:42:20.257260  357296 api_server.go:131] duration metric: took 6.015411765s to wait for apiserver health ...
	I1205 21:42:20.257273  357296 cni.go:84] Creating CNI manager for ""
	I1205 21:42:20.257281  357296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:42:20.259099  357296 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
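The block above shows the apiserver readiness probe: minikube polls https://192.168.72.8:8443/healthz roughly every half second, logs the 500 responses (with the per-hook breakdown), and moves on once a 200 comes back. The following is a minimal, self-contained sketch of that polling pattern, not minikube's actual code; the URL, interval, and timeout are taken from or assumed from the log for illustration only.

// sketch: poll /healthz until it returns 200 or the deadline expires
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		// the test cluster serves a self-signed cert, so skip verification here
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   2 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz returned 200: apiserver is ready
			}
		}
		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between attempts
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.72.8:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}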
	I1205 21:42:16.899464  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:19.400536  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:15.940894  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:16.441373  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:16.940607  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:17.441640  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:17.941424  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:18.441485  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:18.941548  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:19.441297  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:19.940718  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:20.441175  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:20.260397  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:42:20.271889  357296 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 21:42:20.291125  357296 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:42:20.300276  357296 system_pods.go:59] 8 kube-system pods found
	I1205 21:42:20.300328  357296 system_pods.go:61] "coredns-7c65d6cfc9-kjcf8" [7a73d409-50b8-4e9c-a84d-bb497c6f068c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 21:42:20.300337  357296 system_pods.go:61] "etcd-embed-certs-425614" [39067a54-9f4e-4ce5-b48f-0d442a332902] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 21:42:20.300346  357296 system_pods.go:61] "kube-apiserver-embed-certs-425614" [cc3f918c-a257-4135-a5dd-af78e60bbf90] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 21:42:20.300352  357296 system_pods.go:61] "kube-controller-manager-embed-certs-425614" [bbcf99e6-54f9-44f5-a484-26997a4e5941] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 21:42:20.300359  357296 system_pods.go:61] "kube-proxy-jflgx" [77b6325b-0db8-41de-8c7e-6111d155704d] Running
	I1205 21:42:20.300366  357296 system_pods.go:61] "kube-scheduler-embed-certs-425614" [0615aea3-8e2c-4329-b89f-02c7fe9f6f7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 21:42:20.300377  357296 system_pods.go:61] "metrics-server-6867b74b74-dggmv" [c53aecb9-98a5-481a-84f3-96fd18815e14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:42:20.300380  357296 system_pods.go:61] "storage-provisioner" [d43b05e9-7ab8-4326-93b4-177aeb5ba02e] Running
	I1205 21:42:20.300388  357296 system_pods.go:74] duration metric: took 9.233104ms to wait for pod list to return data ...
	I1205 21:42:20.300396  357296 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:42:20.304455  357296 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:42:20.304484  357296 node_conditions.go:123] node cpu capacity is 2
	I1205 21:42:20.304498  357296 node_conditions.go:105] duration metric: took 4.096074ms to run NodePressure ...
	I1205 21:42:20.304519  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:20.571968  357296 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 21:42:20.577704  357296 kubeadm.go:739] kubelet initialised
	I1205 21:42:20.577730  357296 kubeadm.go:740] duration metric: took 5.727858ms waiting for restarted kubelet to initialise ...
	I1205 21:42:20.577741  357296 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:42:20.583872  357296 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.589835  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.589866  357296 pod_ready.go:82] duration metric: took 5.957984ms for pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.589878  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.589886  357296 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.596004  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "etcd-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.596038  357296 pod_ready.go:82] duration metric: took 6.144722ms for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.596049  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "etcd-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.596056  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.601686  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.601720  357296 pod_ready.go:82] duration metric: took 5.653369ms for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.601734  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.601742  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.694482  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.694515  357296 pod_ready.go:82] duration metric: took 92.763219ms for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.694524  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.694531  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jflgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:21.094672  357296 pod_ready.go:93] pod "kube-proxy-jflgx" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:21.094703  357296 pod_ready.go:82] duration metric: took 400.158324ms for pod "kube-proxy-jflgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:21.094714  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
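The pod_ready lines above reflect the per-pod wait that follows the kubeadm addon phase: each system-critical pod is polled until its Ready condition is True, and pods whose node still reports Ready=False are skipped with the "(skipping!)" message. A rough client-go sketch of that wait loop is below; it assumes the kubeconfig path and the 2-second poll interval, and it is illustrative rather than the tooling's real implementation.

// sketch: wait for a single pod's PodReady condition to become True
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second) // assumed poll interval
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	fmt.Println(waitPodReady(cs, "kube-system", "kube-proxy-jflgx", 4*time.Minute))
}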
	I1205 21:42:19.485441  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:21.984845  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:21.900464  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:24.399362  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:20.941042  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:21.440840  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:21.941291  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:22.441298  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:22.941140  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:23.441157  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:23.940711  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:24.441126  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:24.941194  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:25.441239  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:23.101967  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:25.103066  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:27.103106  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:23.985150  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:25.985406  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:26.399494  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:28.399742  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:25.940650  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:26.440892  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:26.940734  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:27.441439  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:27.941025  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:28.441662  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:28.941200  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:29.440850  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:29.941090  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:30.441496  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:29.106277  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.101137  357296 pod_ready.go:93] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:30.101170  357296 pod_ready.go:82] duration metric: took 9.00644797s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:30.101199  357296 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:32.107886  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:27.985689  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.484153  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:32.484800  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.399854  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:32.400508  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:34.901319  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.941631  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:31.441522  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:31.940961  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:32.441547  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:32.940644  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:33.440711  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:33.941591  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:34.441457  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:34.941255  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:35.441478  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:34.108645  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:36.608124  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:34.984686  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:36.984823  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:37.400319  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:39.900110  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:35.941404  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:36.441453  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:36.941276  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:37.440624  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:37.941248  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:38.440773  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:38.940852  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:39.440975  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:39.940613  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:40.441409  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:38.608300  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:40.608878  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:39.483667  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:41.483884  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:41.900531  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:43.900867  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:40.941065  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:41.440940  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:41.941340  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:42.441333  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:42.941444  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:43.440657  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:43.941351  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:44.441039  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:44.941628  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:45.440942  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:43.107571  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:45.107803  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:47.108118  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:43.484581  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:45.485934  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:46.400053  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:48.902975  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:45.941474  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:46.441502  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:46.941071  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:47.441501  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:47.941353  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:48.441574  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:48.940650  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:49.441259  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:49.941249  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:50.441304  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:49.608563  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:52.108228  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:47.992612  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:50.484515  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:52.484930  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:51.399905  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:53.400794  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:50.941158  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:51.440651  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:51.941062  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:52.441434  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:52.940665  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:53.441387  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:53.940784  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:54.441549  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:54.941564  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:55.441202  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:42:55.441294  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:42:55.475973  358357 cri.go:89] found id: ""
	I1205 21:42:55.476011  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.476023  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:42:55.476032  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:42:55.476106  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:42:55.511119  358357 cri.go:89] found id: ""
	I1205 21:42:55.511149  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.511158  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:42:55.511164  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:42:55.511238  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:42:55.544659  358357 cri.go:89] found id: ""
	I1205 21:42:55.544700  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.544716  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:42:55.544726  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:42:55.544803  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:42:54.608219  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:57.107753  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:54.986439  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:57.484521  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:55.900101  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:58.399595  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:55.579789  358357 cri.go:89] found id: ""
	I1205 21:42:55.579826  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.579836  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:42:55.579843  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:42:55.579912  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:42:55.615309  358357 cri.go:89] found id: ""
	I1205 21:42:55.615348  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.615363  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:42:55.615371  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:42:55.615444  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:42:55.649520  358357 cri.go:89] found id: ""
	I1205 21:42:55.649551  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.649562  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:42:55.649569  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:42:55.649647  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:42:55.688086  358357 cri.go:89] found id: ""
	I1205 21:42:55.688120  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.688132  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:42:55.688139  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:42:55.688207  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:42:55.722901  358357 cri.go:89] found id: ""
	I1205 21:42:55.722932  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.722943  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:42:55.722955  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:42:55.722968  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:42:55.775746  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:42:55.775792  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:42:55.790317  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:42:55.790370  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:42:55.916541  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:42:55.916593  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:42:55.916608  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:42:55.991284  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:42:55.991350  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
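When pgrep finds no running kube-apiserver (the repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" lines), the tooling falls back to the diagnostics cycle shown above: list CRI containers by name with crictl, then gather kubelet, dmesg, describe-nodes, CRI-O, and container-status logs. The sketch below reproduces that cycle locally using the exact commands from the log; the SSH plumbing is omitted, and the structure is an assumption for illustration, not the real implementation.

// sketch: list containers by name, then gather the same diagnostic logs
package main

import (
	"fmt"
	"os/exec"
)

func listContainers(name string) ([]byte, error) {
	// matches: sudo crictl ps -a --quiet --name=<name>
	return exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
}

func gatherLogs() {
	cmds := map[string]string{
		"kubelet":          `sudo journalctl -u kubelet -n 400`,
		"dmesg":            `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`,
		"CRI-O":            `sudo journalctl -u crio -n 400`,
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range cmds {
		out, _ := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		fmt.Printf("=== %s ===\n%s\n", name, out)
	}
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
		ids, err := listContainers(c)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", c)
		}
	}
	gatherLogs()
}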
	I1205 21:42:58.534040  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:58.551747  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:42:58.551856  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:42:58.602423  358357 cri.go:89] found id: ""
	I1205 21:42:58.602465  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.602478  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:42:58.602493  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:42:58.602570  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:42:58.658410  358357 cri.go:89] found id: ""
	I1205 21:42:58.658442  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.658454  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:42:58.658462  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:42:58.658544  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:42:58.696967  358357 cri.go:89] found id: ""
	I1205 21:42:58.697005  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.697024  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:42:58.697032  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:42:58.697092  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:42:58.740924  358357 cri.go:89] found id: ""
	I1205 21:42:58.740958  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.740969  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:42:58.740977  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:42:58.741049  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:42:58.775613  358357 cri.go:89] found id: ""
	I1205 21:42:58.775656  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.775669  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:42:58.775677  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:42:58.775753  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:42:58.810565  358357 cri.go:89] found id: ""
	I1205 21:42:58.810606  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.810621  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:42:58.810630  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:42:58.810704  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:42:58.844616  358357 cri.go:89] found id: ""
	I1205 21:42:58.844649  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.844658  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:42:58.844664  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:42:58.844720  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:42:58.889234  358357 cri.go:89] found id: ""
	I1205 21:42:58.889270  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.889282  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:42:58.889297  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:42:58.889313  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:42:58.964712  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:42:58.964756  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:42:59.005004  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:42:59.005036  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:42:59.057585  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:42:59.057635  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:42:59.072115  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:42:59.072151  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:42:59.145425  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:42:59.108534  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:01.607610  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:59.485366  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:01.986049  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:00.400127  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:02.400257  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:04.899587  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:01.646046  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:01.659425  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:01.659517  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:01.695527  358357 cri.go:89] found id: ""
	I1205 21:43:01.695559  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.695568  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:01.695574  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:01.695636  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:01.731808  358357 cri.go:89] found id: ""
	I1205 21:43:01.731842  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.731854  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:01.731861  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:01.731937  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:01.765738  358357 cri.go:89] found id: ""
	I1205 21:43:01.765771  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.765789  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:01.765796  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:01.765859  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:01.801611  358357 cri.go:89] found id: ""
	I1205 21:43:01.801647  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.801657  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:01.801665  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:01.801732  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:01.839276  358357 cri.go:89] found id: ""
	I1205 21:43:01.839308  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.839317  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:01.839323  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:01.839385  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:01.875227  358357 cri.go:89] found id: ""
	I1205 21:43:01.875266  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.875279  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:01.875288  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:01.875350  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:01.913182  358357 cri.go:89] found id: ""
	I1205 21:43:01.913225  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.913238  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:01.913247  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:01.913312  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:01.952638  358357 cri.go:89] found id: ""
	I1205 21:43:01.952677  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.952701  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:01.952716  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:01.952734  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:01.998360  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:01.998401  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:02.049534  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:02.049588  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:02.064358  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:02.064389  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:02.136029  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:02.136060  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:02.136077  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:04.719271  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:04.735387  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:04.735490  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:04.769540  358357 cri.go:89] found id: ""
	I1205 21:43:04.769578  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.769590  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:04.769598  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:04.769679  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:04.803402  358357 cri.go:89] found id: ""
	I1205 21:43:04.803444  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.803460  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:04.803470  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:04.803538  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:04.839694  358357 cri.go:89] found id: ""
	I1205 21:43:04.839725  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.839739  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:04.839748  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:04.839820  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:04.874952  358357 cri.go:89] found id: ""
	I1205 21:43:04.874982  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.875001  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:04.875022  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:04.875086  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:04.910338  358357 cri.go:89] found id: ""
	I1205 21:43:04.910378  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.910390  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:04.910399  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:04.910464  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:04.946196  358357 cri.go:89] found id: ""
	I1205 21:43:04.946233  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.946245  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:04.946252  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:04.946319  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:04.982119  358357 cri.go:89] found id: ""
	I1205 21:43:04.982150  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.982164  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:04.982173  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:04.982245  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:05.018296  358357 cri.go:89] found id: ""
	I1205 21:43:05.018334  358357 logs.go:282] 0 containers: []
	W1205 21:43:05.018346  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:05.018359  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:05.018376  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:05.070674  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:05.070729  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:05.085822  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:05.085858  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:05.163359  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:05.163385  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:05.163400  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:05.243524  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:05.243581  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:03.608201  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:06.108243  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:03.992084  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:06.487041  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:06.900400  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:09.400212  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:07.785152  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:07.799248  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:07.799327  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:07.836150  358357 cri.go:89] found id: ""
	I1205 21:43:07.836204  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.836215  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:07.836222  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:07.836287  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:07.873025  358357 cri.go:89] found id: ""
	I1205 21:43:07.873059  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.873068  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:07.873074  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:07.873133  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:07.913228  358357 cri.go:89] found id: ""
	I1205 21:43:07.913257  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.913266  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:07.913272  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:07.913332  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:07.953284  358357 cri.go:89] found id: ""
	I1205 21:43:07.953316  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.953327  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:07.953337  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:07.953405  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:07.990261  358357 cri.go:89] found id: ""
	I1205 21:43:07.990295  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.990308  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:07.990317  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:07.990414  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:08.032002  358357 cri.go:89] found id: ""
	I1205 21:43:08.032029  358357 logs.go:282] 0 containers: []
	W1205 21:43:08.032037  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:08.032043  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:08.032095  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:08.066422  358357 cri.go:89] found id: ""
	I1205 21:43:08.066456  358357 logs.go:282] 0 containers: []
	W1205 21:43:08.066464  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:08.066471  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:08.066526  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:08.103696  358357 cri.go:89] found id: ""
	I1205 21:43:08.103732  358357 logs.go:282] 0 containers: []
	W1205 21:43:08.103745  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:08.103757  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:08.103793  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:08.157218  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:08.157264  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:08.172145  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:08.172191  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:08.247452  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:08.247479  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:08.247493  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:08.326928  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:08.326972  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:08.111002  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:10.608479  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:08.985124  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:10.985701  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:11.400591  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:13.898978  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:10.866350  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:10.880013  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:10.880084  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:10.914657  358357 cri.go:89] found id: ""
	I1205 21:43:10.914698  358357 logs.go:282] 0 containers: []
	W1205 21:43:10.914712  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:10.914721  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:10.914780  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:10.950154  358357 cri.go:89] found id: ""
	I1205 21:43:10.950187  358357 logs.go:282] 0 containers: []
	W1205 21:43:10.950196  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:10.950203  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:10.950267  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:10.985474  358357 cri.go:89] found id: ""
	I1205 21:43:10.985508  358357 logs.go:282] 0 containers: []
	W1205 21:43:10.985520  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:10.985528  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:10.985602  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:11.021324  358357 cri.go:89] found id: ""
	I1205 21:43:11.021352  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.021361  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:11.021367  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:11.021429  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:11.056112  358357 cri.go:89] found id: ""
	I1205 21:43:11.056140  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.056149  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:11.056155  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:11.056210  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:11.090696  358357 cri.go:89] found id: ""
	I1205 21:43:11.090729  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.090739  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:11.090746  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:11.090809  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:11.126706  358357 cri.go:89] found id: ""
	I1205 21:43:11.126741  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.126754  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:11.126762  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:11.126832  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:11.162759  358357 cri.go:89] found id: ""
	I1205 21:43:11.162790  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.162800  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:11.162812  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:11.162827  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:11.215941  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:11.215995  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:11.229338  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:11.229378  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:11.300339  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:11.300373  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:11.300389  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:11.378797  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:11.378852  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:13.919092  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:13.935332  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:13.935418  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:13.970759  358357 cri.go:89] found id: ""
	I1205 21:43:13.970790  358357 logs.go:282] 0 containers: []
	W1205 21:43:13.970802  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:13.970810  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:13.970879  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:14.017105  358357 cri.go:89] found id: ""
	I1205 21:43:14.017140  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.017152  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:14.017159  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:14.017228  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:14.056797  358357 cri.go:89] found id: ""
	I1205 21:43:14.056831  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.056843  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:14.056850  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:14.056922  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:14.090687  358357 cri.go:89] found id: ""
	I1205 21:43:14.090727  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.090740  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:14.090747  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:14.090808  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:14.128280  358357 cri.go:89] found id: ""
	I1205 21:43:14.128320  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.128333  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:14.128341  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:14.128410  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:14.167386  358357 cri.go:89] found id: ""
	I1205 21:43:14.167420  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.167428  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:14.167435  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:14.167498  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:14.203376  358357 cri.go:89] found id: ""
	I1205 21:43:14.203408  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.203419  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:14.203427  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:14.203495  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:14.238271  358357 cri.go:89] found id: ""
	I1205 21:43:14.238308  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.238319  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:14.238333  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:14.238353  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:14.290565  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:14.290609  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:14.305062  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:14.305106  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:14.375343  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:14.375375  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:14.375392  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:14.456771  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:14.456826  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:13.107746  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:15.607571  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:13.484545  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:15.485414  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:15.899518  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:17.900034  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:16.997441  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:17.011258  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:17.011344  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:17.045557  358357 cri.go:89] found id: ""
	I1205 21:43:17.045599  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.045613  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:17.045623  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:17.045689  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:17.080094  358357 cri.go:89] found id: ""
	I1205 21:43:17.080131  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.080144  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:17.080152  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:17.080228  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:17.113336  358357 cri.go:89] found id: ""
	I1205 21:43:17.113375  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.113387  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:17.113396  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:17.113461  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:17.147392  358357 cri.go:89] found id: ""
	I1205 21:43:17.147431  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.147443  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:17.147452  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:17.147521  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:17.182308  358357 cri.go:89] found id: ""
	I1205 21:43:17.182359  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.182370  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:17.182376  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:17.182443  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:17.216848  358357 cri.go:89] found id: ""
	I1205 21:43:17.216886  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.216917  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:17.216926  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:17.216999  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:17.251515  358357 cri.go:89] found id: ""
	I1205 21:43:17.251553  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.251565  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:17.251573  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:17.251645  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:17.284664  358357 cri.go:89] found id: ""
	I1205 21:43:17.284691  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.284700  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:17.284711  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:17.284723  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:17.335642  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:17.335685  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:17.349100  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:17.349133  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:17.427338  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:17.427362  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:17.427378  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:17.507314  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:17.507366  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:20.049650  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:20.063058  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:20.063152  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:20.096637  358357 cri.go:89] found id: ""
	I1205 21:43:20.096674  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.096687  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:20.096696  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:20.096761  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:20.134010  358357 cri.go:89] found id: ""
	I1205 21:43:20.134041  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.134054  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:20.134061  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:20.134128  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:20.173232  358357 cri.go:89] found id: ""
	I1205 21:43:20.173272  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.173292  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:20.173301  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:20.173374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:20.208411  358357 cri.go:89] found id: ""
	I1205 21:43:20.208441  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.208451  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:20.208457  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:20.208515  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:20.244682  358357 cri.go:89] found id: ""
	I1205 21:43:20.244715  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.244729  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:20.244737  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:20.244835  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:20.278659  358357 cri.go:89] found id: ""
	I1205 21:43:20.278692  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.278701  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:20.278708  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:20.278773  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:20.313894  358357 cri.go:89] found id: ""
	I1205 21:43:20.313963  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.313978  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:20.313986  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:20.314049  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:20.351924  358357 cri.go:89] found id: ""
	I1205 21:43:20.351957  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.351966  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:20.351976  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:20.351992  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:20.365712  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:20.365752  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:20.448062  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:20.448096  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:20.448115  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:20.530550  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:20.530593  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:17.611740  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:20.107637  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:22.108801  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:17.985246  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:19.985378  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:22.484721  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:20.400560  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:22.400956  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:24.899642  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:20.573612  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:20.573644  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:23.128630  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:23.141915  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:23.141991  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:23.177986  358357 cri.go:89] found id: ""
	I1205 21:43:23.178024  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.178033  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:23.178040  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:23.178104  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:23.211957  358357 cri.go:89] found id: ""
	I1205 21:43:23.211995  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.212005  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:23.212016  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:23.212075  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:23.247747  358357 cri.go:89] found id: ""
	I1205 21:43:23.247775  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.247783  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:23.247789  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:23.247847  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:23.282556  358357 cri.go:89] found id: ""
	I1205 21:43:23.282602  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.282616  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:23.282624  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:23.282689  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:23.317629  358357 cri.go:89] found id: ""
	I1205 21:43:23.317661  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.317670  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:23.317676  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:23.317749  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:23.352085  358357 cri.go:89] found id: ""
	I1205 21:43:23.352114  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.352123  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:23.352130  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:23.352190  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:23.391452  358357 cri.go:89] found id: ""
	I1205 21:43:23.391483  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.391495  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:23.391503  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:23.391587  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:23.427325  358357 cri.go:89] found id: ""
	I1205 21:43:23.427361  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.427370  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:23.427380  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:23.427395  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:23.502923  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:23.502954  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:23.502970  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:23.588869  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:23.588918  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:23.626986  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:23.627029  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:23.677290  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:23.677343  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:24.607867  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.609049  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:24.484755  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.486039  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.899834  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:29.400266  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.191893  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:26.206289  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:26.206376  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:26.244696  358357 cri.go:89] found id: ""
	I1205 21:43:26.244726  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.244739  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:26.244748  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:26.244818  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:26.277481  358357 cri.go:89] found id: ""
	I1205 21:43:26.277509  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.277519  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:26.277526  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:26.277602  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:26.312648  358357 cri.go:89] found id: ""
	I1205 21:43:26.312771  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.312807  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:26.312819  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:26.312897  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:26.348986  358357 cri.go:89] found id: ""
	I1205 21:43:26.349017  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.349026  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:26.349034  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:26.349111  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:26.382552  358357 cri.go:89] found id: ""
	I1205 21:43:26.382582  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.382591  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:26.382597  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:26.382667  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:26.419741  358357 cri.go:89] found id: ""
	I1205 21:43:26.419780  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.419791  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:26.419798  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:26.419860  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:26.458604  358357 cri.go:89] found id: ""
	I1205 21:43:26.458639  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.458649  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:26.458656  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:26.458716  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:26.492547  358357 cri.go:89] found id: ""
	I1205 21:43:26.492575  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.492589  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:26.492600  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:26.492614  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:26.543734  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:26.543784  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:26.557495  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:26.557529  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:26.632104  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:26.632135  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:26.632155  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:26.711876  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:26.711929  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:29.251703  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:29.265023  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:29.265108  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:29.301837  358357 cri.go:89] found id: ""
	I1205 21:43:29.301875  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.301910  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:29.301922  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:29.301994  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:29.335968  358357 cri.go:89] found id: ""
	I1205 21:43:29.336001  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.336015  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:29.336024  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:29.336090  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:29.370471  358357 cri.go:89] found id: ""
	I1205 21:43:29.370500  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.370512  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:29.370521  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:29.370585  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:29.406408  358357 cri.go:89] found id: ""
	I1205 21:43:29.406443  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.406456  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:29.406464  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:29.406537  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:29.442657  358357 cri.go:89] found id: ""
	I1205 21:43:29.442689  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.442700  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:29.442708  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:29.442776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:29.485257  358357 cri.go:89] found id: ""
	I1205 21:43:29.485291  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.485302  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:29.485311  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:29.485374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:29.520186  358357 cri.go:89] found id: ""
	I1205 21:43:29.520218  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.520229  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:29.520238  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:29.520312  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:29.555875  358357 cri.go:89] found id: ""
	I1205 21:43:29.555908  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.555920  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:29.555931  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:29.555949  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:29.569277  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:29.569312  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:29.643777  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:29.643810  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:29.643828  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:29.721856  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:29.721932  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:29.763402  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:29.763437  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:29.108987  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:31.608186  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:28.486609  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:30.985559  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:31.899471  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:34.399084  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:32.316122  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:32.329958  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:32.330122  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:32.362518  358357 cri.go:89] found id: ""
	I1205 21:43:32.362562  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.362575  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:32.362585  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:32.362655  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:32.396558  358357 cri.go:89] found id: ""
	I1205 21:43:32.396650  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.396668  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:32.396683  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:32.396759  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:32.430931  358357 cri.go:89] found id: ""
	I1205 21:43:32.430958  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.430966  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:32.430972  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:32.431025  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:32.468557  358357 cri.go:89] found id: ""
	I1205 21:43:32.468597  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.468607  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:32.468613  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:32.468698  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:32.503548  358357 cri.go:89] found id: ""
	I1205 21:43:32.503586  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.503599  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:32.503608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:32.503680  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:32.538516  358357 cri.go:89] found id: ""
	I1205 21:43:32.538559  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.538573  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:32.538582  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:32.538658  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:32.570768  358357 cri.go:89] found id: ""
	I1205 21:43:32.570804  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.570817  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:32.570886  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:32.570963  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:32.604812  358357 cri.go:89] found id: ""
	I1205 21:43:32.604851  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.604864  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:32.604876  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:32.604899  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:32.667787  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:32.667831  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:32.681437  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:32.681472  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:32.761208  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:32.761235  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:32.761249  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:32.844838  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:32.844882  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:35.386488  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:35.401884  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:35.401987  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:35.437976  358357 cri.go:89] found id: ""
	I1205 21:43:35.438007  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.438017  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:35.438023  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:35.438089  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:35.478157  358357 cri.go:89] found id: ""
	I1205 21:43:35.478202  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.478214  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:35.478222  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:35.478292  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:35.516671  358357 cri.go:89] found id: ""
	I1205 21:43:35.516717  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.516731  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:35.516805  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:35.516897  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:35.551255  358357 cri.go:89] found id: ""
	I1205 21:43:35.551284  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.551295  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:35.551302  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:35.551357  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:34.108153  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:36.108668  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:32.986075  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:35.484135  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:37.485074  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:36.399714  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:38.900550  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:35.588294  358357 cri.go:89] found id: ""
	I1205 21:43:35.588325  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.588334  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:35.588341  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:35.588405  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:35.622659  358357 cri.go:89] found id: ""
	I1205 21:43:35.622691  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.622700  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:35.622707  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:35.622774  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:35.656864  358357 cri.go:89] found id: ""
	I1205 21:43:35.656893  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.656901  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:35.656908  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:35.656961  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:35.697507  358357 cri.go:89] found id: ""
	I1205 21:43:35.697554  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.697567  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:35.697579  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:35.697599  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:35.745717  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:35.745758  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:35.759004  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:35.759036  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:35.828958  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:35.828992  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:35.829010  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:35.905023  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:35.905063  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:38.445492  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:38.459922  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:38.460006  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:38.495791  358357 cri.go:89] found id: ""
	I1205 21:43:38.495829  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.495840  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:38.495849  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:38.495918  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:38.530056  358357 cri.go:89] found id: ""
	I1205 21:43:38.530088  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.530097  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:38.530104  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:38.530177  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:38.566865  358357 cri.go:89] found id: ""
	I1205 21:43:38.566896  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.566905  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:38.566912  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:38.566983  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:38.600870  358357 cri.go:89] found id: ""
	I1205 21:43:38.600905  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.600918  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:38.600926  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:38.600995  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:38.639270  358357 cri.go:89] found id: ""
	I1205 21:43:38.639308  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.639317  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:38.639324  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:38.639395  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:38.678671  358357 cri.go:89] found id: ""
	I1205 21:43:38.678720  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.678736  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:38.678745  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:38.678812  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:38.715126  358357 cri.go:89] found id: ""
	I1205 21:43:38.715160  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.715169  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:38.715176  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:38.715236  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:38.750621  358357 cri.go:89] found id: ""
	I1205 21:43:38.750660  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.750674  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:38.750688  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:38.750706  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:38.801336  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:38.801386  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:38.817206  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:38.817243  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:38.899496  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:38.899526  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:38.899542  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:38.987043  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:38.987096  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:38.608744  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.107606  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:39.486171  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.984199  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.400104  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:43.898622  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.535073  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:41.550469  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:41.550543  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:41.591727  358357 cri.go:89] found id: ""
	I1205 21:43:41.591768  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.591781  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:41.591790  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:41.591861  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:41.628657  358357 cri.go:89] found id: ""
	I1205 21:43:41.628691  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.628703  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:41.628711  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:41.628782  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:41.674165  358357 cri.go:89] found id: ""
	I1205 21:43:41.674210  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.674224  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:41.674238  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:41.674318  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:41.713785  358357 cri.go:89] found id: ""
	I1205 21:43:41.713836  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.713856  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:41.713866  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:41.713959  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:41.752119  358357 cri.go:89] found id: ""
	I1205 21:43:41.752152  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.752162  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:41.752169  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:41.752224  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:41.787379  358357 cri.go:89] found id: ""
	I1205 21:43:41.787414  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.787427  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:41.787439  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:41.787517  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:41.827473  358357 cri.go:89] found id: ""
	I1205 21:43:41.827505  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.827516  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:41.827523  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:41.827580  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:41.864685  358357 cri.go:89] found id: ""
	I1205 21:43:41.864724  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.864737  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:41.864750  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:41.864767  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:41.919751  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:41.919797  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:41.933494  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:41.933527  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:42.007384  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:42.007478  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:42.007516  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:42.085929  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:42.085974  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:44.625416  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:44.640399  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:44.640466  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:44.676232  358357 cri.go:89] found id: ""
	I1205 21:43:44.676279  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.676292  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:44.676302  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:44.676386  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:44.714304  358357 cri.go:89] found id: ""
	I1205 21:43:44.714345  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.714358  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:44.714368  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:44.714438  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:44.748091  358357 cri.go:89] found id: ""
	I1205 21:43:44.748130  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.748141  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:44.748149  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:44.748225  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:44.789620  358357 cri.go:89] found id: ""
	I1205 21:43:44.789712  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.789737  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:44.789746  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:44.789808  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:44.829941  358357 cri.go:89] found id: ""
	I1205 21:43:44.829987  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.829999  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:44.830008  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:44.830080  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:44.876378  358357 cri.go:89] found id: ""
	I1205 21:43:44.876412  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.876424  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:44.876433  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:44.876503  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:44.913556  358357 cri.go:89] found id: ""
	I1205 21:43:44.913590  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.913602  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:44.913610  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:44.913676  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:44.947592  358357 cri.go:89] found id: ""
	I1205 21:43:44.947625  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.947634  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:44.947643  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:44.947658  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:44.960447  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:44.960478  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:45.035679  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:45.035716  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:45.035731  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:45.115015  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:45.115055  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:45.152866  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:45.152901  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:43.108800  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:45.109600  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:44.483302  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:46.484569  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:45.899283  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:47.900475  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:47.703949  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:47.717705  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:47.717775  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:47.753877  358357 cri.go:89] found id: ""
	I1205 21:43:47.753920  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.753933  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:47.753946  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:47.754006  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:47.790673  358357 cri.go:89] found id: ""
	I1205 21:43:47.790707  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.790718  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:47.790725  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:47.790784  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:47.829957  358357 cri.go:89] found id: ""
	I1205 21:43:47.829999  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.830013  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:47.830021  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:47.830094  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:47.869182  358357 cri.go:89] found id: ""
	I1205 21:43:47.869221  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.869235  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:47.869251  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:47.869337  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:47.906549  358357 cri.go:89] found id: ""
	I1205 21:43:47.906582  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.906592  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:47.906598  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:47.906674  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:47.944594  358357 cri.go:89] found id: ""
	I1205 21:43:47.944622  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.944631  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:47.944637  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:47.944699  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:47.981461  358357 cri.go:89] found id: ""
	I1205 21:43:47.981499  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.981512  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:47.981520  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:47.981593  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:48.016561  358357 cri.go:89] found id: ""
	I1205 21:43:48.016597  358357 logs.go:282] 0 containers: []
	W1205 21:43:48.016607  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:48.016617  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:48.016631  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:48.097690  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:48.097740  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:48.140272  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:48.140318  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:48.194365  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:48.194415  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:48.208715  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:48.208750  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:48.283159  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:47.607945  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.108918  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:48.984798  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.986257  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.399207  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:52.899857  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:54.899976  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.784026  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:50.812440  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:50.812524  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:50.866971  358357 cri.go:89] found id: ""
	I1205 21:43:50.867009  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.867022  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:50.867030  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:50.867100  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:50.910640  358357 cri.go:89] found id: ""
	I1205 21:43:50.910675  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.910686  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:50.910692  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:50.910767  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:50.944766  358357 cri.go:89] found id: ""
	I1205 21:43:50.944795  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.944803  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:50.944811  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:50.944880  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:50.978126  358357 cri.go:89] found id: ""
	I1205 21:43:50.978167  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.978178  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:50.978185  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:50.978250  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:51.015639  358357 cri.go:89] found id: ""
	I1205 21:43:51.015682  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.015693  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:51.015700  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:51.015776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:51.050114  358357 cri.go:89] found id: ""
	I1205 21:43:51.050156  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.050166  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:51.050180  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:51.050244  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:51.088492  358357 cri.go:89] found id: ""
	I1205 21:43:51.088523  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.088533  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:51.088540  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:51.088599  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:51.125732  358357 cri.go:89] found id: ""
	I1205 21:43:51.125768  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.125778  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:51.125789  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:51.125803  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:51.178278  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:51.178325  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:51.192954  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:51.192990  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:51.263378  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:51.263403  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:51.263416  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:51.341416  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:51.341463  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:53.882599  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:53.895846  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:53.895961  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:53.929422  358357 cri.go:89] found id: ""
	I1205 21:43:53.929465  358357 logs.go:282] 0 containers: []
	W1205 21:43:53.929480  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:53.929490  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:53.929568  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:53.965935  358357 cri.go:89] found id: ""
	I1205 21:43:53.965976  358357 logs.go:282] 0 containers: []
	W1205 21:43:53.965990  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:53.966001  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:53.966075  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:54.011360  358357 cri.go:89] found id: ""
	I1205 21:43:54.011394  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.011406  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:54.011412  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:54.011483  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:54.049333  358357 cri.go:89] found id: ""
	I1205 21:43:54.049368  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.049377  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:54.049385  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:54.049445  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:54.087228  358357 cri.go:89] found id: ""
	I1205 21:43:54.087266  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.087279  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:54.087287  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:54.087348  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:54.122795  358357 cri.go:89] found id: ""
	I1205 21:43:54.122832  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.122845  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:54.122853  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:54.122914  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:54.157622  358357 cri.go:89] found id: ""
	I1205 21:43:54.157657  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.157666  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:54.157672  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:54.157734  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:54.195574  358357 cri.go:89] found id: ""
	I1205 21:43:54.195610  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.195624  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:54.195638  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:54.195659  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:54.235353  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:54.235403  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:54.292275  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:54.292338  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:54.306808  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:54.306842  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:54.380414  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:54.380440  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:54.380455  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:52.608190  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:54.609219  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:57.109413  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:53.484775  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:55.985011  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:57.402445  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:59.900093  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:56.956848  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:56.969840  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:56.969954  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:57.004299  358357 cri.go:89] found id: ""
	I1205 21:43:57.004405  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.004426  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:57.004434  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:57.004510  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:57.039150  358357 cri.go:89] found id: ""
	I1205 21:43:57.039176  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.039185  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:57.039192  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:57.039245  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:57.075259  358357 cri.go:89] found id: ""
	I1205 21:43:57.075299  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.075313  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:57.075331  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:57.075407  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:57.111445  358357 cri.go:89] found id: ""
	I1205 21:43:57.111474  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.111492  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:57.111500  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:57.111580  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:57.152495  358357 cri.go:89] found id: ""
	I1205 21:43:57.152527  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.152536  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:57.152548  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:57.152606  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:57.188070  358357 cri.go:89] found id: ""
	I1205 21:43:57.188106  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.188119  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:57.188126  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:57.188198  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:57.222213  358357 cri.go:89] found id: ""
	I1205 21:43:57.222245  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.222260  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:57.222268  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:57.222354  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:57.254072  358357 cri.go:89] found id: ""
	I1205 21:43:57.254101  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.254110  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:57.254120  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:57.254136  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:57.307411  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:57.307456  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:57.323095  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:57.323130  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:57.400894  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:57.400928  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:57.400951  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:57.479628  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:57.479670  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:00.018936  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:00.032067  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:00.032149  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:00.065807  358357 cri.go:89] found id: ""
	I1205 21:44:00.065835  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.065844  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:00.065851  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:00.065931  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:00.100810  358357 cri.go:89] found id: ""
	I1205 21:44:00.100839  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.100847  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:00.100854  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:00.100920  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:00.136341  358357 cri.go:89] found id: ""
	I1205 21:44:00.136375  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.136388  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:00.136396  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:00.136454  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:00.173170  358357 cri.go:89] found id: ""
	I1205 21:44:00.173206  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.173227  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:00.173235  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:00.173332  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:00.208319  358357 cri.go:89] found id: ""
	I1205 21:44:00.208351  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.208363  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:00.208371  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:00.208438  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:00.250416  358357 cri.go:89] found id: ""
	I1205 21:44:00.250449  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.250463  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:00.250474  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:00.250546  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:00.285170  358357 cri.go:89] found id: ""
	I1205 21:44:00.285200  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.285212  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:00.285221  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:00.285290  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:00.320837  358357 cri.go:89] found id: ""
	I1205 21:44:00.320870  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.320879  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:00.320889  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:00.320901  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:00.334341  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:00.334375  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:00.400547  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:00.400575  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:00.400592  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:00.476133  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:00.476181  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:00.514760  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:00.514795  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:59.606994  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:01.608870  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:58.484178  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:00.484913  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:02.399767  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:04.900007  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:03.067793  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:03.081940  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:03.082023  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:03.118846  358357 cri.go:89] found id: ""
	I1205 21:44:03.118886  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.118897  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:03.118905  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:03.118962  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:03.156092  358357 cri.go:89] found id: ""
	I1205 21:44:03.156128  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.156140  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:03.156148  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:03.156219  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:03.189783  358357 cri.go:89] found id: ""
	I1205 21:44:03.189824  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.189837  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:03.189845  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:03.189913  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:03.225034  358357 cri.go:89] found id: ""
	I1205 21:44:03.225069  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.225081  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:03.225095  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:03.225177  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:03.258959  358357 cri.go:89] found id: ""
	I1205 21:44:03.258991  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.259003  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:03.259011  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:03.259075  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:03.292871  358357 cri.go:89] found id: ""
	I1205 21:44:03.292907  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.292920  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:03.292927  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:03.292983  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:03.327659  358357 cri.go:89] found id: ""
	I1205 21:44:03.327707  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.327730  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:03.327738  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:03.327810  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:03.369576  358357 cri.go:89] found id: ""
	I1205 21:44:03.369614  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.369627  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:03.369641  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:03.369656  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:03.424527  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:03.424580  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:03.438199  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:03.438231  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:03.509107  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:03.509139  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:03.509158  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:03.595637  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:03.595717  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:04.108126  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:06.109347  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:02.984401  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:04.987542  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:07.484630  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:06.900439  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:09.400464  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:06.135947  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:06.149530  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:06.149602  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:06.185659  358357 cri.go:89] found id: ""
	I1205 21:44:06.185692  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.185702  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:06.185709  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:06.185775  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:06.223238  358357 cri.go:89] found id: ""
	I1205 21:44:06.223281  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.223291  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:06.223298  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:06.223357  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:06.261842  358357 cri.go:89] found id: ""
	I1205 21:44:06.261884  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.261911  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:06.261920  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:06.261996  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:06.304416  358357 cri.go:89] found id: ""
	I1205 21:44:06.304455  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.304466  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:06.304475  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:06.304554  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:06.339676  358357 cri.go:89] found id: ""
	I1205 21:44:06.339711  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.339723  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:06.339732  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:06.339785  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:06.375594  358357 cri.go:89] found id: ""
	I1205 21:44:06.375630  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.375640  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:06.375647  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:06.375722  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:06.410953  358357 cri.go:89] found id: ""
	I1205 21:44:06.410986  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.410996  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:06.411002  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:06.411069  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:06.445559  358357 cri.go:89] found id: ""
	I1205 21:44:06.445590  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.445603  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:06.445617  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:06.445634  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:06.497474  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:06.497534  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:06.512032  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:06.512065  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:06.582809  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:06.582845  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:06.582862  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:06.663652  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:06.663696  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:09.204305  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:09.217648  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:09.217738  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:09.255398  358357 cri.go:89] found id: ""
	I1205 21:44:09.255441  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.255454  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:09.255463  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:09.255533  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:09.290268  358357 cri.go:89] found id: ""
	I1205 21:44:09.290296  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.290310  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:09.290316  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:09.290384  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:09.324546  358357 cri.go:89] found id: ""
	I1205 21:44:09.324586  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.324599  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:09.324608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:09.324684  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:09.358619  358357 cri.go:89] found id: ""
	I1205 21:44:09.358665  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.358677  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:09.358686  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:09.358757  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:09.395697  358357 cri.go:89] found id: ""
	I1205 21:44:09.395736  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.395749  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:09.395758  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:09.395838  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:09.437064  358357 cri.go:89] found id: ""
	I1205 21:44:09.437099  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.437108  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:09.437115  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:09.437172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:09.472330  358357 cri.go:89] found id: ""
	I1205 21:44:09.472368  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.472380  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:09.472388  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:09.472460  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:09.507468  358357 cri.go:89] found id: ""
	I1205 21:44:09.507510  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.507524  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:09.507538  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:09.507555  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:09.583640  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:09.583683  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:09.625830  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:09.625876  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:09.681668  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:09.681720  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:09.695305  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:09.695346  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:09.770136  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:08.608008  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:10.608715  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:09.485975  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:11.983682  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:11.899933  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:14.399690  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:12.270576  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:12.287283  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:12.287367  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:12.320855  358357 cri.go:89] found id: ""
	I1205 21:44:12.320890  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.320902  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:12.320911  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:12.320981  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:12.354550  358357 cri.go:89] found id: ""
	I1205 21:44:12.354595  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.354608  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:12.354617  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:12.354685  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:12.388487  358357 cri.go:89] found id: ""
	I1205 21:44:12.388519  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.388532  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:12.388542  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:12.388600  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:12.424338  358357 cri.go:89] found id: ""
	I1205 21:44:12.424366  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.424375  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:12.424382  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:12.424448  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:12.465997  358357 cri.go:89] found id: ""
	I1205 21:44:12.466028  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.466038  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:12.466044  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:12.466111  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:12.503567  358357 cri.go:89] found id: ""
	I1205 21:44:12.503602  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.503616  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:12.503625  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:12.503700  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:12.538669  358357 cri.go:89] found id: ""
	I1205 21:44:12.538696  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.538705  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:12.538711  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:12.538763  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:12.576375  358357 cri.go:89] found id: ""
	I1205 21:44:12.576416  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.576429  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:12.576442  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:12.576458  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:12.625471  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:12.625512  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:12.639689  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:12.639729  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:12.710873  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:12.710896  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:12.710936  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:12.789800  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:12.789841  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:15.331451  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:15.344354  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:15.344441  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:15.378596  358357 cri.go:89] found id: ""
	I1205 21:44:15.378631  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.378640  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:15.378647  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:15.378718  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:15.418342  358357 cri.go:89] found id: ""
	I1205 21:44:15.418373  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.418386  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:15.418394  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:15.418461  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:15.454130  358357 cri.go:89] found id: ""
	I1205 21:44:15.454167  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.454179  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:15.454187  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:15.454269  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:15.490777  358357 cri.go:89] found id: ""
	I1205 21:44:15.490813  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.490824  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:15.490831  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:15.490887  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:15.523706  358357 cri.go:89] found id: ""
	I1205 21:44:15.523747  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.523760  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:15.523768  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:15.523839  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:15.559019  358357 cri.go:89] found id: ""
	I1205 21:44:15.559049  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.559058  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:15.559065  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:15.559121  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:13.107960  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:15.607620  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:13.984413  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:15.984615  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:16.401714  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:18.900883  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:15.592611  358357 cri.go:89] found id: ""
	I1205 21:44:15.592640  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.592649  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:15.592655  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:15.592707  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:15.628295  358357 cri.go:89] found id: ""
	I1205 21:44:15.628333  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.628344  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:15.628354  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:15.628366  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:15.711123  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:15.711174  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:15.757486  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:15.757519  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:15.805750  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:15.805797  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:15.820685  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:15.820722  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:15.887073  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:18.388126  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:18.403082  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:18.403165  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:18.436195  358357 cri.go:89] found id: ""
	I1205 21:44:18.436230  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.436243  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:18.436255  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:18.436346  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:18.471756  358357 cri.go:89] found id: ""
	I1205 21:44:18.471788  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.471797  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:18.471804  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:18.471863  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:18.510693  358357 cri.go:89] found id: ""
	I1205 21:44:18.510741  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.510754  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:18.510763  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:18.510831  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:18.551976  358357 cri.go:89] found id: ""
	I1205 21:44:18.552014  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.552027  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:18.552036  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:18.552105  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:18.587679  358357 cri.go:89] found id: ""
	I1205 21:44:18.587716  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.587729  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:18.587738  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:18.587810  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:18.631487  358357 cri.go:89] found id: ""
	I1205 21:44:18.631519  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.631529  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:18.631547  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:18.631620  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:18.663618  358357 cri.go:89] found id: ""
	I1205 21:44:18.663646  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.663656  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:18.663665  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:18.663725  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:18.697864  358357 cri.go:89] found id: ""
	I1205 21:44:18.697894  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.697929  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:18.697943  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:18.697960  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:18.710777  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:18.710808  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:18.784195  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:18.784222  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:18.784241  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:18.863023  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:18.863071  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:18.903228  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:18.903267  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:18.106883  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:20.107752  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:22.110346  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:18.484897  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:20.983954  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:21.399201  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:23.400564  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:21.454547  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:21.468048  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:21.468131  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:21.501472  358357 cri.go:89] found id: ""
	I1205 21:44:21.501503  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.501512  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:21.501518  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:21.501576  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:21.536522  358357 cri.go:89] found id: ""
	I1205 21:44:21.536564  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.536579  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:21.536589  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:21.536653  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:21.570924  358357 cri.go:89] found id: ""
	I1205 21:44:21.570955  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.570965  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:21.570971  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:21.571039  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:21.607649  358357 cri.go:89] found id: ""
	I1205 21:44:21.607678  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.607688  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:21.607697  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:21.607766  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:21.647025  358357 cri.go:89] found id: ""
	I1205 21:44:21.647052  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.647061  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:21.647067  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:21.647118  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:21.684418  358357 cri.go:89] found id: ""
	I1205 21:44:21.684460  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.684472  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:21.684481  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:21.684554  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:21.722093  358357 cri.go:89] found id: ""
	I1205 21:44:21.722129  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.722141  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:21.722149  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:21.722208  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:21.755757  358357 cri.go:89] found id: ""
	I1205 21:44:21.755794  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.755807  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:21.755821  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:21.755839  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:21.809049  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:21.809110  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:21.823336  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:21.823371  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:21.894389  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:21.894412  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:21.894428  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:21.980288  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:21.980336  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:24.522528  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:24.535496  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:24.535587  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:24.570301  358357 cri.go:89] found id: ""
	I1205 21:44:24.570354  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.570369  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:24.570379  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:24.570452  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:24.606310  358357 cri.go:89] found id: ""
	I1205 21:44:24.606340  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.606351  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:24.606358  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:24.606427  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:24.644078  358357 cri.go:89] found id: ""
	I1205 21:44:24.644183  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.644198  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:24.644208  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:24.644293  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:24.679685  358357 cri.go:89] found id: ""
	I1205 21:44:24.679719  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.679729  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:24.679736  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:24.679817  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:24.717070  358357 cri.go:89] found id: ""
	I1205 21:44:24.717180  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.717216  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:24.717236  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:24.717309  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:24.757345  358357 cri.go:89] found id: ""
	I1205 21:44:24.757380  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.757393  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:24.757401  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:24.757480  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:24.790795  358357 cri.go:89] found id: ""
	I1205 21:44:24.790823  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.790835  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:24.790850  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:24.790911  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:24.827238  358357 cri.go:89] found id: ""
	I1205 21:44:24.827276  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.827290  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:24.827302  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:24.827318  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:24.876812  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:24.876861  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:24.916558  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:24.916604  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:24.990733  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:24.990764  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:24.990785  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:25.065792  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:25.065852  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:24.608796  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:27.107897  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:22.984109  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:24.984259  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:26.985689  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:25.899361  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:27.900251  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:29.900465  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:27.608859  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:27.622449  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:27.622516  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:27.655675  358357 cri.go:89] found id: ""
	I1205 21:44:27.655704  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.655713  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:27.655718  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:27.655785  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:27.689751  358357 cri.go:89] found id: ""
	I1205 21:44:27.689781  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.689789  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:27.689795  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:27.689870  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:27.726811  358357 cri.go:89] found id: ""
	I1205 21:44:27.726842  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.726856  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:27.726865  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:27.726930  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:27.759600  358357 cri.go:89] found id: ""
	I1205 21:44:27.759631  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.759653  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:27.759660  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:27.759716  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:27.791700  358357 cri.go:89] found id: ""
	I1205 21:44:27.791738  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.791751  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:27.791763  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:27.791828  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:27.827998  358357 cri.go:89] found id: ""
	I1205 21:44:27.828031  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.828039  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:27.828045  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:27.828102  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:27.861452  358357 cri.go:89] found id: ""
	I1205 21:44:27.861481  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.861490  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:27.861496  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:27.861560  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:27.896469  358357 cri.go:89] found id: ""
	I1205 21:44:27.896519  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.896532  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:27.896545  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:27.896560  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:27.935274  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:27.935312  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:27.986078  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:27.986116  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:28.000432  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:28.000463  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:28.074500  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:28.074530  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:28.074549  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:29.107971  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:31.108444  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:29.483791  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:31.484249  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:32.399397  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:34.400078  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:30.660117  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:30.672827  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:30.672907  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:30.711952  358357 cri.go:89] found id: ""
	I1205 21:44:30.711983  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.711993  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:30.711999  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:30.712051  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:30.747513  358357 cri.go:89] found id: ""
	I1205 21:44:30.747548  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.747558  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:30.747567  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:30.747627  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:30.782830  358357 cri.go:89] found id: ""
	I1205 21:44:30.782867  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.782878  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:30.782887  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:30.782980  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:30.820054  358357 cri.go:89] found id: ""
	I1205 21:44:30.820098  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.820111  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:30.820123  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:30.820198  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:30.857325  358357 cri.go:89] found id: ""
	I1205 21:44:30.857362  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.857373  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:30.857382  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:30.857453  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:30.893105  358357 cri.go:89] found id: ""
	I1205 21:44:30.893227  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.893267  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:30.893281  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:30.893356  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:30.932764  358357 cri.go:89] found id: ""
	I1205 21:44:30.932802  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.932815  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:30.932823  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:30.932885  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:30.968962  358357 cri.go:89] found id: ""
	I1205 21:44:30.968999  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.969011  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:30.969023  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:30.969037  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:31.022152  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:31.022198  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:31.035418  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:31.035453  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:31.100989  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:31.101017  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:31.101030  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:31.182034  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:31.182079  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:33.725770  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:33.740956  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:33.741040  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:33.779158  358357 cri.go:89] found id: ""
	I1205 21:44:33.779198  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.779210  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:33.779218  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:33.779280  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:33.814600  358357 cri.go:89] found id: ""
	I1205 21:44:33.814628  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.814641  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:33.814649  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:33.814710  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:33.850220  358357 cri.go:89] found id: ""
	I1205 21:44:33.850255  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.850267  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:33.850276  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:33.850334  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:33.883737  358357 cri.go:89] found id: ""
	I1205 21:44:33.883765  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.883774  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:33.883781  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:33.883837  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:33.915007  358357 cri.go:89] found id: ""
	I1205 21:44:33.915046  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.915059  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:33.915068  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:33.915140  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:33.949038  358357 cri.go:89] found id: ""
	I1205 21:44:33.949077  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.949093  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:33.949102  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:33.949172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:33.982396  358357 cri.go:89] found id: ""
	I1205 21:44:33.982425  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.982437  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:33.982444  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:33.982521  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:34.020834  358357 cri.go:89] found id: ""
	I1205 21:44:34.020870  358357 logs.go:282] 0 containers: []
	W1205 21:44:34.020882  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:34.020894  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:34.020911  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:34.103184  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:34.103238  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:34.147047  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:34.147091  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:34.196893  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:34.196942  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:34.211694  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:34.211730  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:34.282543  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:33.607930  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:36.108359  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:33.484472  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:35.484512  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:36.400821  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:38.899618  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:36.783278  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:36.798192  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:36.798266  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:36.832685  358357 cri.go:89] found id: ""
	I1205 21:44:36.832723  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.832736  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:36.832743  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:36.832814  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:36.868040  358357 cri.go:89] found id: ""
	I1205 21:44:36.868074  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.868085  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:36.868092  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:36.868156  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:36.901145  358357 cri.go:89] found id: ""
	I1205 21:44:36.901177  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.901186  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:36.901192  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:36.901248  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:36.935061  358357 cri.go:89] found id: ""
	I1205 21:44:36.935097  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.935107  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:36.935114  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:36.935183  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:36.984729  358357 cri.go:89] found id: ""
	I1205 21:44:36.984761  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.984773  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:36.984782  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:36.984854  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:37.024644  358357 cri.go:89] found id: ""
	I1205 21:44:37.024684  358357 logs.go:282] 0 containers: []
	W1205 21:44:37.024696  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:37.024706  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:37.024781  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:37.074238  358357 cri.go:89] found id: ""
	I1205 21:44:37.074275  358357 logs.go:282] 0 containers: []
	W1205 21:44:37.074287  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:37.074295  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:37.074356  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:37.142410  358357 cri.go:89] found id: ""
	I1205 21:44:37.142444  358357 logs.go:282] 0 containers: []
	W1205 21:44:37.142457  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:37.142469  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:37.142488  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:37.192977  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:37.193018  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:37.206357  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:37.206393  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:37.272336  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:37.272372  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:37.272390  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:37.350655  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:37.350718  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:39.897421  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:39.911734  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:39.911806  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:39.950380  358357 cri.go:89] found id: ""
	I1205 21:44:39.950418  358357 logs.go:282] 0 containers: []
	W1205 21:44:39.950432  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:39.950441  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:39.950511  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:39.987259  358357 cri.go:89] found id: ""
	I1205 21:44:39.987292  358357 logs.go:282] 0 containers: []
	W1205 21:44:39.987302  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:39.987308  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:39.987363  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:40.021052  358357 cri.go:89] found id: ""
	I1205 21:44:40.021081  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.021090  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:40.021096  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:40.021167  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:40.057837  358357 cri.go:89] found id: ""
	I1205 21:44:40.057878  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.057919  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:40.057930  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:40.058004  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:40.094797  358357 cri.go:89] found id: ""
	I1205 21:44:40.094837  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.094853  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:40.094863  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:40.094932  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:40.130356  358357 cri.go:89] found id: ""
	I1205 21:44:40.130389  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.130398  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:40.130412  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:40.130467  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:40.164352  358357 cri.go:89] found id: ""
	I1205 21:44:40.164379  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.164389  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:40.164394  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:40.164452  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:40.197337  358357 cri.go:89] found id: ""
	I1205 21:44:40.197379  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.197397  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:40.197408  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:40.197422  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:40.210014  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:40.210051  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:40.280666  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:40.280691  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:40.280706  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:40.356849  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:40.356896  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:40.395202  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:40.395237  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:38.108650  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:40.607598  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:37.983908  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:39.986080  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:42.484571  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:40.900460  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:43.400889  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:42.950686  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:42.964078  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:42.964156  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:42.999252  358357 cri.go:89] found id: ""
	I1205 21:44:42.999286  358357 logs.go:282] 0 containers: []
	W1205 21:44:42.999299  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:42.999307  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:42.999374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:43.035393  358357 cri.go:89] found id: ""
	I1205 21:44:43.035430  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.035444  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:43.035451  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:43.035505  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:43.070649  358357 cri.go:89] found id: ""
	I1205 21:44:43.070681  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.070693  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:43.070703  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:43.070776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:43.103054  358357 cri.go:89] found id: ""
	I1205 21:44:43.103089  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.103101  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:43.103110  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:43.103175  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:43.138607  358357 cri.go:89] found id: ""
	I1205 21:44:43.138640  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.138653  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:43.138661  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:43.138733  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:43.172188  358357 cri.go:89] found id: ""
	I1205 21:44:43.172220  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.172234  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:43.172241  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:43.172313  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:43.204838  358357 cri.go:89] found id: ""
	I1205 21:44:43.204872  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.204882  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:43.204891  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:43.204960  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:43.239985  358357 cri.go:89] found id: ""
	I1205 21:44:43.240011  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.240020  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:43.240031  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:43.240052  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:43.291033  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:43.291088  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:43.305100  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:43.305152  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:43.378988  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:43.379020  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:43.379054  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:43.466548  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:43.466602  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:42.607901  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:44.608143  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:47.108131  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:44.984806  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:47.484110  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:45.899359  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:47.901854  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:46.007785  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:46.021496  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:46.021592  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:46.059259  358357 cri.go:89] found id: ""
	I1205 21:44:46.059296  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.059313  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:46.059321  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:46.059378  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:46.095304  358357 cri.go:89] found id: ""
	I1205 21:44:46.095336  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.095345  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:46.095351  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:46.095417  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:46.136792  358357 cri.go:89] found id: ""
	I1205 21:44:46.136822  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.136831  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:46.136837  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:46.136891  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:46.169696  358357 cri.go:89] found id: ""
	I1205 21:44:46.169726  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.169735  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:46.169742  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:46.169810  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:46.205481  358357 cri.go:89] found id: ""
	I1205 21:44:46.205513  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.205524  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:46.205531  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:46.205586  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:46.241112  358357 cri.go:89] found id: ""
	I1205 21:44:46.241157  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.241166  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:46.241173  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:46.241233  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:46.277129  358357 cri.go:89] found id: ""
	I1205 21:44:46.277159  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.277168  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:46.277174  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:46.277236  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:46.311196  358357 cri.go:89] found id: ""
	I1205 21:44:46.311238  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.311250  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:46.311275  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:46.311302  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:46.362581  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:46.362621  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:46.375887  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:46.375924  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:46.444563  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:46.444588  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:46.444605  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:46.525811  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:46.525857  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:49.065883  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:49.079482  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:49.079586  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:49.113676  358357 cri.go:89] found id: ""
	I1205 21:44:49.113706  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.113716  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:49.113722  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:49.113792  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:49.147653  358357 cri.go:89] found id: ""
	I1205 21:44:49.147686  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.147696  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:49.147702  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:49.147766  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:49.180934  358357 cri.go:89] found id: ""
	I1205 21:44:49.180981  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.180996  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:49.181004  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:49.181064  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:49.214837  358357 cri.go:89] found id: ""
	I1205 21:44:49.214874  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.214883  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:49.214891  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:49.214960  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:49.249332  358357 cri.go:89] found id: ""
	I1205 21:44:49.249369  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.249380  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:49.249387  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:49.249451  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:49.284072  358357 cri.go:89] found id: ""
	I1205 21:44:49.284101  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.284109  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:49.284116  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:49.284169  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:49.323559  358357 cri.go:89] found id: ""
	I1205 21:44:49.323597  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.323607  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:49.323614  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:49.323675  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:49.361219  358357 cri.go:89] found id: ""
	I1205 21:44:49.361253  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.361263  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:49.361275  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:49.361291  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:49.413099  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:49.413141  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:49.426610  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:49.426648  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:49.498740  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:49.498765  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:49.498794  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:49.578451  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:49.578495  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:49.608461  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:52.108005  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:49.484743  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:51.984842  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:50.401244  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:52.899546  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:54.899788  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:52.117874  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:52.131510  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:52.131601  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:52.169491  358357 cri.go:89] found id: ""
	I1205 21:44:52.169522  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.169535  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:52.169542  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:52.169617  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:52.202511  358357 cri.go:89] found id: ""
	I1205 21:44:52.202540  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.202556  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:52.202562  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:52.202630  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:52.239649  358357 cri.go:89] found id: ""
	I1205 21:44:52.239687  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.239699  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:52.239707  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:52.239771  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:52.274330  358357 cri.go:89] found id: ""
	I1205 21:44:52.274368  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.274380  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:52.274388  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:52.274452  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:52.310165  358357 cri.go:89] found id: ""
	I1205 21:44:52.310195  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.310207  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:52.310214  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:52.310284  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:52.344246  358357 cri.go:89] found id: ""
	I1205 21:44:52.344278  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.344293  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:52.344302  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:52.344375  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:52.379475  358357 cri.go:89] found id: ""
	I1205 21:44:52.379508  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.379521  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:52.379529  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:52.379606  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:52.419952  358357 cri.go:89] found id: ""
	I1205 21:44:52.419981  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.419990  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:52.420002  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:52.420014  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:52.471608  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:52.471659  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:52.486003  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:52.486036  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:52.560751  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:52.560786  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:52.560804  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:52.641284  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:52.641340  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:55.183102  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:55.197406  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:55.197502  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:55.231335  358357 cri.go:89] found id: ""
	I1205 21:44:55.231365  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.231373  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:55.231381  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:55.231440  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:55.267877  358357 cri.go:89] found id: ""
	I1205 21:44:55.267907  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.267916  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:55.267923  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:55.267978  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:55.302400  358357 cri.go:89] found id: ""
	I1205 21:44:55.302428  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.302437  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:55.302443  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:55.302496  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:55.337878  358357 cri.go:89] found id: ""
	I1205 21:44:55.337932  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.337946  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:55.337954  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:55.338008  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:55.371877  358357 cri.go:89] found id: ""
	I1205 21:44:55.371920  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.371931  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:55.371941  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:55.372020  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:55.406914  358357 cri.go:89] found id: ""
	I1205 21:44:55.406947  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.406961  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:55.406970  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:55.407043  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:55.439910  358357 cri.go:89] found id: ""
	I1205 21:44:55.439940  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.439949  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:55.439955  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:55.440011  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:55.476886  358357 cri.go:89] found id: ""
	I1205 21:44:55.476916  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.476925  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:55.476936  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:55.476949  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:55.531376  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:55.531422  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:55.545011  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:55.545050  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:44:54.108283  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:56.609653  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:53.985156  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:56.484908  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:57.400823  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:59.904973  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	W1205 21:44:55.620082  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:55.620122  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:55.620139  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:55.708465  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:55.708512  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:58.256289  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:58.269484  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:58.269560  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:58.303846  358357 cri.go:89] found id: ""
	I1205 21:44:58.303884  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.303897  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:58.303906  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:58.303978  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:58.343160  358357 cri.go:89] found id: ""
	I1205 21:44:58.343190  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.343199  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:58.343205  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:58.343269  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:58.379207  358357 cri.go:89] found id: ""
	I1205 21:44:58.379240  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.379252  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:58.379261  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:58.379323  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:58.415939  358357 cri.go:89] found id: ""
	I1205 21:44:58.415971  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.415981  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:58.415988  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:58.416046  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:58.450799  358357 cri.go:89] found id: ""
	I1205 21:44:58.450837  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.450848  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:58.450857  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:58.450927  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:58.487557  358357 cri.go:89] found id: ""
	I1205 21:44:58.487594  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.487602  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:58.487608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:58.487659  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:58.523932  358357 cri.go:89] found id: ""
	I1205 21:44:58.523960  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.523969  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:58.523976  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:58.524041  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:58.559140  358357 cri.go:89] found id: ""
	I1205 21:44:58.559169  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.559179  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:58.559193  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:58.559209  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:58.643471  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:58.643520  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:58.683077  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:58.683118  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:58.736396  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:58.736441  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:58.751080  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:58.751115  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:58.824208  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:59.108134  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:01.608008  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:58.984778  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:01.486140  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:02.400031  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:04.400426  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:01.324977  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:01.338088  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:01.338169  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:01.375859  358357 cri.go:89] found id: ""
	I1205 21:45:01.375913  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.375927  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:01.375936  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:01.376012  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:01.411327  358357 cri.go:89] found id: ""
	I1205 21:45:01.411367  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.411377  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:01.411384  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:01.411441  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:01.446560  358357 cri.go:89] found id: ""
	I1205 21:45:01.446599  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.446612  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:01.446620  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:01.446687  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:01.480650  358357 cri.go:89] found id: ""
	I1205 21:45:01.480688  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.480702  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:01.480711  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:01.480788  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:01.515546  358357 cri.go:89] found id: ""
	I1205 21:45:01.515596  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.515609  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:01.515615  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:01.515680  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:01.550395  358357 cri.go:89] found id: ""
	I1205 21:45:01.550435  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.550449  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:01.550457  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:01.550619  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:01.588327  358357 cri.go:89] found id: ""
	I1205 21:45:01.588362  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.588375  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:01.588385  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:01.588456  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:01.622881  358357 cri.go:89] found id: ""
	I1205 21:45:01.622922  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.622934  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:01.622948  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:01.622965  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:01.673702  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:01.673752  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:01.689462  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:01.689504  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:01.758509  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:01.758536  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:01.758550  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:01.839238  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:01.839294  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:04.380325  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:04.393102  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:04.393192  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:04.428295  358357 cri.go:89] found id: ""
	I1205 21:45:04.428327  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.428339  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:04.428348  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:04.428455  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:04.463190  358357 cri.go:89] found id: ""
	I1205 21:45:04.463226  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.463238  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:04.463246  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:04.463316  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:04.496966  358357 cri.go:89] found id: ""
	I1205 21:45:04.497010  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.497022  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:04.497030  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:04.497097  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:04.531907  358357 cri.go:89] found id: ""
	I1205 21:45:04.531938  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.531950  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:04.531958  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:04.532031  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:04.565760  358357 cri.go:89] found id: ""
	I1205 21:45:04.565793  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.565806  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:04.565815  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:04.565885  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:04.599720  358357 cri.go:89] found id: ""
	I1205 21:45:04.599756  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.599768  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:04.599774  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:04.599829  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:04.635208  358357 cri.go:89] found id: ""
	I1205 21:45:04.635241  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.635250  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:04.635257  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:04.635320  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:04.670121  358357 cri.go:89] found id: ""
	I1205 21:45:04.670153  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.670162  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:04.670171  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:04.670183  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:04.708596  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:04.708641  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:04.765866  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:04.765919  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:04.780740  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:04.780772  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:04.856357  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:04.856386  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:04.856406  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:03.608315  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:06.107838  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:03.983888  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:05.990166  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:06.900029  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:08.900926  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:07.437028  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:07.450097  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:07.450168  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:07.485877  358357 cri.go:89] found id: ""
	I1205 21:45:07.485921  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.485934  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:07.485943  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:07.486007  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:07.520629  358357 cri.go:89] found id: ""
	I1205 21:45:07.520658  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.520666  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:07.520673  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:07.520732  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:07.555445  358357 cri.go:89] found id: ""
	I1205 21:45:07.555476  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.555487  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:07.555493  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:07.555560  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:07.594479  358357 cri.go:89] found id: ""
	I1205 21:45:07.594513  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.594526  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:07.594533  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:07.594594  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:07.629467  358357 cri.go:89] found id: ""
	I1205 21:45:07.629498  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.629509  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:07.629516  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:07.629572  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:07.666166  358357 cri.go:89] found id: ""
	I1205 21:45:07.666204  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.666218  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:07.666227  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:07.666303  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:07.700440  358357 cri.go:89] found id: ""
	I1205 21:45:07.700472  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.700481  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:07.700490  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:07.700557  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:07.735094  358357 cri.go:89] found id: ""
	I1205 21:45:07.735130  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.735152  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:07.735166  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:07.735184  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:07.788339  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:07.788386  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:07.802847  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:07.802879  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:07.873731  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:07.873755  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:07.873771  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:07.953369  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:07.953411  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:10.492613  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:10.506259  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:10.506374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:10.540075  358357 cri.go:89] found id: ""
	I1205 21:45:10.540111  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.540120  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:10.540127  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:10.540216  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:08.108464  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:10.611075  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:08.483571  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:10.485086  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:11.399948  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:13.400364  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:10.577943  358357 cri.go:89] found id: ""
	I1205 21:45:10.577978  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.577991  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:10.577998  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:10.578073  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:10.614217  358357 cri.go:89] found id: ""
	I1205 21:45:10.614255  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.614268  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:10.614276  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:10.614346  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:10.649669  358357 cri.go:89] found id: ""
	I1205 21:45:10.649739  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.649751  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:10.649760  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:10.649830  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:10.687171  358357 cri.go:89] found id: ""
	I1205 21:45:10.687202  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.687211  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:10.687217  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:10.687307  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:10.722815  358357 cri.go:89] found id: ""
	I1205 21:45:10.722848  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.722858  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:10.722865  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:10.722934  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:10.759711  358357 cri.go:89] found id: ""
	I1205 21:45:10.759753  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.759767  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:10.759777  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:10.759849  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:10.797955  358357 cri.go:89] found id: ""
	I1205 21:45:10.797991  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.798004  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:10.798017  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:10.798034  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:10.851920  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:10.851971  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:10.867691  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:10.867728  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:10.953866  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:10.953891  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:10.953928  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:11.033945  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:11.033990  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
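
The block above is the bootstrapper probing for control-plane containers: for each expected component it runs "sudo crictl ps -a --quiet --name=<component>" on the node and, finding no IDs, falls back to gathering kubelet, dmesg, CRI-O and container-status logs. For orientation only, a minimal Go sketch of that probe loop follows; runOnNode is a hypothetical stand-in for minikube's ssh_runner (here it just runs the command locally so the sketch is self-contained), not the real API.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// runOnNode is a hypothetical helper standing in for minikube's ssh_runner;
// it runs the command locally to keep the sketch self-contained.
func runOnNode(cmd string) (string, error) {
	out, err := exec.Command("/bin/bash", "-c", cmd).Output()
	return string(out), err
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, name := range components {
		// --quiet prints only container IDs; an empty result matches the
		// `found id: ""` / `No container was found matching ...` lines above.
		out, err := runOnNode("sudo crictl ps -a --quiet --name=" + name)
		ids := strings.Fields(out)
		if err != nil || len(ids) == 0 {
			fmt.Printf("no container found matching %q\n", name)
			continue
		}
		fmt.Printf("%s: %d container(s)\n", name, len(ids))
	}
}
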
	I1205 21:45:13.574051  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:13.587371  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:13.587454  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:13.623492  358357 cri.go:89] found id: ""
	I1205 21:45:13.623524  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.623540  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:13.623546  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:13.623603  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:13.659547  358357 cri.go:89] found id: ""
	I1205 21:45:13.659588  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.659602  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:13.659610  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:13.659671  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:13.694113  358357 cri.go:89] found id: ""
	I1205 21:45:13.694153  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.694166  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:13.694173  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:13.694233  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:13.729551  358357 cri.go:89] found id: ""
	I1205 21:45:13.729591  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.729604  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:13.729613  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:13.729684  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:13.763006  358357 cri.go:89] found id: ""
	I1205 21:45:13.763049  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.763062  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:13.763071  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:13.763134  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:13.802231  358357 cri.go:89] found id: ""
	I1205 21:45:13.802277  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.802292  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:13.802302  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:13.802384  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:13.840193  358357 cri.go:89] found id: ""
	I1205 21:45:13.840225  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.840240  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:13.840249  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:13.840335  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:13.872625  358357 cri.go:89] found id: ""
	I1205 21:45:13.872653  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.872663  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:13.872673  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:13.872687  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:13.922983  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:13.923028  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:13.936484  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:13.936517  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:14.008295  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:14.008319  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:14.008334  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:14.095036  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:14.095091  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:13.110174  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:15.608405  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:12.986058  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:15.483570  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:17.484738  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:15.899141  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:17.899862  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:19.900993  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:16.637164  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:16.653070  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:16.653153  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:16.687386  358357 cri.go:89] found id: ""
	I1205 21:45:16.687441  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.687456  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:16.687466  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:16.687545  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:16.722204  358357 cri.go:89] found id: ""
	I1205 21:45:16.722235  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.722244  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:16.722250  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:16.722323  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:16.757594  358357 cri.go:89] found id: ""
	I1205 21:45:16.757622  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.757631  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:16.757637  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:16.757691  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:16.790401  358357 cri.go:89] found id: ""
	I1205 21:45:16.790433  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.790442  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:16.790449  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:16.790502  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:16.827569  358357 cri.go:89] found id: ""
	I1205 21:45:16.827602  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.827615  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:16.827624  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:16.827701  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:16.860920  358357 cri.go:89] found id: ""
	I1205 21:45:16.860949  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.860965  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:16.860974  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:16.861038  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:16.895008  358357 cri.go:89] found id: ""
	I1205 21:45:16.895051  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.895063  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:16.895072  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:16.895151  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:16.931916  358357 cri.go:89] found id: ""
	I1205 21:45:16.931951  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.931963  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:16.931975  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:16.931987  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:17.016108  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:17.016156  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:17.055353  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:17.055390  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:17.105859  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:17.105921  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:17.121357  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:17.121394  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:17.192584  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
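
Every "failed describe nodes" block above is the same symptom: kubectl on the node is pointed at localhost:8443, but with no kube-apiserver container running the connection is refused, so the describe step can never succeed until the control plane comes up. A hedged sketch of the kind of reachability check that would confirm this before attempting "kubectl describe nodes" (plain Go, run on the node; illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The node's kubeconfig points kubectl at https://localhost:8443; a refused
	// TCP dial is exactly the error repeated in the log above.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open; describe nodes should succeed")
}
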
	I1205 21:45:19.693409  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:19.706431  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:19.706498  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:19.741212  358357 cri.go:89] found id: ""
	I1205 21:45:19.741249  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.741258  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:19.741268  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:19.741335  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:19.775906  358357 cri.go:89] found id: ""
	I1205 21:45:19.775945  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.775954  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:19.775960  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:19.776031  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:19.810789  358357 cri.go:89] found id: ""
	I1205 21:45:19.810822  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.810831  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:19.810839  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:19.810897  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:19.847669  358357 cri.go:89] found id: ""
	I1205 21:45:19.847701  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.847710  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:19.847717  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:19.847776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:19.881700  358357 cri.go:89] found id: ""
	I1205 21:45:19.881739  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.881752  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:19.881761  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:19.881838  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:19.919085  358357 cri.go:89] found id: ""
	I1205 21:45:19.919125  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.919140  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:19.919148  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:19.919226  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:19.955024  358357 cri.go:89] found id: ""
	I1205 21:45:19.955064  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.955078  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:19.955086  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:19.955153  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:19.991482  358357 cri.go:89] found id: ""
	I1205 21:45:19.991511  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.991519  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:19.991530  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:19.991543  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:20.041980  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:20.042030  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:20.055580  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:20.055612  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:20.127194  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:20.127225  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:20.127242  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:20.207750  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:20.207797  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:18.108143  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:20.108435  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:22.109088  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:19.985203  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:21.986674  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:22.399189  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:24.400311  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:22.749233  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:22.763720  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:22.763796  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:22.798779  358357 cri.go:89] found id: ""
	I1205 21:45:22.798810  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.798820  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:22.798826  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:22.798906  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:22.837894  358357 cri.go:89] found id: ""
	I1205 21:45:22.837949  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.837964  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:22.837972  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:22.838026  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:22.872671  358357 cri.go:89] found id: ""
	I1205 21:45:22.872701  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.872713  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:22.872720  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:22.872785  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:22.906877  358357 cri.go:89] found id: ""
	I1205 21:45:22.906919  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.906929  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:22.906936  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:22.906988  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:22.941445  358357 cri.go:89] found id: ""
	I1205 21:45:22.941475  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.941486  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:22.941494  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:22.941565  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:22.976633  358357 cri.go:89] found id: ""
	I1205 21:45:22.976671  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.976685  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:22.976694  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:22.976773  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:23.017034  358357 cri.go:89] found id: ""
	I1205 21:45:23.017077  358357 logs.go:282] 0 containers: []
	W1205 21:45:23.017090  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:23.017096  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:23.017153  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:23.065098  358357 cri.go:89] found id: ""
	I1205 21:45:23.065136  358357 logs.go:282] 0 containers: []
	W1205 21:45:23.065149  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:23.065164  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:23.065180  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:23.145053  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:23.145104  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:23.159522  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:23.159557  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:23.228841  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:23.228865  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:23.228885  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:23.313351  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:23.313397  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:24.110151  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:26.607420  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:23.992037  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:26.484076  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:26.400904  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:28.899210  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
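
The interleaved pod_ready lines come from three other test runs (processes 357296, 357831 and 357912) polling their metrics-server pods, none of which report Ready. What that poll inspects is the pod's Ready condition; a minimal client-go sketch of the same check is below. It assumes a reachable kubeconfig at the default home location, and the pod name is taken from the log purely as an illustration.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Pod name is illustrative; the log above polls metrics-server-6867b74b74-xb867.
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
		"metrics-server-6867b74b74-xb867", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			fmt.Printf("pod %s Ready=%s\n", pod.Name, c.Status)
		}
	}
}
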
	I1205 21:45:25.852034  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:25.865843  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:25.865944  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:25.899186  358357 cri.go:89] found id: ""
	I1205 21:45:25.899212  358357 logs.go:282] 0 containers: []
	W1205 21:45:25.899222  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:25.899231  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:25.899298  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:25.938242  358357 cri.go:89] found id: ""
	I1205 21:45:25.938274  358357 logs.go:282] 0 containers: []
	W1205 21:45:25.938286  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:25.938299  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:25.938371  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:25.972322  358357 cri.go:89] found id: ""
	I1205 21:45:25.972355  358357 logs.go:282] 0 containers: []
	W1205 21:45:25.972368  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:25.972376  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:25.972446  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:26.010638  358357 cri.go:89] found id: ""
	I1205 21:45:26.010667  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.010678  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:26.010686  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:26.010754  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:26.045415  358357 cri.go:89] found id: ""
	I1205 21:45:26.045450  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.045459  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:26.045466  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:26.045548  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:26.084635  358357 cri.go:89] found id: ""
	I1205 21:45:26.084673  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.084687  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:26.084696  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:26.084767  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:26.117417  358357 cri.go:89] found id: ""
	I1205 21:45:26.117455  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.117467  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:26.117475  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:26.117539  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:26.151857  358357 cri.go:89] found id: ""
	I1205 21:45:26.151893  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.151905  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:26.151918  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:26.151936  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:26.238876  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:26.238926  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:26.280970  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:26.281006  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:26.336027  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:26.336083  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:26.350619  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:26.350654  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:26.418836  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:28.919046  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:28.933916  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:28.934002  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:28.971698  358357 cri.go:89] found id: ""
	I1205 21:45:28.971728  358357 logs.go:282] 0 containers: []
	W1205 21:45:28.971737  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:28.971744  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:28.971807  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:29.007385  358357 cri.go:89] found id: ""
	I1205 21:45:29.007423  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.007435  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:29.007443  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:29.007509  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:29.041087  358357 cri.go:89] found id: ""
	I1205 21:45:29.041130  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.041143  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:29.041151  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:29.041222  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:29.076926  358357 cri.go:89] found id: ""
	I1205 21:45:29.076965  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.076977  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:29.076986  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:29.077064  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:29.116376  358357 cri.go:89] found id: ""
	I1205 21:45:29.116419  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.116433  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:29.116443  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:29.116523  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:29.152495  358357 cri.go:89] found id: ""
	I1205 21:45:29.152530  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.152543  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:29.152552  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:29.152639  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:29.187647  358357 cri.go:89] found id: ""
	I1205 21:45:29.187681  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.187695  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:29.187704  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:29.187775  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:29.220410  358357 cri.go:89] found id: ""
	I1205 21:45:29.220452  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.220469  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:29.220484  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:29.220513  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:29.287156  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:29.287184  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:29.287200  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:29.365592  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:29.365644  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:29.407876  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:29.407917  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:29.462241  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:29.462294  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
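
Each "Gathering logs for ..." pass runs the same node-side commands: journalctl for the kubelet and crio units and a filtered dmesg, each capped at the last 400 lines. A self-contained sketch that runs the same commands locally (purely illustrative; minikube executes them over SSH via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmds := map[string]string{
		"kubelet": "sudo journalctl -u kubelet -n 400",
		"CRI-O":   "sudo journalctl -u crio -n 400",
		"dmesg":   "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for name, cmd := range cmds {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", name, err)
			continue
		}
		fmt.Printf("== %s (%d bytes) ==\n%s\n", name, len(out), out)
	}
}
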
	I1205 21:45:28.607611  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:30.608683  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:28.484925  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:30.485979  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:30.899449  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:32.900189  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:34.900501  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:31.976691  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:31.991087  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:31.991172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:32.025743  358357 cri.go:89] found id: ""
	I1205 21:45:32.025781  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.025793  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:32.025801  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:32.025870  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:32.061790  358357 cri.go:89] found id: ""
	I1205 21:45:32.061828  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.061838  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:32.061844  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:32.061929  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:32.095437  358357 cri.go:89] found id: ""
	I1205 21:45:32.095474  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.095486  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:32.095493  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:32.095553  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:32.132203  358357 cri.go:89] found id: ""
	I1205 21:45:32.132242  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.132255  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:32.132264  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:32.132325  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:32.168529  358357 cri.go:89] found id: ""
	I1205 21:45:32.168566  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.168582  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:32.168590  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:32.168661  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:32.204816  358357 cri.go:89] found id: ""
	I1205 21:45:32.204851  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.204860  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:32.204885  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:32.204949  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:32.241661  358357 cri.go:89] found id: ""
	I1205 21:45:32.241696  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.241706  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:32.241712  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:32.241768  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:32.275458  358357 cri.go:89] found id: ""
	I1205 21:45:32.275491  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.275500  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:32.275511  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:32.275524  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:32.329044  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:32.329098  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:32.343399  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:32.343432  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:32.420102  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:32.420135  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:32.420152  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:32.503061  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:32.503109  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:35.042457  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:35.056486  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:35.056564  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:35.091571  358357 cri.go:89] found id: ""
	I1205 21:45:35.091603  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.091613  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:35.091619  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:35.091686  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:35.130172  358357 cri.go:89] found id: ""
	I1205 21:45:35.130213  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.130225  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:35.130233  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:35.130303  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:35.165723  358357 cri.go:89] found id: ""
	I1205 21:45:35.165754  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.165763  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:35.165770  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:35.165836  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:35.203599  358357 cri.go:89] found id: ""
	I1205 21:45:35.203632  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.203646  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:35.203658  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:35.203721  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:35.237881  358357 cri.go:89] found id: ""
	I1205 21:45:35.237926  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.237938  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:35.237946  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:35.238015  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:35.276506  358357 cri.go:89] found id: ""
	I1205 21:45:35.276543  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.276555  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:35.276563  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:35.276632  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:35.309600  358357 cri.go:89] found id: ""
	I1205 21:45:35.309632  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.309644  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:35.309652  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:35.309723  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:35.343062  358357 cri.go:89] found id: ""
	I1205 21:45:35.343097  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.343110  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:35.343124  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:35.343146  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:35.398686  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:35.398724  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:35.412910  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:35.412945  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:35.479542  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:35.479570  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:35.479587  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:35.556709  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:35.556754  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:33.107324  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:35.108931  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:32.988514  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:35.485301  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:37.399616  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:39.400552  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:38.095347  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:38.110086  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:38.110161  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:38.149114  358357 cri.go:89] found id: ""
	I1205 21:45:38.149149  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.149162  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:38.149172  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:38.149250  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:38.184110  358357 cri.go:89] found id: ""
	I1205 21:45:38.184141  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.184151  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:38.184157  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:38.184213  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:38.219569  358357 cri.go:89] found id: ""
	I1205 21:45:38.219608  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.219620  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:38.219628  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:38.219703  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:38.253096  358357 cri.go:89] found id: ""
	I1205 21:45:38.253133  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.253158  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:38.253167  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:38.253259  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:38.291558  358357 cri.go:89] found id: ""
	I1205 21:45:38.291591  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.291601  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:38.291608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:38.291689  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:38.328236  358357 cri.go:89] found id: ""
	I1205 21:45:38.328269  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.328281  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:38.328288  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:38.328353  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:38.363263  358357 cri.go:89] found id: ""
	I1205 21:45:38.363295  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.363305  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:38.363311  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:38.363371  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:38.396544  358357 cri.go:89] found id: ""
	I1205 21:45:38.396577  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.396587  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:38.396598  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:38.396611  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:38.438187  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:38.438226  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:38.492047  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:38.492086  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:38.505080  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:38.505123  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:38.574293  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:38.574320  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:38.574343  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
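
Between probes the bootstrapper keeps re-running the pgrep check for a kube-apiserver process, collecting the same log sources, and retrying on a fixed cadence until its overall wait times out. A bare-bones sketch of that retry-with-deadline shape (plain Go; the interval and timeout values are illustrative, not minikube's actual settings):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverUp is a stand-in for the pgrep probe seen repeatedly in the log.
func apiserverUp() bool {
	return exec.Command("/bin/bash", "-c",
		"sudo pgrep -xnf 'kube-apiserver.*minikube.*'").Run() == nil
}

func main() {
	deadline := time.Now().Add(8 * time.Minute) // illustrative overall timeout
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("kube-apiserver process found")
			return
		}
		// In the real log this is where kubelet/dmesg/CRI-O logs are gathered
		// before sleeping and probing again.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
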
	I1205 21:45:37.608407  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:39.609266  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:42.107313  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:37.984499  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:40.484539  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:41.898538  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:43.900097  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:41.155780  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:41.170875  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:41.170959  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:41.206755  358357 cri.go:89] found id: ""
	I1205 21:45:41.206793  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.206807  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:41.206824  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:41.206882  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:41.251021  358357 cri.go:89] found id: ""
	I1205 21:45:41.251060  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.251074  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:41.251082  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:41.251144  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:41.286805  358357 cri.go:89] found id: ""
	I1205 21:45:41.286836  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.286845  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:41.286852  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:41.286910  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:41.319489  358357 cri.go:89] found id: ""
	I1205 21:45:41.319526  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.319540  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:41.319549  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:41.319620  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:41.352769  358357 cri.go:89] found id: ""
	I1205 21:45:41.352807  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.352817  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:41.352823  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:41.352883  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:41.386830  358357 cri.go:89] found id: ""
	I1205 21:45:41.386869  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.386881  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:41.386889  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:41.386961  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:41.424824  358357 cri.go:89] found id: ""
	I1205 21:45:41.424866  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.424882  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:41.424892  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:41.424957  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:41.460273  358357 cri.go:89] found id: ""
	I1205 21:45:41.460307  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.460316  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:41.460327  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:41.460341  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:41.539890  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:41.539951  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:41.579521  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:41.579570  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:41.630867  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:41.630917  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:41.644854  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:41.644892  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:41.719202  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
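
Each diagnostic cycle above runs the same sequence on the node over SSH: "crictl ps -a --quiet --name=<component>" for every control-plane component, then, with nothing found, it falls back to gathering kubelet, CRI-O and dmesg output plus "kubectl describe nodes" (which fails here because the apiserver is down). Below is a standalone sketch of that sequence using os/exec, run locally rather than through minikube's ssh_runner; the command set is copied from the log and error handling is simplified:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
        for _, name := range components {
            // crictl --quiet prints only container IDs; empty output means no match.
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil || strings.TrimSpace(string(out)) == "" {
                fmt.Printf("no container was found matching %q\n", name)
                continue
            }
            fmt.Printf("%s container id(s): %s", name, out)
        }
        // Same log sources the test gathers when no control-plane containers exist.
        // The log's "| tail -n 400" pipe is dropped because exec.Command has no shell.
        for _, args := range [][]string{
            {"journalctl", "-u", "kubelet", "-n", "400"},
            {"journalctl", "-u", "crio", "-n", "400"},
            {"dmesg", "-PH", "-L=never", "--level", "warn,err,crit,alert,emerg"},
        } {
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err == nil {
                fmt.Printf("--- %v ---\n%s\n", args, out)
            }
        }
    }
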
	I1205 21:45:44.219965  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:44.234714  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:44.234824  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:44.269879  358357 cri.go:89] found id: ""
	I1205 21:45:44.269931  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.269945  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:44.269954  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:44.270023  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:44.302994  358357 cri.go:89] found id: ""
	I1205 21:45:44.303034  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.303047  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:44.303056  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:44.303126  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:44.337575  358357 cri.go:89] found id: ""
	I1205 21:45:44.337604  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.337613  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:44.337620  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:44.337674  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:44.374554  358357 cri.go:89] found id: ""
	I1205 21:45:44.374591  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.374600  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:44.374605  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:44.374671  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:44.409965  358357 cri.go:89] found id: ""
	I1205 21:45:44.410001  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.410013  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:44.410021  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:44.410090  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:44.446583  358357 cri.go:89] found id: ""
	I1205 21:45:44.446620  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.446633  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:44.446641  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:44.446705  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:44.481187  358357 cri.go:89] found id: ""
	I1205 21:45:44.481223  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.481239  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:44.481248  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:44.481315  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:44.515729  358357 cri.go:89] found id: ""
	I1205 21:45:44.515761  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.515770  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:44.515781  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:44.515799  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:44.567266  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:44.567314  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:44.581186  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:44.581219  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:44.655377  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:44.655404  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:44.655420  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:44.741789  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:44.741835  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:44.108015  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:46.109878  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:42.987144  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:45.484635  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:45.900943  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:48.399795  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:47.283721  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:47.296771  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:47.296839  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:47.330892  358357 cri.go:89] found id: ""
	I1205 21:45:47.330927  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.330941  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:47.330949  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:47.331015  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:47.362771  358357 cri.go:89] found id: ""
	I1205 21:45:47.362805  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.362818  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:47.362826  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:47.362898  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:47.397052  358357 cri.go:89] found id: ""
	I1205 21:45:47.397082  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.397092  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:47.397100  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:47.397172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:47.430155  358357 cri.go:89] found id: ""
	I1205 21:45:47.430184  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.430193  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:47.430199  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:47.430255  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:47.465183  358357 cri.go:89] found id: ""
	I1205 21:45:47.465230  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.465244  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:47.465252  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:47.465327  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:47.505432  358357 cri.go:89] found id: ""
	I1205 21:45:47.505467  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.505479  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:47.505487  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:47.505583  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:47.538813  358357 cri.go:89] found id: ""
	I1205 21:45:47.538841  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.538851  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:47.538859  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:47.538913  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:47.577554  358357 cri.go:89] found id: ""
	I1205 21:45:47.577589  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.577598  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:47.577610  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:47.577623  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:47.633652  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:47.633700  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:47.648242  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:47.648291  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:47.723335  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:47.723369  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:47.723387  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:47.806404  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:47.806454  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:50.348134  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:50.361273  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:50.361367  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:50.393942  358357 cri.go:89] found id: ""
	I1205 21:45:50.393972  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.393980  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:50.393986  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:50.394054  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:50.430835  358357 cri.go:89] found id: ""
	I1205 21:45:50.430873  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.430884  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:50.430892  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:50.430963  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:50.465245  358357 cri.go:89] found id: ""
	I1205 21:45:50.465303  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.465316  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:50.465326  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:50.465397  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:50.498370  358357 cri.go:89] found id: ""
	I1205 21:45:50.498396  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.498406  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:50.498414  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:50.498480  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:50.530194  358357 cri.go:89] found id: ""
	I1205 21:45:50.530233  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.530247  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:50.530262  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:50.530383  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:48.607163  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:50.608353  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:47.984724  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:50.483783  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:52.484838  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:50.400860  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:52.898957  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:54.399893  357912 pod_ready.go:82] duration metric: took 4m0.00693537s for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	E1205 21:45:54.399922  357912 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 21:45:54.399931  357912 pod_ready.go:39] duration metric: took 4m6.388856223s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:45:54.399958  357912 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:45:54.399994  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:54.400045  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:54.436650  357912 cri.go:89] found id: "079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:54.436679  357912 cri.go:89] found id: ""
	I1205 21:45:54.436690  357912 logs.go:282] 1 containers: [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828]
	I1205 21:45:54.436751  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.440795  357912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:54.440866  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:54.475714  357912 cri.go:89] found id: "035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:54.475739  357912 cri.go:89] found id: ""
	I1205 21:45:54.475749  357912 logs.go:282] 1 containers: [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7]
	I1205 21:45:54.475879  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.480165  357912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:54.480255  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:54.516427  357912 cri.go:89] found id: "d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:54.516459  357912 cri.go:89] found id: ""
	I1205 21:45:54.516468  357912 logs.go:282] 1 containers: [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6]
	I1205 21:45:54.516529  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.520486  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:54.520548  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:54.555687  357912 cri.go:89] found id: "c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:54.555719  357912 cri.go:89] found id: ""
	I1205 21:45:54.555727  357912 logs.go:282] 1 containers: [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5]
	I1205 21:45:54.555789  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.559827  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:54.559916  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:54.596640  357912 cri.go:89] found id: "963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:54.596665  357912 cri.go:89] found id: ""
	I1205 21:45:54.596675  357912 logs.go:282] 1 containers: [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d]
	I1205 21:45:54.596753  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.601144  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:54.601229  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:54.639374  357912 cri.go:89] found id: "807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:54.639408  357912 cri.go:89] found id: ""
	I1205 21:45:54.639419  357912 logs.go:282] 1 containers: [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a]
	I1205 21:45:54.639495  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.643665  357912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:54.643754  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:54.678252  357912 cri.go:89] found id: ""
	I1205 21:45:54.678286  357912 logs.go:282] 0 containers: []
	W1205 21:45:54.678297  357912 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:54.678306  357912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 21:45:54.678373  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 21:45:54.711874  357912 cri.go:89] found id: "7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:54.711908  357912 cri.go:89] found id: "37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:54.711915  357912 cri.go:89] found id: ""
	I1205 21:45:54.711925  357912 logs.go:282] 2 containers: [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa]
	I1205 21:45:54.711994  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.716164  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.720244  357912 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:54.720274  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:55.258307  357912 logs.go:123] Gathering logs for container status ...
	I1205 21:45:55.258372  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:55.300132  357912 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:55.300198  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:55.315703  357912 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:55.315745  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:45:50.567181  358357 cri.go:89] found id: ""
	I1205 21:45:50.567216  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.567229  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:50.567237  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:50.567329  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:50.600345  358357 cri.go:89] found id: ""
	I1205 21:45:50.600376  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.600385  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:50.600392  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:50.600446  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:50.635072  358357 cri.go:89] found id: ""
	I1205 21:45:50.635108  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.635121  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:50.635133  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:50.635146  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:50.702977  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:50.703001  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:50.703020  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:50.785033  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:50.785077  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:50.825173  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:50.825214  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:50.876664  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:50.876723  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:53.391161  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:53.405635  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:53.405713  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:53.440319  358357 cri.go:89] found id: ""
	I1205 21:45:53.440358  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.440371  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:53.440380  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:53.440446  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:53.480169  358357 cri.go:89] found id: ""
	I1205 21:45:53.480195  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.480204  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:53.480210  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:53.480355  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:53.515202  358357 cri.go:89] found id: ""
	I1205 21:45:53.515233  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.515315  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:53.515332  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:53.515401  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:53.552351  358357 cri.go:89] found id: ""
	I1205 21:45:53.552388  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.552402  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:53.552411  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:53.552481  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:53.590669  358357 cri.go:89] found id: ""
	I1205 21:45:53.590705  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.590717  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:53.590726  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:53.590791  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:53.627977  358357 cri.go:89] found id: ""
	I1205 21:45:53.628015  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.628029  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:53.628037  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:53.628112  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:53.662711  358357 cri.go:89] found id: ""
	I1205 21:45:53.662745  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.662761  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:53.662769  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:53.662839  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:53.696925  358357 cri.go:89] found id: ""
	I1205 21:45:53.696965  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.696976  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:53.696988  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:53.697012  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:53.750924  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:53.750970  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:53.763965  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:53.763997  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:53.832335  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:53.832361  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:53.832377  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:53.915961  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:53.916011  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:53.107436  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:55.107826  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:57.108330  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:56.456367  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:56.469503  358357 kubeadm.go:597] duration metric: took 4m2.564660353s to restartPrimaryControlPlane
	W1205 21:45:56.469630  358357 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 21:45:56.469672  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:45:56.934079  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:45:56.948092  358357 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:45:56.958166  358357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:45:56.967591  358357 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:45:56.967613  358357 kubeadm.go:157] found existing configuration files:
	
	I1205 21:45:56.967660  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:45:56.977085  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:45:56.977152  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:45:56.987395  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:45:56.996675  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:45:56.996764  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:45:57.010323  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:45:57.020441  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:45:57.020514  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:45:57.032114  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:45:57.042012  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:45:57.042095  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
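
Before re-running kubeadm, the kubeadm.go:163 lines above check each existing kubeconfig for the expected control-plane endpoint and remove any file that does not contain it, so kubeadm can regenerate them on init. A hedged Go sketch of that check-and-remove step, with the endpoint and file list taken from the log (not minikube's actual code):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            // grep exits non-zero when the pattern (or the file itself) is missing,
            // which is exactly the "may not be in ... - will remove" case in the log.
            if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
                fmt.Printf("%s stale or missing, removing\n", f)
                if err := exec.Command("sudo", "rm", "-f", f).Run(); err != nil {
                    fmt.Fprintln(os.Stderr, err)
                }
            }
        }
    }
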
	I1205 21:45:57.051763  358357 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:45:57.126716  358357 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 21:45:57.126840  358357 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:45:57.265491  358357 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:45:57.265694  358357 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:45:57.265856  358357 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 21:45:57.450377  358357 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:45:54.486224  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:56.984442  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:57.452240  358357 out.go:235]   - Generating certificates and keys ...
	I1205 21:45:57.452361  358357 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:45:57.452458  358357 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:45:57.452625  358357 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:45:57.452712  358357 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:45:57.452824  358357 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:45:57.452913  358357 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:45:57.453084  358357 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:45:57.453179  358357 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:45:57.453276  358357 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:45:57.453343  358357 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:45:57.453377  358357 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:45:57.453430  358357 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:45:57.872211  358357 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:45:58.085006  358357 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:45:58.165194  358357 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:45:58.323597  358357 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:45:58.338715  358357 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:45:58.340504  358357 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:45:58.340604  358357 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:45:58.479241  358357 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:45:55.429307  357912 logs.go:123] Gathering logs for etcd [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7] ...
	I1205 21:45:55.429346  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:55.476044  357912 logs.go:123] Gathering logs for kube-proxy [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d] ...
	I1205 21:45:55.476085  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:55.512956  357912 logs.go:123] Gathering logs for kube-controller-manager [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a] ...
	I1205 21:45:55.513004  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:55.570534  357912 logs.go:123] Gathering logs for storage-provisioner [37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa] ...
	I1205 21:45:55.570583  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:55.608099  357912 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:55.608141  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:55.677021  357912 logs.go:123] Gathering logs for kube-apiserver [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828] ...
	I1205 21:45:55.677069  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:55.727298  357912 logs.go:123] Gathering logs for coredns [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6] ...
	I1205 21:45:55.727347  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:55.764637  357912 logs.go:123] Gathering logs for kube-scheduler [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5] ...
	I1205 21:45:55.764675  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:55.803471  357912 logs.go:123] Gathering logs for storage-provisioner [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4] ...
	I1205 21:45:55.803513  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
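
The pairs of lines above first resolve each component's container ID with "crictl ps -a --quiet --name=<component>" and then tail its logs with "crictl logs --tail 400 <id>". A small sketch of that two-step lookup for a single component, with the container name and tail length taken from the log (illustrative only):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Step 1: find the container ID(s) for the named component.
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
        if err != nil {
            fmt.Println("crictl ps failed:", err)
            return
        }
        ids := strings.Fields(string(out))
        if len(ids) == 0 {
            fmt.Println(`no container was found matching "kube-apiserver"`)
            return
        }
        // Step 2: tail the most recent matching container's logs.
        logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", ids[0]).CombinedOutput()
        if err != nil {
            fmt.Println("crictl logs failed:", err)
            return
        }
        fmt.Printf("%s", logs)
    }
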
	I1205 21:45:58.347406  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:58.362574  357912 api_server.go:72] duration metric: took 4m18.075855986s to wait for apiserver process to appear ...
	I1205 21:45:58.362609  357912 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:45:58.362658  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:58.362724  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:58.407526  357912 cri.go:89] found id: "079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:58.407559  357912 cri.go:89] found id: ""
	I1205 21:45:58.407571  357912 logs.go:282] 1 containers: [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828]
	I1205 21:45:58.407642  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.412133  357912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:58.412221  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:58.454243  357912 cri.go:89] found id: "035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:58.454280  357912 cri.go:89] found id: ""
	I1205 21:45:58.454292  357912 logs.go:282] 1 containers: [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7]
	I1205 21:45:58.454381  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.458950  357912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:58.459038  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:58.502502  357912 cri.go:89] found id: "d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:58.502527  357912 cri.go:89] found id: ""
	I1205 21:45:58.502535  357912 logs.go:282] 1 containers: [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6]
	I1205 21:45:58.502595  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.506926  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:58.507012  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:58.548550  357912 cri.go:89] found id: "c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:58.548587  357912 cri.go:89] found id: ""
	I1205 21:45:58.548600  357912 logs.go:282] 1 containers: [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5]
	I1205 21:45:58.548670  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.553797  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:58.553886  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:58.595353  357912 cri.go:89] found id: "963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:58.595389  357912 cri.go:89] found id: ""
	I1205 21:45:58.595401  357912 logs.go:282] 1 containers: [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d]
	I1205 21:45:58.595471  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.599759  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:58.599856  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:58.645942  357912 cri.go:89] found id: "807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:58.645979  357912 cri.go:89] found id: ""
	I1205 21:45:58.645991  357912 logs.go:282] 1 containers: [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a]
	I1205 21:45:58.646059  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.650416  357912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:58.650502  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:58.688459  357912 cri.go:89] found id: ""
	I1205 21:45:58.688491  357912 logs.go:282] 0 containers: []
	W1205 21:45:58.688504  357912 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:58.688520  357912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 21:45:58.688593  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 21:45:58.723421  357912 cri.go:89] found id: "7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:58.723454  357912 cri.go:89] found id: "37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:58.723461  357912 cri.go:89] found id: ""
	I1205 21:45:58.723471  357912 logs.go:282] 2 containers: [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa]
	I1205 21:45:58.723539  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.728441  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.732583  357912 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:58.732610  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:45:58.843724  357912 logs.go:123] Gathering logs for kube-apiserver [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828] ...
	I1205 21:45:58.843765  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:58.887836  357912 logs.go:123] Gathering logs for etcd [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7] ...
	I1205 21:45:58.887879  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:58.932909  357912 logs.go:123] Gathering logs for storage-provisioner [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4] ...
	I1205 21:45:58.932951  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:58.967559  357912 logs.go:123] Gathering logs for container status ...
	I1205 21:45:58.967613  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:59.006895  357912 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:59.006939  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:59.446512  357912 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:59.446573  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:59.518754  357912 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:59.518807  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:59.533621  357912 logs.go:123] Gathering logs for coredns [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6] ...
	I1205 21:45:59.533656  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:59.569589  357912 logs.go:123] Gathering logs for kube-scheduler [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5] ...
	I1205 21:45:59.569630  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:59.606973  357912 logs.go:123] Gathering logs for kube-proxy [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d] ...
	I1205 21:45:59.607028  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:59.651826  357912 logs.go:123] Gathering logs for kube-controller-manager [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a] ...
	I1205 21:45:59.651862  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:59.712309  357912 logs.go:123] Gathering logs for storage-provisioner [37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa] ...
	I1205 21:45:59.712353  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:58.480831  358357 out.go:235]   - Booting up control plane ...
	I1205 21:45:58.480991  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:45:58.495549  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:45:58.497073  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:45:58.498469  358357 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:45:58.501265  358357 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 21:45:59.112080  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:01.608016  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:58.985164  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:01.485724  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:02.247604  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:46:02.253579  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 200:
	ok
	I1205 21:46:02.254645  357912 api_server.go:141] control plane version: v1.31.2
	I1205 21:46:02.254674  357912 api_server.go:131] duration metric: took 3.892057076s to wait for apiserver health ...
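
The api_server.go:253 check above is a plain HTTPS GET against the apiserver's /healthz endpoint; a 200 response with body "ok" ends the wait. A minimal sketch of that probe, using the address from the log and skipping TLS verification in place of the cluster CA handling the real client performs, so it is only illustrative:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                // The real check trusts the cluster CA; skipping verification here
                // keeps the sketch self-contained.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.39.106:8444/healthz")
        if err != nil {
            fmt.Println("healthz check failed:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }
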
	I1205 21:46:02.254685  357912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:46:02.254718  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:46:02.254784  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:46:02.292102  357912 cri.go:89] found id: "079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:46:02.292133  357912 cri.go:89] found id: ""
	I1205 21:46:02.292143  357912 logs.go:282] 1 containers: [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828]
	I1205 21:46:02.292210  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.297421  357912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:46:02.297522  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:46:02.333140  357912 cri.go:89] found id: "035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:46:02.333172  357912 cri.go:89] found id: ""
	I1205 21:46:02.333184  357912 logs.go:282] 1 containers: [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7]
	I1205 21:46:02.333258  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.337789  357912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:46:02.337870  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:46:02.374302  357912 cri.go:89] found id: "d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:46:02.374332  357912 cri.go:89] found id: ""
	I1205 21:46:02.374344  357912 logs.go:282] 1 containers: [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6]
	I1205 21:46:02.374411  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.378635  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:46:02.378704  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:46:02.415899  357912 cri.go:89] found id: "c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:46:02.415932  357912 cri.go:89] found id: ""
	I1205 21:46:02.415944  357912 logs.go:282] 1 containers: [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5]
	I1205 21:46:02.416010  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.421097  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:46:02.421180  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:46:02.457483  357912 cri.go:89] found id: "963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:46:02.457514  357912 cri.go:89] found id: ""
	I1205 21:46:02.457534  357912 logs.go:282] 1 containers: [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d]
	I1205 21:46:02.457606  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.462215  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:46:02.462307  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:46:02.499576  357912 cri.go:89] found id: "807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:46:02.499603  357912 cri.go:89] found id: ""
	I1205 21:46:02.499612  357912 logs.go:282] 1 containers: [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a]
	I1205 21:46:02.499681  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.504262  357912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:46:02.504341  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:46:02.539612  357912 cri.go:89] found id: ""
	I1205 21:46:02.539649  357912 logs.go:282] 0 containers: []
	W1205 21:46:02.539661  357912 logs.go:284] No container was found matching "kindnet"
	I1205 21:46:02.539668  357912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 21:46:02.539740  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 21:46:02.576436  357912 cri.go:89] found id: "7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:46:02.576464  357912 cri.go:89] found id: "37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:46:02.576468  357912 cri.go:89] found id: ""
	I1205 21:46:02.576477  357912 logs.go:282] 2 containers: [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa]
	I1205 21:46:02.576546  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.580650  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.584677  357912 logs.go:123] Gathering logs for kube-controller-manager [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a] ...
	I1205 21:46:02.584717  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:46:02.638712  357912 logs.go:123] Gathering logs for storage-provisioner [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4] ...
	I1205 21:46:02.638753  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:46:02.677464  357912 logs.go:123] Gathering logs for storage-provisioner [37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa] ...
	I1205 21:46:02.677501  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:46:02.718014  357912 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:46:02.718049  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:46:02.828314  357912 logs.go:123] Gathering logs for kube-apiserver [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828] ...
	I1205 21:46:02.828360  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:46:02.881584  357912 logs.go:123] Gathering logs for etcd [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7] ...
	I1205 21:46:02.881629  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:46:02.928082  357912 logs.go:123] Gathering logs for coredns [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6] ...
	I1205 21:46:02.928120  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:46:02.963962  357912 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:46:02.963997  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:46:03.347451  357912 logs.go:123] Gathering logs for container status ...
	I1205 21:46:03.347501  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:46:03.389942  357912 logs.go:123] Gathering logs for kubelet ...
	I1205 21:46:03.389991  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:46:03.459121  357912 logs.go:123] Gathering logs for dmesg ...
	I1205 21:46:03.459168  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:46:03.480556  357912 logs.go:123] Gathering logs for kube-scheduler [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5] ...
	I1205 21:46:03.480592  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:46:03.519661  357912 logs.go:123] Gathering logs for kube-proxy [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d] ...
	I1205 21:46:03.519699  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:46:06.063263  357912 system_pods.go:59] 8 kube-system pods found
	I1205 21:46:06.063309  357912 system_pods.go:61] "coredns-7c65d6cfc9-mll8z" [fcea0826-1093-43ce-87d0-26fb19447609] Running
	I1205 21:46:06.063317  357912 system_pods.go:61] "etcd-default-k8s-diff-port-751353" [dbade41d-2f53-45d5-aeb6-9b7df2565ef6] Running
	I1205 21:46:06.063327  357912 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751353" [b94221d1-ab02-4406-9f34-156813ddfd4b] Running
	I1205 21:46:06.063334  357912 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751353" [70669ec2-cddf-4ec9-a3d6-ee7cab8cd75c] Running
	I1205 21:46:06.063338  357912 system_pods.go:61] "kube-proxy-b4ws4" [d2620959-e3e4-4575-af26-243207a83495] Running
	I1205 21:46:06.063344  357912 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751353" [4a519b64-0ddd-425c-a8d6-de52e3508f80] Running
	I1205 21:46:06.063352  357912 system_pods.go:61] "metrics-server-6867b74b74-xb867" [6ac4cc31-ed56-44b9-9a83-76296436bc34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:46:06.063358  357912 system_pods.go:61] "storage-provisioner" [aabf9cc9-c416-4db2-97b0-23533dd76c28] Running
	I1205 21:46:06.063369  357912 system_pods.go:74] duration metric: took 3.808675994s to wait for pod list to return data ...
	I1205 21:46:06.063380  357912 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:46:06.066095  357912 default_sa.go:45] found service account: "default"
	I1205 21:46:06.066120  357912 default_sa.go:55] duration metric: took 2.733262ms for default service account to be created ...
	I1205 21:46:06.066128  357912 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:46:06.070476  357912 system_pods.go:86] 8 kube-system pods found
	I1205 21:46:06.070503  357912 system_pods.go:89] "coredns-7c65d6cfc9-mll8z" [fcea0826-1093-43ce-87d0-26fb19447609] Running
	I1205 21:46:06.070509  357912 system_pods.go:89] "etcd-default-k8s-diff-port-751353" [dbade41d-2f53-45d5-aeb6-9b7df2565ef6] Running
	I1205 21:46:06.070513  357912 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-751353" [b94221d1-ab02-4406-9f34-156813ddfd4b] Running
	I1205 21:46:06.070516  357912 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-751353" [70669ec2-cddf-4ec9-a3d6-ee7cab8cd75c] Running
	I1205 21:46:06.070520  357912 system_pods.go:89] "kube-proxy-b4ws4" [d2620959-e3e4-4575-af26-243207a83495] Running
	I1205 21:46:06.070523  357912 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-751353" [4a519b64-0ddd-425c-a8d6-de52e3508f80] Running
	I1205 21:46:06.070531  357912 system_pods.go:89] "metrics-server-6867b74b74-xb867" [6ac4cc31-ed56-44b9-9a83-76296436bc34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:46:06.070536  357912 system_pods.go:89] "storage-provisioner" [aabf9cc9-c416-4db2-97b0-23533dd76c28] Running
	I1205 21:46:06.070544  357912 system_pods.go:126] duration metric: took 4.410448ms to wait for k8s-apps to be running ...
	I1205 21:46:06.070553  357912 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:46:06.070614  357912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:46:06.085740  357912 system_svc.go:56] duration metric: took 15.17952ms WaitForService to wait for kubelet
	I1205 21:46:06.085771  357912 kubeadm.go:582] duration metric: took 4m25.799061755s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:46:06.085796  357912 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:46:06.088851  357912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:46:06.088873  357912 node_conditions.go:123] node cpu capacity is 2
	I1205 21:46:06.088887  357912 node_conditions.go:105] duration metric: took 3.087287ms to run NodePressure ...
	I1205 21:46:06.088900  357912 start.go:241] waiting for startup goroutines ...
	I1205 21:46:06.088906  357912 start.go:246] waiting for cluster config update ...
	I1205 21:46:06.088919  357912 start.go:255] writing updated cluster config ...
	I1205 21:46:06.089253  357912 ssh_runner.go:195] Run: rm -f paused
	I1205 21:46:06.141619  357912 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:46:06.143538  357912 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-751353" cluster and "default" namespace by default
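Before printing "Done!", the run above verifies the cluster the same way on every start: it lists the kube-system pods, confirms the default service account exists, and checks that the kubelet service is active. A minimal client-go sketch of that kind of kube-system pod check, offered only as an illustration (the kubeconfig path and exit-on-error handling are assumptions, not minikube's actual implementation):

	// List kube-system pods and report any that are not yet Running.
	package main

	import (
		"context"
		"fmt"
		"log"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; minikube uses its own profile-scoped path.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			log.Fatal(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("%s: %s\n", p.Name, p.Status.Phase)
			}
		}
	}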
	I1205 21:46:04.108628  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:06.108805  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:03.987070  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:06.484360  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:08.608534  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:11.107516  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:08.485291  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:10.984391  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:13.108040  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:15.607861  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:13.484442  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:15.484501  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:17.478619  357831 pod_ready.go:82] duration metric: took 4m0.00079651s for pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace to be "Ready" ...
	E1205 21:46:17.478648  357831 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 21:46:17.478669  357831 pod_ready.go:39] duration metric: took 4m12.054745084s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:46:17.478700  357831 kubeadm.go:597] duration metric: took 4m55.174067413s to restartPrimaryControlPlane
	W1205 21:46:17.478757  357831 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 21:46:17.478794  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:46:17.608486  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:20.107816  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:22.108413  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:24.608157  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:27.109329  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:29.608127  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:30.101360  357296 pod_ready.go:82] duration metric: took 4m0.000121506s for pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace to be "Ready" ...
	E1205 21:46:30.101395  357296 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 21:46:30.101417  357296 pod_ready.go:39] duration metric: took 4m9.523665884s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:46:30.101449  357296 kubeadm.go:597] duration metric: took 4m18.570527556s to restartPrimaryControlPlane
	W1205 21:46:30.101510  357296 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 21:46:30.101539  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
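Both of the 4m0s waits that time out above are polling the metrics-server pod's Ready condition, not just its phase, so a pod can be Running and still fail the wait. A small helper in that spirit (package name is illustrative; this sketches the kind of check performed, not the pod_ready.go source):

	package podwait

	import corev1 "k8s.io/api/core/v1"

	// Ready reports whether a pod's status carries a Ready condition set to True.
	// The metrics-server pods in the log never reach this state within the budget.
	func Ready(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}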
	I1205 21:46:38.501720  358357 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 21:46:38.502250  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:46:38.502440  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:46:43.619373  357831 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.140547336s)
	I1205 21:46:43.619459  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:46:43.641806  357831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:46:43.655964  357831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:46:43.669647  357831 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:46:43.669670  357831 kubeadm.go:157] found existing configuration files:
	
	I1205 21:46:43.669718  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:46:43.681685  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:46:43.681774  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:46:43.700247  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:46:43.718376  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:46:43.718464  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:46:43.736153  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:46:43.746027  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:46:43.746101  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:46:43.756294  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:46:43.765644  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:46:43.765723  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:46:43.776011  357831 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:46:43.821666  357831 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 21:46:43.821773  357831 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:46:43.915091  357831 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:46:43.915226  357831 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:46:43.915356  357831 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 21:46:43.923305  357831 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:46:43.924984  357831 out.go:235]   - Generating certificates and keys ...
	I1205 21:46:43.925071  357831 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:46:43.925133  357831 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:46:43.925211  357831 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:46:43.925298  357831 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:46:43.925410  357831 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:46:43.925490  357831 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:46:43.925585  357831 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:46:43.925687  357831 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:46:43.925806  357831 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:46:43.925915  357831 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:46:43.925978  357831 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:46:43.926051  357831 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:46:44.035421  357831 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:46:44.451260  357831 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 21:46:44.816773  357831 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:46:44.923048  357831 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:46:45.045983  357831 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:46:45.046651  357831 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:46:45.049375  357831 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:46:43.502826  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:46:43.503045  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:46:45.051123  357831 out.go:235]   - Booting up control plane ...
	I1205 21:46:45.051270  357831 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:46:45.051407  357831 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:46:45.051498  357831 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:46:45.069011  357831 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:46:45.075630  357831 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:46:45.075703  357831 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:46:45.207048  357831 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 21:46:45.207215  357831 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 21:46:46.208858  357831 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001818315s
	I1205 21:46:46.208985  357831 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 21:46:50.711424  357831 kubeadm.go:310] [api-check] The API server is healthy after 4.502481614s
	I1205 21:46:50.725080  357831 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 21:46:50.745839  357831 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 21:46:50.774902  357831 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 21:46:50.775169  357831 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-500648 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 21:46:50.795250  357831 kubeadm.go:310] [bootstrap-token] Using token: o2vi7b.yhkmrcpvplzqpha9
	I1205 21:46:50.796742  357831 out.go:235]   - Configuring RBAC rules ...
	I1205 21:46:50.796960  357831 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 21:46:50.804445  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 21:46:50.818218  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 21:46:50.823638  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 21:46:50.827946  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 21:46:50.832291  357831 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 21:46:51.119777  357831 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 21:46:51.563750  357831 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 21:46:52.124884  357831 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 21:46:52.124922  357831 kubeadm.go:310] 
	I1205 21:46:52.125000  357831 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 21:46:52.125010  357831 kubeadm.go:310] 
	I1205 21:46:52.125089  357831 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 21:46:52.125099  357831 kubeadm.go:310] 
	I1205 21:46:52.125132  357831 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 21:46:52.125208  357831 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 21:46:52.125321  357831 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 21:46:52.125343  357831 kubeadm.go:310] 
	I1205 21:46:52.125447  357831 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 21:46:52.125475  357831 kubeadm.go:310] 
	I1205 21:46:52.125547  357831 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 21:46:52.125559  357831 kubeadm.go:310] 
	I1205 21:46:52.125641  357831 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 21:46:52.125734  357831 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 21:46:52.125806  357831 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 21:46:52.125814  357831 kubeadm.go:310] 
	I1205 21:46:52.125887  357831 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 21:46:52.126025  357831 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 21:46:52.126039  357831 kubeadm.go:310] 
	I1205 21:46:52.126132  357831 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token o2vi7b.yhkmrcpvplzqpha9 \
	I1205 21:46:52.126230  357831 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 \
	I1205 21:46:52.126254  357831 kubeadm.go:310] 	--control-plane 
	I1205 21:46:52.126269  357831 kubeadm.go:310] 
	I1205 21:46:52.126406  357831 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 21:46:52.126437  357831 kubeadm.go:310] 
	I1205 21:46:52.126524  357831 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token o2vi7b.yhkmrcpvplzqpha9 \
	I1205 21:46:52.126615  357831 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 
	I1205 21:46:52.127299  357831 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:46:52.127360  357831 cni.go:84] Creating CNI manager for ""
	I1205 21:46:52.127380  357831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:46:52.130084  357831 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:46:52.131504  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:46:52.142489  357831 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 21:46:52.165689  357831 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:46:52.165813  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:52.165817  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-500648 minikube.k8s.io/updated_at=2024_12_05T21_46_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=no-preload-500648 minikube.k8s.io/primary=true
	I1205 21:46:52.194084  357831 ops.go:34] apiserver oom_adj: -16
	I1205 21:46:52.342692  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:52.843802  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:53.503222  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:46:53.503418  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:46:53.342932  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:53.843712  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:54.343785  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:54.843090  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:55.342889  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:55.843250  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:56.343676  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:56.452001  357831 kubeadm.go:1113] duration metric: took 4.286277257s to wait for elevateKubeSystemPrivileges
	I1205 21:46:56.452048  357831 kubeadm.go:394] duration metric: took 5m34.195010212s to StartCluster
	I1205 21:46:56.452076  357831 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:46:56.452204  357831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:46:56.454793  357831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:46:56.455206  357831 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:46:56.455333  357831 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:46:56.455476  357831 addons.go:69] Setting storage-provisioner=true in profile "no-preload-500648"
	I1205 21:46:56.455480  357831 config.go:182] Loaded profile config "no-preload-500648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:46:56.455502  357831 addons.go:234] Setting addon storage-provisioner=true in "no-preload-500648"
	W1205 21:46:56.455514  357831 addons.go:243] addon storage-provisioner should already be in state true
	I1205 21:46:56.455528  357831 addons.go:69] Setting default-storageclass=true in profile "no-preload-500648"
	I1205 21:46:56.455559  357831 host.go:66] Checking if "no-preload-500648" exists ...
	I1205 21:46:56.455544  357831 addons.go:69] Setting metrics-server=true in profile "no-preload-500648"
	I1205 21:46:56.455585  357831 addons.go:234] Setting addon metrics-server=true in "no-preload-500648"
	W1205 21:46:56.455599  357831 addons.go:243] addon metrics-server should already be in state true
	I1205 21:46:56.455646  357831 host.go:66] Checking if "no-preload-500648" exists ...
	I1205 21:46:56.455564  357831 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-500648"
	I1205 21:46:56.456041  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.456085  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.456090  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.456129  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.456139  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.456201  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.456945  357831 out.go:177] * Verifying Kubernetes components...
	I1205 21:46:56.462035  357831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:46:56.474102  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35145
	I1205 21:46:56.474771  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.475414  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.475442  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.475459  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36489
	I1205 21:46:56.475974  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.476137  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.476569  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.476612  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.476693  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.476706  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.477058  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.477252  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.477388  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36469
	I1205 21:46:56.477924  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.478472  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.478498  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.478910  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.479488  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.479537  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.481716  357831 addons.go:234] Setting addon default-storageclass=true in "no-preload-500648"
	W1205 21:46:56.481735  357831 addons.go:243] addon default-storageclass should already be in state true
	I1205 21:46:56.481768  357831 host.go:66] Checking if "no-preload-500648" exists ...
	I1205 21:46:56.482186  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.482241  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.497613  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
	I1205 21:46:56.499026  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.500026  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.500053  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.501992  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.502774  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.503014  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37339
	I1205 21:46:56.503560  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.504199  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.504220  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.504720  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.504930  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.506107  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:46:56.506961  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:46:56.508481  357831 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:46:56.509688  357831 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 21:46:56.428849  357296 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.327265456s)
	I1205 21:46:56.428959  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:46:56.445569  357296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:46:56.458431  357296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:46:56.478171  357296 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:46:56.478202  357296 kubeadm.go:157] found existing configuration files:
	
	I1205 21:46:56.478252  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:46:56.492246  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:46:56.492317  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:46:56.511252  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:46:56.529865  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:46:56.529993  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:46:56.542465  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:46:56.554125  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:46:56.554201  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:46:56.564805  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:46:56.574418  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:46:56.574509  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:46:56.587684  357296 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:46:56.643896  357296 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 21:46:56.643994  357296 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:46:56.758721  357296 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:46:56.758878  357296 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:46:56.759002  357296 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 21:46:56.770017  357296 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:46:56.771897  357296 out.go:235]   - Generating certificates and keys ...
	I1205 21:46:56.772014  357296 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:46:56.772097  357296 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:46:56.772211  357296 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:46:56.772312  357296 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:46:56.772411  357296 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:46:56.772485  357296 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:46:56.772569  357296 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:46:56.772701  357296 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:46:56.772839  357296 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:46:56.772978  357296 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:46:56.773044  357296 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:46:56.773122  357296 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:46:57.097605  357296 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:46:57.252307  357296 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 21:46:56.510816  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41335
	I1205 21:46:56.511503  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.511959  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.511975  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.512788  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.513412  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.513449  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.514695  357831 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:46:56.514710  357831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 21:46:56.514728  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:46:56.515562  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 21:46:56.515580  357831 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 21:46:56.515606  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:46:56.519790  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.520365  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.521033  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:46:56.521059  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.521366  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:46:56.521709  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:46:56.522251  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:46:56.522340  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:46:56.522357  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.522563  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:46:56.523091  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:46:56.523374  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:46:56.523546  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:46:56.523751  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:46:56.535368  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I1205 21:46:56.535890  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.536613  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.536640  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.537046  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.537264  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.539328  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:46:56.539566  357831 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 21:46:56.539582  357831 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 21:46:56.539601  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:46:56.543910  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.544687  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:46:56.544721  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.544779  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:46:56.544991  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:46:56.545101  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:46:56.545227  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:46:56.703959  357831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:46:56.727549  357831 node_ready.go:35] waiting up to 6m0s for node "no-preload-500648" to be "Ready" ...
	I1205 21:46:56.782087  357831 node_ready.go:49] node "no-preload-500648" has status "Ready":"True"
	I1205 21:46:56.782124  357831 node_ready.go:38] duration metric: took 54.531096ms for node "no-preload-500648" to be "Ready" ...
	I1205 21:46:56.782138  357831 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:46:56.826592  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 21:46:56.826630  357831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 21:46:56.828646  357831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 21:46:56.829857  357831 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace to be "Ready" ...
	I1205 21:46:56.866720  357831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:46:56.903318  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 21:46:56.903355  357831 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 21:46:57.007535  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:46:57.007573  357831 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 21:46:57.100723  357831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:46:57.134239  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.134279  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.134710  357831 main.go:141] libmachine: (no-preload-500648) DBG | Closing plugin on server side
	I1205 21:46:57.134711  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.134770  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.134785  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.134793  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.135032  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.135053  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.146695  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.146730  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.147103  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.147154  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.625311  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.625353  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.625696  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.625755  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.625793  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.625805  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.625698  357831 main.go:141] libmachine: (no-preload-500648) DBG | Closing plugin on server side
	I1205 21:46:57.626115  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.626144  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.907526  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.907557  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.907895  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.907911  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.907920  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.907927  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.908170  357831 main.go:141] libmachine: (no-preload-500648) DBG | Closing plugin on server side
	I1205 21:46:57.908202  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.908235  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.908260  357831 addons.go:475] Verifying addon metrics-server=true in "no-preload-500648"
	I1205 21:46:57.909815  357831 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I1205 21:46:57.605825  357296 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:46:57.683035  357296 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:46:57.977494  357296 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:46:57.977852  357296 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:46:57.980442  357296 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:46:57.982293  357296 out.go:235]   - Booting up control plane ...
	I1205 21:46:57.982435  357296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:46:57.982555  357296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:46:57.982745  357296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:46:58.002995  357296 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:46:58.009140  357296 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:46:58.009256  357296 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:46:58.138869  357296 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 21:46:58.139045  357296 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 21:46:58.639981  357296 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.388842ms
	I1205 21:46:58.640142  357296 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
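The kubelet health probe above targets the kubelet's local healthz endpoint on 127.0.0.1:10248. If that step ever needs to be reproduced by hand, the equivalent check, run on the node itself (for example via minikube ssh), is simply:

	curl -sSL http://localhost:10248/healthz

This is the same HTTP call that kubeadm quotes verbatim in its failure messages further down in this log.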
	I1205 21:46:57.911073  357831 addons.go:510] duration metric: took 1.455746374s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 21:46:58.838170  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:00.337951  357831 pod_ready.go:93] pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:00.337987  357831 pod_ready.go:82] duration metric: took 3.508095495s for pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:00.338002  357831 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:02.345422  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:03.641918  357296 kubeadm.go:310] [api-check] The API server is healthy after 5.001977261s
	I1205 21:47:03.660781  357296 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 21:47:03.675811  357296 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 21:47:03.729810  357296 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 21:47:03.730021  357296 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-425614 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 21:47:03.746963  357296 kubeadm.go:310] [bootstrap-token] Using token: b8c9g8.26tr6ftn8ovs2kwi
	I1205 21:47:03.748213  357296 out.go:235]   - Configuring RBAC rules ...
	I1205 21:47:03.748373  357296 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 21:47:03.755934  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 21:47:03.770479  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 21:47:03.775661  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 21:47:03.783490  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 21:47:03.789562  357296 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 21:47:04.049714  357296 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 21:47:04.486306  357296 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 21:47:05.053561  357296 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 21:47:05.053590  357296 kubeadm.go:310] 
	I1205 21:47:05.053708  357296 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 21:47:05.053738  357296 kubeadm.go:310] 
	I1205 21:47:05.053846  357296 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 21:47:05.053868  357296 kubeadm.go:310] 
	I1205 21:47:05.053915  357296 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 21:47:05.053997  357296 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 21:47:05.054068  357296 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 21:47:05.054078  357296 kubeadm.go:310] 
	I1205 21:47:05.054160  357296 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 21:47:05.054170  357296 kubeadm.go:310] 
	I1205 21:47:05.054239  357296 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 21:47:05.054248  357296 kubeadm.go:310] 
	I1205 21:47:05.054338  357296 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 21:47:05.054449  357296 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 21:47:05.054543  357296 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 21:47:05.054553  357296 kubeadm.go:310] 
	I1205 21:47:05.054660  357296 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 21:47:05.054796  357296 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 21:47:05.054822  357296 kubeadm.go:310] 
	I1205 21:47:05.054933  357296 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token b8c9g8.26tr6ftn8ovs2kwi \
	I1205 21:47:05.055054  357296 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 \
	I1205 21:47:05.055090  357296 kubeadm.go:310] 	--control-plane 
	I1205 21:47:05.055098  357296 kubeadm.go:310] 
	I1205 21:47:05.055194  357296 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 21:47:05.055206  357296 kubeadm.go:310] 
	I1205 21:47:05.055314  357296 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token b8c9g8.26tr6ftn8ovs2kwi \
	I1205 21:47:05.055451  357296 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 
	I1205 21:47:05.056406  357296 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
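The kubeadm join commands printed above embed a bootstrap token (b8c9g8.26tr6ftn8ovs2kwi) and the cluster's CA certificate hash. Bootstrap tokens are short-lived, so joining a node later than this run would normally start by generating a fresh command on the control-plane node, for example:

	kubeadm token create --print-join-command

This is standard kubeadm usage shown for reference only; it is not part of the recorded test run.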
	I1205 21:47:05.056455  357296 cni.go:84] Creating CNI manager for ""
	I1205 21:47:05.056466  357296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:47:05.058934  357296 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:47:05.060223  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:47:05.072177  357296 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
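minikube generates the bridge CNI configuration in memory and copies it to /etc/cni/net.d/1-k8s.conflist on the node; the contents of the 496-byte file are not included in this log. To inspect what was actually written, one could run something like the following (profile name taken from this log; exact flags may differ by minikube version):

	minikube ssh -p embed-certs-425614 -- sudo cat /etc/cni/net.d/1-k8s.conflist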
	I1205 21:47:05.094496  357296 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:47:05.094587  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:05.094625  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-425614 minikube.k8s.io/updated_at=2024_12_05T21_47_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=embed-certs-425614 minikube.k8s.io/primary=true
	I1205 21:47:05.305636  357296 ops.go:34] apiserver oom_adj: -16
	I1205 21:47:05.305777  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:05.806175  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:06.306904  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:06.806069  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:07.306356  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:04.849777  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:07.345961  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:07.847289  357831 pod_ready.go:93] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.847323  357831 pod_ready.go:82] duration metric: took 7.509312906s for pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.847334  357831 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.853980  357831 pod_ready.go:93] pod "etcd-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.854016  357831 pod_ready.go:82] duration metric: took 6.672926ms for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.854030  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.861465  357831 pod_ready.go:93] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.861502  357831 pod_ready.go:82] duration metric: took 7.461726ms for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.861517  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.867007  357831 pod_ready.go:93] pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.867035  357831 pod_ready.go:82] duration metric: took 5.509386ms for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.867048  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-98xqk" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.872882  357831 pod_ready.go:93] pod "kube-proxy-98xqk" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.872917  357831 pod_ready.go:82] duration metric: took 5.859646ms for pod "kube-proxy-98xqk" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.872932  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:08.243619  357831 pod_ready.go:93] pod "kube-scheduler-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:08.243654  357831 pod_ready.go:82] duration metric: took 370.71203ms for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:08.243666  357831 pod_ready.go:39] duration metric: took 11.461510993s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:47:08.243744  357831 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:47:08.243826  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:47:08.260473  357831 api_server.go:72] duration metric: took 11.805209892s to wait for apiserver process to appear ...
	I1205 21:47:08.260511  357831 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:47:08.260538  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:47:08.264975  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 200:
	ok
	I1205 21:47:08.266178  357831 api_server.go:141] control plane version: v1.31.2
	I1205 21:47:08.266206  357831 api_server.go:131] duration metric: took 5.687994ms to wait for apiserver health ...
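The healthz probe above hits the apiserver directly at https://192.168.50.141:8443/healthz and receives a plain "ok". Assuming the default anonymous access to /healthz is in place, the same check can be reproduced from the host either against the raw endpoint or through kubectl once the context exists (context name taken from the end of this cluster's startup below):

	curl -k https://192.168.50.141:8443/healthz
	kubectl --context no-preload-500648 get --raw /healthz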
	I1205 21:47:08.266214  357831 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:47:08.446775  357831 system_pods.go:59] 9 kube-system pods found
	I1205 21:47:08.446811  357831 system_pods.go:61] "coredns-7c65d6cfc9-6gw87" [5551f12d-28e2-4abc-aa12-df5e94a50df9] Running
	I1205 21:47:08.446817  357831 system_pods.go:61] "coredns-7c65d6cfc9-tmd2t" [e3e98611-66c3-4647-8870-bff5ff6ec596] Running
	I1205 21:47:08.446821  357831 system_pods.go:61] "etcd-no-preload-500648" [74521d40-5021-4ced-b38c-526c57f76ef1] Running
	I1205 21:47:08.446824  357831 system_pods.go:61] "kube-apiserver-no-preload-500648" [c145b867-1112-495e-bbe4-a95582f41190] Running
	I1205 21:47:08.446828  357831 system_pods.go:61] "kube-controller-manager-no-preload-500648" [534c1c28-2a5c-411d-8d26-1636d92ed794] Running
	I1205 21:47:08.446831  357831 system_pods.go:61] "kube-proxy-98xqk" [4b383ba3-46c2-45df-9035-270593e44817] Running
	I1205 21:47:08.446834  357831 system_pods.go:61] "kube-scheduler-no-preload-500648" [7d088cd2-8ba3-4b3b-ab99-233ff13e2710] Running
	I1205 21:47:08.446841  357831 system_pods.go:61] "metrics-server-6867b74b74-ftmzl" [c541d531-1622-4528-af4c-f6147f47e8f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:08.446881  357831 system_pods.go:61] "storage-provisioner" [62bd3876-3f92-4cc1-9e07-860628e8a746] Running
	I1205 21:47:08.446887  357831 system_pods.go:74] duration metric: took 180.667886ms to wait for pod list to return data ...
	I1205 21:47:08.446895  357831 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:47:08.643352  357831 default_sa.go:45] found service account: "default"
	I1205 21:47:08.643389  357831 default_sa.go:55] duration metric: took 196.485646ms for default service account to be created ...
	I1205 21:47:08.643405  357831 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:47:08.847094  357831 system_pods.go:86] 9 kube-system pods found
	I1205 21:47:08.847129  357831 system_pods.go:89] "coredns-7c65d6cfc9-6gw87" [5551f12d-28e2-4abc-aa12-df5e94a50df9] Running
	I1205 21:47:08.847136  357831 system_pods.go:89] "coredns-7c65d6cfc9-tmd2t" [e3e98611-66c3-4647-8870-bff5ff6ec596] Running
	I1205 21:47:08.847140  357831 system_pods.go:89] "etcd-no-preload-500648" [74521d40-5021-4ced-b38c-526c57f76ef1] Running
	I1205 21:47:08.847144  357831 system_pods.go:89] "kube-apiserver-no-preload-500648" [c145b867-1112-495e-bbe4-a95582f41190] Running
	I1205 21:47:08.847147  357831 system_pods.go:89] "kube-controller-manager-no-preload-500648" [534c1c28-2a5c-411d-8d26-1636d92ed794] Running
	I1205 21:47:08.847150  357831 system_pods.go:89] "kube-proxy-98xqk" [4b383ba3-46c2-45df-9035-270593e44817] Running
	I1205 21:47:08.847153  357831 system_pods.go:89] "kube-scheduler-no-preload-500648" [7d088cd2-8ba3-4b3b-ab99-233ff13e2710] Running
	I1205 21:47:08.847162  357831 system_pods.go:89] "metrics-server-6867b74b74-ftmzl" [c541d531-1622-4528-af4c-f6147f47e8f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:08.847168  357831 system_pods.go:89] "storage-provisioner" [62bd3876-3f92-4cc1-9e07-860628e8a746] Running
	I1205 21:47:08.847181  357831 system_pods.go:126] duration metric: took 203.767291ms to wait for k8s-apps to be running ...
	I1205 21:47:08.847195  357831 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:47:08.847250  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:47:08.862597  357831 system_svc.go:56] duration metric: took 15.382518ms WaitForService to wait for kubelet
	I1205 21:47:08.862633  357831 kubeadm.go:582] duration metric: took 12.407380073s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:47:08.862656  357831 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:47:09.043731  357831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:47:09.043757  357831 node_conditions.go:123] node cpu capacity is 2
	I1205 21:47:09.043771  357831 node_conditions.go:105] duration metric: took 181.109771ms to run NodePressure ...
	I1205 21:47:09.043784  357831 start.go:241] waiting for startup goroutines ...
	I1205 21:47:09.043791  357831 start.go:246] waiting for cluster config update ...
	I1205 21:47:09.043800  357831 start.go:255] writing updated cluster config ...
	I1205 21:47:09.044059  357831 ssh_runner.go:195] Run: rm -f paused
	I1205 21:47:09.097126  357831 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:47:09.098929  357831 out.go:177] * Done! kubectl is now configured to use "no-preload-500648" cluster and "default" namespace by default
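Note that at the moment this profile reports "Done!", the metrics-server pod (metrics-server-6867b74b74-ftmzl) is still Pending in the pod listings above; the control-plane pods, CoreDNS, kube-proxy and storage-provisioner are Running. A quick way to re-check from the host, using the context configured above:

	kubectl --context no-preload-500648 -n kube-system get pods -o wide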
	I1205 21:47:07.806545  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:08.306666  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:08.806027  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:09.306632  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:09.463654  357296 kubeadm.go:1113] duration metric: took 4.369155567s to wait for elevateKubeSystemPrivileges
	I1205 21:47:09.463693  357296 kubeadm.go:394] duration metric: took 4m57.985307568s to StartCluster
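The repeated `kubectl get sa default` invocations above appear to poll until the default service account exists; that wait is what the elevateKubeSystemPrivileges step just reported as taking roughly 4.4s. An equivalent one-off check from the host, using the context written to the kubeconfig below, would be:

	kubectl --context embed-certs-425614 -n default get serviceaccount default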
	I1205 21:47:09.463727  357296 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:47:09.463823  357296 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:47:09.465989  357296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:47:09.466324  357296 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:47:09.466538  357296 config.go:182] Loaded profile config "embed-certs-425614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:47:09.466462  357296 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:47:09.466593  357296 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-425614"
	I1205 21:47:09.466605  357296 addons.go:69] Setting default-storageclass=true in profile "embed-certs-425614"
	I1205 21:47:09.466623  357296 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-425614"
	I1205 21:47:09.466625  357296 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-425614"
	W1205 21:47:09.466632  357296 addons.go:243] addon storage-provisioner should already be in state true
	I1205 21:47:09.466670  357296 host.go:66] Checking if "embed-certs-425614" exists ...
	I1205 21:47:09.466598  357296 addons.go:69] Setting metrics-server=true in profile "embed-certs-425614"
	I1205 21:47:09.466700  357296 addons.go:234] Setting addon metrics-server=true in "embed-certs-425614"
	W1205 21:47:09.466713  357296 addons.go:243] addon metrics-server should already be in state true
	I1205 21:47:09.466754  357296 host.go:66] Checking if "embed-certs-425614" exists ...
	I1205 21:47:09.467117  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.467136  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.467168  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.467169  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.467193  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.467287  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.468249  357296 out.go:177] * Verifying Kubernetes components...
	I1205 21:47:09.471163  357296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:47:09.485298  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36079
	I1205 21:47:09.485497  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39781
	I1205 21:47:09.485948  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.486029  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.486534  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.486563  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.486657  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.486685  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.486742  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
	I1205 21:47:09.486978  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.487032  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.487232  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.487236  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.487624  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.487674  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.487789  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.487833  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.488214  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.488851  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.488896  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.491055  357296 addons.go:234] Setting addon default-storageclass=true in "embed-certs-425614"
	W1205 21:47:09.491080  357296 addons.go:243] addon default-storageclass should already be in state true
	I1205 21:47:09.491112  357296 host.go:66] Checking if "embed-certs-425614" exists ...
	I1205 21:47:09.491489  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.491536  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.505783  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42923
	I1205 21:47:09.506685  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.507389  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.507418  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.507849  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.508072  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.509039  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44837
	I1205 21:47:09.509662  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.510051  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:47:09.510539  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.510554  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.510945  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.511175  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.512088  357296 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 21:47:09.513011  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:47:09.513375  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 21:47:09.513394  357296 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 21:47:09.513411  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:47:09.514693  357296 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:47:09.516172  357296 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:47:09.516192  357296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 21:47:09.516216  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:47:09.516960  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.517462  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:47:09.517489  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.517621  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:47:09.517830  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45697
	I1205 21:47:09.518205  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:47:09.518478  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.519298  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.519323  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.519342  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:47:09.519547  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:47:09.520304  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.521019  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.521625  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.521698  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.522476  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:47:09.522492  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.522707  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:47:09.522891  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:47:09.523193  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:47:09.523744  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:47:09.540654  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41511
	I1205 21:47:09.541226  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.541763  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.541790  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.542269  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.542512  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.544396  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:47:09.544676  357296 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 21:47:09.544693  357296 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 21:47:09.544715  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:47:09.548238  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.548523  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:47:09.548562  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.548702  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:47:09.548931  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:47:09.549113  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:47:09.549291  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:47:09.668547  357296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:47:09.687925  357296 node_ready.go:35] waiting up to 6m0s for node "embed-certs-425614" to be "Ready" ...
	I1205 21:47:09.697641  357296 node_ready.go:49] node "embed-certs-425614" has status "Ready":"True"
	I1205 21:47:09.697666  357296 node_ready.go:38] duration metric: took 9.705064ms for node "embed-certs-425614" to be "Ready" ...
	I1205 21:47:09.697675  357296 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:47:09.704768  357296 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7sjzc" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:09.753311  357296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:47:09.793855  357296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 21:47:09.799918  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 21:47:09.799943  357296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 21:47:09.845109  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 21:47:09.845140  357296 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 21:47:09.910753  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:47:09.910784  357296 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 21:47:09.965476  357296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:47:10.269090  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269126  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.269096  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269235  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.269576  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.269640  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.269641  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.269620  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.269587  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.269745  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.269758  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269770  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.269664  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269860  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.270030  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.270047  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.270058  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.270064  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.270071  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.301524  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.301550  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.301895  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.301936  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.926349  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.926377  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.926716  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.926741  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.926752  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.926761  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.926768  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.927106  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.927155  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.927166  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.927180  357296 addons.go:475] Verifying addon metrics-server=true in "embed-certs-425614"
	I1205 21:47:10.929085  357296 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1205 21:47:10.930576  357296 addons.go:510] duration metric: took 1.464128267s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1205 21:47:11.713166  357296 pod_ready.go:93] pod "coredns-7c65d6cfc9-7sjzc" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:11.713198  357296 pod_ready.go:82] duration metric: took 2.008396953s for pod "coredns-7c65d6cfc9-7sjzc" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:11.713211  357296 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:13.503828  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:47:13.504090  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:47:13.720235  357296 pod_ready.go:103] pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:15.220057  357296 pod_ready.go:93] pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:15.220088  357296 pod_ready.go:82] duration metric: took 3.506868256s for pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.220102  357296 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.225450  357296 pod_ready.go:93] pod "etcd-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:15.225477  357296 pod_ready.go:82] duration metric: took 5.36753ms for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.225487  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.231162  357296 pod_ready.go:93] pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:15.231191  357296 pod_ready.go:82] duration metric: took 5.697176ms for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.231203  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.739452  357296 pod_ready.go:93] pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:16.739480  357296 pod_ready.go:82] duration metric: took 1.508268597s for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.739490  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k2zgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.745046  357296 pod_ready.go:93] pod "kube-proxy-k2zgx" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:16.745069  357296 pod_ready.go:82] duration metric: took 5.572779ms for pod "kube-proxy-k2zgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.745077  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:18.752726  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:19.252349  357296 pod_ready.go:93] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:19.252381  357296 pod_ready.go:82] duration metric: took 2.507297045s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:19.252391  357296 pod_ready.go:39] duration metric: took 9.554704391s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:47:19.252414  357296 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:47:19.252484  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:47:19.271589  357296 api_server.go:72] duration metric: took 9.805214037s to wait for apiserver process to appear ...
	I1205 21:47:19.271628  357296 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:47:19.271659  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:47:19.276411  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 200:
	ok
	I1205 21:47:19.277872  357296 api_server.go:141] control plane version: v1.31.2
	I1205 21:47:19.277926  357296 api_server.go:131] duration metric: took 6.2875ms to wait for apiserver health ...
	I1205 21:47:19.277941  357296 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:47:19.283899  357296 system_pods.go:59] 9 kube-system pods found
	I1205 21:47:19.283931  357296 system_pods.go:61] "coredns-7c65d6cfc9-7sjzc" [9688302a-e62f-46e6-8182-4639deb5ac5a] Running
	I1205 21:47:19.283937  357296 system_pods.go:61] "coredns-7c65d6cfc9-qfwx8" [d6411440-5d63-4ea4-b1ba-58337dd6bb10] Running
	I1205 21:47:19.283940  357296 system_pods.go:61] "etcd-embed-certs-425614" [2f0ed9d7-d48b-4d68-96bb-5e3f6de80967] Running
	I1205 21:47:19.283944  357296 system_pods.go:61] "kube-apiserver-embed-certs-425614" [86a3b6ce-6b70-4af9-bf4a-2615e7a45c3f] Running
	I1205 21:47:19.283947  357296 system_pods.go:61] "kube-controller-manager-embed-certs-425614" [589710e5-a8e3-48ed-a57a-1fbf0219359a] Running
	I1205 21:47:19.283952  357296 system_pods.go:61] "kube-proxy-k2zgx" [8e5c4695-0631-486d-9f2b-3529f6e808e9] Running
	I1205 21:47:19.283955  357296 system_pods.go:61] "kube-scheduler-embed-certs-425614" [dec1c4cb-9e21-42f0-9e03-0651fdfa35e9] Running
	I1205 21:47:19.283962  357296 system_pods.go:61] "metrics-server-6867b74b74-hghhs" [bc00b855-1cc8-45a1-92cb-b459ef0b40eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:19.283968  357296 system_pods.go:61] "storage-provisioner" [76565dbe-57b0-4d39-abb0-ca6787cd3740] Running
	I1205 21:47:19.283979  357296 system_pods.go:74] duration metric: took 6.030697ms to wait for pod list to return data ...
	I1205 21:47:19.283989  357296 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:47:19.287433  357296 default_sa.go:45] found service account: "default"
	I1205 21:47:19.287469  357296 default_sa.go:55] duration metric: took 3.461011ms for default service account to be created ...
	I1205 21:47:19.287482  357296 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:47:19.420448  357296 system_pods.go:86] 9 kube-system pods found
	I1205 21:47:19.420493  357296 system_pods.go:89] "coredns-7c65d6cfc9-7sjzc" [9688302a-e62f-46e6-8182-4639deb5ac5a] Running
	I1205 21:47:19.420503  357296 system_pods.go:89] "coredns-7c65d6cfc9-qfwx8" [d6411440-5d63-4ea4-b1ba-58337dd6bb10] Running
	I1205 21:47:19.420510  357296 system_pods.go:89] "etcd-embed-certs-425614" [2f0ed9d7-d48b-4d68-96bb-5e3f6de80967] Running
	I1205 21:47:19.420516  357296 system_pods.go:89] "kube-apiserver-embed-certs-425614" [86a3b6ce-6b70-4af9-bf4a-2615e7a45c3f] Running
	I1205 21:47:19.420531  357296 system_pods.go:89] "kube-controller-manager-embed-certs-425614" [589710e5-a8e3-48ed-a57a-1fbf0219359a] Running
	I1205 21:47:19.420536  357296 system_pods.go:89] "kube-proxy-k2zgx" [8e5c4695-0631-486d-9f2b-3529f6e808e9] Running
	I1205 21:47:19.420542  357296 system_pods.go:89] "kube-scheduler-embed-certs-425614" [dec1c4cb-9e21-42f0-9e03-0651fdfa35e9] Running
	I1205 21:47:19.420551  357296 system_pods.go:89] "metrics-server-6867b74b74-hghhs" [bc00b855-1cc8-45a1-92cb-b459ef0b40eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:19.420560  357296 system_pods.go:89] "storage-provisioner" [76565dbe-57b0-4d39-abb0-ca6787cd3740] Running
	I1205 21:47:19.420570  357296 system_pods.go:126] duration metric: took 133.080361ms to wait for k8s-apps to be running ...
	I1205 21:47:19.420581  357296 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:47:19.420640  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:47:19.436855  357296 system_svc.go:56] duration metric: took 16.264247ms WaitForService to wait for kubelet
	I1205 21:47:19.436889  357296 kubeadm.go:582] duration metric: took 9.970523712s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:47:19.436913  357296 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:47:19.617690  357296 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:47:19.617724  357296 node_conditions.go:123] node cpu capacity is 2
	I1205 21:47:19.617737  357296 node_conditions.go:105] duration metric: took 180.817811ms to run NodePressure ...
	I1205 21:47:19.617753  357296 start.go:241] waiting for startup goroutines ...
	I1205 21:47:19.617763  357296 start.go:246] waiting for cluster config update ...
	I1205 21:47:19.617782  357296 start.go:255] writing updated cluster config ...
	I1205 21:47:19.618105  357296 ssh_runner.go:195] Run: rm -f paused
	I1205 21:47:19.670657  357296 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:47:19.672596  357296 out.go:177] * Done! kubectl is now configured to use "embed-certs-425614" cluster and "default" namespace by default
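With the embed-certs-425614 profile also reporting "Done!", its metrics-server pod is likewise still Pending in the listings above. Assuming the addon registers the standard metrics-server APIService name (the name itself is not shown in this log), the registration and pod state can be checked with:

	kubectl --context embed-certs-425614 get apiservice v1beta1.metrics.k8s.io
	kubectl --context embed-certs-425614 -n kube-system get pods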
	I1205 21:47:53.504952  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:47:53.505292  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:47:53.505331  358357 kubeadm.go:310] 
	I1205 21:47:53.505381  358357 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 21:47:53.505424  358357 kubeadm.go:310] 		timed out waiting for the condition
	I1205 21:47:53.505431  358357 kubeadm.go:310] 
	I1205 21:47:53.505493  358357 kubeadm.go:310] 	This error is likely caused by:
	I1205 21:47:53.505540  358357 kubeadm.go:310] 		- The kubelet is not running
	I1205 21:47:53.505687  358357 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 21:47:53.505696  358357 kubeadm.go:310] 
	I1205 21:47:53.505840  358357 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 21:47:53.505918  358357 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 21:47:53.505969  358357 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 21:47:53.505978  358357 kubeadm.go:310] 
	I1205 21:47:53.506113  358357 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 21:47:53.506224  358357 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 21:47:53.506234  358357 kubeadm.go:310] 
	I1205 21:47:53.506378  358357 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 21:47:53.506488  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 21:47:53.506579  358357 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 21:47:53.506669  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 21:47:53.506680  358357 kubeadm.go:310] 
	I1205 21:47:53.507133  358357 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:47:53.507293  358357 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 21:47:53.507399  358357 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1205 21:47:53.507583  358357 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1205 21:47:53.507635  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:47:58.918917  358357 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.411249531s)
	I1205 21:47:58.919047  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:47:58.933824  358357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:47:58.943937  358357 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:47:58.943961  358357 kubeadm.go:157] found existing configuration files:
	
	I1205 21:47:58.944019  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:47:58.953302  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:47:58.953376  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:47:58.963401  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:47:58.973241  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:47:58.973342  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:47:58.982980  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:47:58.992301  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:47:58.992376  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:47:59.002794  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:47:59.012679  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:47:59.012749  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:47:59.023775  358357 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:47:59.094520  358357 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 21:47:59.094668  358357 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:47:59.233248  358357 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:47:59.233420  358357 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:47:59.233569  358357 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 21:47:59.418344  358357 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:47:59.420333  358357 out.go:235]   - Generating certificates and keys ...
	I1205 21:47:59.420467  358357 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:47:59.420553  358357 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:47:59.422458  358357 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:47:59.422606  358357 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:47:59.422717  358357 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:47:59.422802  358357 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:47:59.422889  358357 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:47:59.422998  358357 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:47:59.423099  358357 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:47:59.423222  358357 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:47:59.423283  358357 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:47:59.423376  358357 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:47:59.599862  358357 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:47:59.763783  358357 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:47:59.854070  358357 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:48:00.213384  358357 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:48:00.228512  358357 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:48:00.229454  358357 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:48:00.229505  358357 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:48:00.369826  358357 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:48:00.371919  358357 out.go:235]   - Booting up control plane ...
	I1205 21:48:00.372059  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:48:00.382814  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:48:00.384284  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:48:00.385894  358357 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:48:00.388267  358357 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 21:48:40.389474  358357 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 21:48:40.389611  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:48:40.389883  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:48:45.390223  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:48:45.390529  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:48:55.390550  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:48:55.390784  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:49:15.391410  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:49:15.391608  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:49:55.392061  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:49:55.392321  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:49:55.392332  358357 kubeadm.go:310] 
	I1205 21:49:55.392403  358357 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 21:49:55.392464  358357 kubeadm.go:310] 		timed out waiting for the condition
	I1205 21:49:55.392485  358357 kubeadm.go:310] 
	I1205 21:49:55.392538  358357 kubeadm.go:310] 	This error is likely caused by:
	I1205 21:49:55.392587  358357 kubeadm.go:310] 		- The kubelet is not running
	I1205 21:49:55.392729  358357 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 21:49:55.392761  358357 kubeadm.go:310] 
	I1205 21:49:55.392882  358357 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 21:49:55.392933  358357 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 21:49:55.393025  358357 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 21:49:55.393057  358357 kubeadm.go:310] 
	I1205 21:49:55.393186  358357 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 21:49:55.393293  358357 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 21:49:55.393303  358357 kubeadm.go:310] 
	I1205 21:49:55.393453  358357 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 21:49:55.393602  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 21:49:55.393728  358357 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 21:49:55.393827  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 21:49:55.393841  358357 kubeadm.go:310] 
	I1205 21:49:55.394194  358357 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:49:55.394317  358357 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 21:49:55.394473  358357 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1205 21:49:55.394527  358357 kubeadm.go:394] duration metric: took 8m1.54013905s to StartCluster
	I1205 21:49:55.394598  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:49:55.394662  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:49:55.433172  358357 cri.go:89] found id: ""
	I1205 21:49:55.433203  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.433212  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:49:55.433219  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:49:55.433279  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:49:55.468595  358357 cri.go:89] found id: ""
	I1205 21:49:55.468631  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.468644  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:49:55.468652  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:49:55.468747  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:49:55.505657  358357 cri.go:89] found id: ""
	I1205 21:49:55.505692  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.505701  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:49:55.505709  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:49:55.505776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:49:55.542189  358357 cri.go:89] found id: ""
	I1205 21:49:55.542221  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.542230  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:49:55.542236  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:49:55.542303  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:49:55.575752  358357 cri.go:89] found id: ""
	I1205 21:49:55.575796  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.575810  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:49:55.575818  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:49:55.575878  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:49:55.611845  358357 cri.go:89] found id: ""
	I1205 21:49:55.611884  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.611899  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:49:55.611912  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:49:55.611999  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:49:55.650475  358357 cri.go:89] found id: ""
	I1205 21:49:55.650511  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.650524  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:49:55.650533  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:49:55.650605  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:49:55.684770  358357 cri.go:89] found id: ""
	I1205 21:49:55.684801  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.684811  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:49:55.684823  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:49:55.684839  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:49:55.752292  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:49:55.752331  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:49:55.752351  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:49:55.869601  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:49:55.869647  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:49:55.909724  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:49:55.909761  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:49:55.959825  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:49:55.959865  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 21:49:55.973692  358357 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1205 21:49:55.973759  358357 out.go:270] * 
	W1205 21:49:55.973866  358357 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 21:49:55.973884  358357 out.go:270] * 
	W1205 21:49:55.974814  358357 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 21:49:55.977939  358357 out.go:201] 
	W1205 21:49:55.979226  358357 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 21:49:55.979261  358357 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 21:49:55.979285  358357 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 21:49:55.980590  358357 out.go:201] 
	
	
	==> CRI-O <==
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.699078520Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435941699048542,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ca414303-4cc4-4984-aa14-d336ed9a0c88 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.699783854Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cc8f5226-93ad-4090-9569-828fea97ae1a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.699863043Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cc8f5226-93ad-4090-9569-828fea97ae1a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.699913834Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cc8f5226-93ad-4090-9569-828fea97ae1a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.735119909Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=61d10cb2-392c-4aa4-969c-ec7387461d95 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.735300923Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=61d10cb2-392c-4aa4-969c-ec7387461d95 name=/runtime.v1.RuntimeService/Version
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.736875063Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=69331e59-33b5-474d-95b3-fea86b309c87 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.737342401Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435941737317926,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=69331e59-33b5-474d-95b3-fea86b309c87 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.737998766Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3822079-85f5-4526-b71d-b85054e09932 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.738049417Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3822079-85f5-4526-b71d-b85054e09932 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.738083333Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=c3822079-85f5-4526-b71d-b85054e09932 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.774733786Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=26bd58aa-e1c2-4025-81a8-4d2edca8363d name=/runtime.v1.RuntimeService/Version
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.774860809Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=26bd58aa-e1c2-4025-81a8-4d2edca8363d name=/runtime.v1.RuntimeService/Version
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.776360910Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cd51606a-3d1f-4802-a860-b93b7ca78fa3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.776884344Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435941776859929,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cd51606a-3d1f-4802-a860-b93b7ca78fa3 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.777503081Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=07fe7583-cd3e-4508-af9d-8e03fdbbf3e9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.777593774Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=07fe7583-cd3e-4508-af9d-8e03fdbbf3e9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.777647654Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=07fe7583-cd3e-4508-af9d-8e03fdbbf3e9 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.811478899Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=722e1b42-ee5b-421c-9921-79e53674ff8d name=/runtime.v1.RuntimeService/Version
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.811563254Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=722e1b42-ee5b-421c-9921-79e53674ff8d name=/runtime.v1.RuntimeService/Version
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.813182042Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3fd9d788-348c-409f-9257-630d71cce353 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.813584131Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733435941813550755,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3fd9d788-348c-409f-9257-630d71cce353 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.814205815Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e4f2a64b-9d2f-43ac-99a1-223e7d774b4c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.814274197Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e4f2a64b-9d2f-43ac-99a1-223e7d774b4c name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 21:59:01 old-k8s-version-601806 crio[631]: time="2024-12-05 21:59:01.814307286Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e4f2a64b-9d2f-43ac-99a1-223e7d774b4c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 21:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049612] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037328] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.041940] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.017419] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.591176] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000028] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.089329] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.075166] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.084879] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.248458] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.177247] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.251172] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +6.361303] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.072375] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.856883] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[Dec 5 21:42] kauditd_printk_skb: 46 callbacks suppressed
	[Dec 5 21:45] systemd-fstab-generator[5030]: Ignoring "noauto" option for root device
	[Dec 5 21:48] systemd-fstab-generator[5323]: Ignoring "noauto" option for root device
	[  +0.068423] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 21:59:01 up 17 min,  0 users,  load average: 0.02, 0.04, 0.02
	Linux old-k8s-version-601806 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 05 21:59:01 old-k8s-version-601806 kubelet[6519]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bba470, 0xc000bdc260)
	Dec 05 21:59:01 old-k8s-version-601806 kubelet[6519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Dec 05 21:59:01 old-k8s-version-601806 kubelet[6519]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Dec 05 21:59:01 old-k8s-version-601806 kubelet[6519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Dec 05 21:59:01 old-k8s-version-601806 kubelet[6519]: goroutine 154 [chan receive]:
	Dec 05 21:59:01 old-k8s-version-601806 kubelet[6519]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(0xc0001020c0, 0xc000bda360)
	Dec 05 21:59:01 old-k8s-version-601806 kubelet[6519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:130 +0x34
	Dec 05 21:59:01 old-k8s-version-601806 kubelet[6519]: created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run
	Dec 05 21:59:01 old-k8s-version-601806 kubelet[6519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/controller.go:129 +0xa5
	Dec 05 21:59:01 old-k8s-version-601806 kubelet[6519]: goroutine 155 [select]:
	Dec 05 21:59:01 old-k8s-version-601806 kubelet[6519]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000b16ef0, 0x4f0ac20, 0xc000b0da90, 0x1, 0xc0001020c0)
	Dec 05 21:59:01 old-k8s-version-601806 kubelet[6519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Dec 05 21:59:01 old-k8s-version-601806 kubelet[6519]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc000255180, 0xc0001020c0)
	Dec 05 21:59:01 old-k8s-version-601806 kubelet[6519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Dec 05 21:59:01 old-k8s-version-601806 kubelet[6519]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Dec 05 21:59:01 old-k8s-version-601806 kubelet[6519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Dec 05 21:59:01 old-k8s-version-601806 kubelet[6519]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000bba4a0, 0xc000bdc320)
	Dec 05 21:59:01 old-k8s-version-601806 kubelet[6519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Dec 05 21:59:01 old-k8s-version-601806 kubelet[6519]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Dec 05 21:59:01 old-k8s-version-601806 kubelet[6519]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Dec 05 21:59:01 old-k8s-version-601806 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 05 21:59:01 old-k8s-version-601806 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 21:59:02 old-k8s-version-601806 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 115.
	Dec 05 21:59:02 old-k8s-version-601806 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 05 21:59:02 old-k8s-version-601806 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-601806 -n old-k8s-version-601806
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-601806 -n old-k8s-version-601806: exit status 2 (265.558996ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-601806" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.76s)
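
The kubeadm output in this failure repeatedly shows the kubelet never answering on http://localhost:10248/healthz, and the log's own suggestion is to inspect 'journalctl -xeu kubelet' and retry with --extra-config=kubelet.cgroup-driver=systemd. A minimal troubleshooting sketch along those lines (profile name and crictl invocation are taken from the log above; the exact start flags are an assumption and were not verified against this run):

    # Retry the profile with the cgroup driver override suggested in the log (assumed flag values)
    minikube start -p old-k8s-version-601806 --driver=kvm2 --container-runtime=crio \
      --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
    # Check kubelet health on the node, as the kubeadm message recommends
    minikube ssh -p old-k8s-version-601806 -- sudo systemctl status kubelet
    minikube ssh -p old-k8s-version-601806 -- sudo journalctl -xeu kubelet | tail -n 100
    # List any Kubernetes containers CRI-O managed to start (command copied from the kubeadm hint)
    minikube ssh -p old-k8s-version-601806 -- sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a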

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (474.99s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-751353 -n default-k8s-diff-port-751353
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-05 22:03:03.857983323 +0000 UTC m=+6225.832575997
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-751353 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-751353 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.483µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-751353 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
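The failure above is the test timing out while waiting for a pod labelled k8s-app=kubernetes-dashboard and then failing to describe the dashboard-metrics-scraper deployment before its context deadline. A manual check of the same objects (the describe call mirrors the one the test ran; the get-pods call is an assumed way to inspect the same namespace and label selector):

    # Inspect the pods the test was waiting for (same namespace and label selector as the test)
    kubectl --context default-k8s-diff-port-751353 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
    # Re-run the describe call that hit the context deadline during the test
    kubectl --context default-k8s-diff-port-751353 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard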
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-751353 -n default-k8s-diff-port-751353
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-751353 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-751353 logs -n 25: (1.286942488s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:34 UTC |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-425614            | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-425614                                  | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-500648             | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC | 05 Dec 24 21:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-500648                                   | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-751353  | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC | 05 Dec 24 21:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC |                     |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-425614                 | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-425614                                  | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC | 05 Dec 24 21:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-601806        | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-500648                  | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-751353       | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-500648                                   | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC | 05 Dec 24 21:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:37 UTC | 05 Dec 24 21:46 UTC |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-601806                              | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC | 05 Dec 24 21:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-601806             | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC | 05 Dec 24 21:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-601806                              | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-601806                              | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 22:01 UTC | 05 Dec 24 22:01 UTC |
	| start   | -p newest-cni-185514 --memory=2200 --alsologtostderr   | newest-cni-185514            | jenkins | v1.34.0 | 05 Dec 24 22:01 UTC | 05 Dec 24 22:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-425614                                  | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 22:02 UTC | 05 Dec 24 22:02 UTC |
	| addons  | enable metrics-server -p newest-cni-185514             | newest-cni-185514            | jenkins | v1.34.0 | 05 Dec 24 22:02 UTC | 05 Dec 24 22:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-185514                                   | newest-cni-185514            | jenkins | v1.34.0 | 05 Dec 24 22:02 UTC | 05 Dec 24 22:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-185514                  | newest-cni-185514            | jenkins | v1.34.0 | 05 Dec 24 22:02 UTC | 05 Dec 24 22:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-185514 --memory=2200 --alsologtostderr   | newest-cni-185514            | jenkins | v1.34.0 | 05 Dec 24 22:02 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-500648                                   | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 22:02 UTC | 05 Dec 24 22:02 UTC |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 22:02:41
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 22:02:41.566747  365282 out.go:345] Setting OutFile to fd 1 ...
	I1205 22:02:41.566876  365282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 22:02:41.566886  365282 out.go:358] Setting ErrFile to fd 2...
	I1205 22:02:41.566890  365282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 22:02:41.567062  365282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 22:02:41.567626  365282 out.go:352] Setting JSON to false
	I1205 22:02:41.568627  365282 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":17110,"bootTime":1733419052,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 22:02:41.568760  365282 start.go:139] virtualization: kvm guest
	I1205 22:02:41.571051  365282 out.go:177] * [newest-cni-185514] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 22:02:41.572514  365282 notify.go:220] Checking for updates...
	I1205 22:02:41.572546  365282 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 22:02:41.574187  365282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 22:02:41.575510  365282 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 22:02:41.576795  365282 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 22:02:41.578099  365282 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 22:02:41.579376  365282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 22:02:41.581092  365282 config.go:182] Loaded profile config "newest-cni-185514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 22:02:41.581567  365282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 22:02:41.581646  365282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 22:02:41.597932  365282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39407
	I1205 22:02:41.598598  365282 main.go:141] libmachine: () Calling .GetVersion
	I1205 22:02:41.599266  365282 main.go:141] libmachine: Using API Version  1
	I1205 22:02:41.599291  365282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 22:02:41.599682  365282 main.go:141] libmachine: () Calling .GetMachineName
	I1205 22:02:41.599887  365282 main.go:141] libmachine: (newest-cni-185514) Calling .DriverName
	I1205 22:02:41.600198  365282 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 22:02:41.600657  365282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 22:02:41.600717  365282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 22:02:41.616423  365282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45157
	I1205 22:02:41.616897  365282 main.go:141] libmachine: () Calling .GetVersion
	I1205 22:02:41.617462  365282 main.go:141] libmachine: Using API Version  1
	I1205 22:02:41.617492  365282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 22:02:41.617917  365282 main.go:141] libmachine: () Calling .GetMachineName
	I1205 22:02:41.618159  365282 main.go:141] libmachine: (newest-cni-185514) Calling .DriverName
	I1205 22:02:41.659009  365282 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 22:02:41.660363  365282 start.go:297] selected driver: kvm2
	I1205 22:02:41.660387  365282 start.go:901] validating driver "kvm2" against &{Name:newest-cni-185514 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-185514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.210 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 22:02:41.660527  365282 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 22:02:41.661424  365282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 22:02:41.661508  365282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 22:02:41.678498  365282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 22:02:41.679052  365282 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 22:02:41.679097  365282 cni.go:84] Creating CNI manager for ""
	I1205 22:02:41.679156  365282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 22:02:41.679208  365282 start.go:340] cluster config:
	{Name:newest-cni-185514 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-185514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.210 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 22:02:41.679359  365282 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 22:02:41.681324  365282 out.go:177] * Starting "newest-cni-185514" primary control-plane node in "newest-cni-185514" cluster
	I1205 22:02:41.682593  365282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 22:02:41.682657  365282 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 22:02:41.682675  365282 cache.go:56] Caching tarball of preloaded images
	I1205 22:02:41.682791  365282 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 22:02:41.682802  365282 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 22:02:41.682927  365282 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/newest-cni-185514/config.json ...
	I1205 22:02:41.683199  365282 start.go:360] acquireMachinesLock for newest-cni-185514: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 22:02:41.683281  365282 start.go:364] duration metric: took 40.843µs to acquireMachinesLock for "newest-cni-185514"
	I1205 22:02:41.683300  365282 start.go:96] Skipping create...Using existing machine configuration
	I1205 22:02:41.683305  365282 fix.go:54] fixHost starting: 
	I1205 22:02:41.683605  365282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 22:02:41.683648  365282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 22:02:41.699214  365282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43155
	I1205 22:02:41.699778  365282 main.go:141] libmachine: () Calling .GetVersion
	I1205 22:02:41.700345  365282 main.go:141] libmachine: Using API Version  1
	I1205 22:02:41.700372  365282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 22:02:41.700748  365282 main.go:141] libmachine: () Calling .GetMachineName
	I1205 22:02:41.700962  365282 main.go:141] libmachine: (newest-cni-185514) Calling .DriverName
	I1205 22:02:41.701145  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetState
	I1205 22:02:41.703048  365282 fix.go:112] recreateIfNeeded on newest-cni-185514: state=Stopped err=<nil>
	I1205 22:02:41.703100  365282 main.go:141] libmachine: (newest-cni-185514) Calling .DriverName
	W1205 22:02:41.703322  365282 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 22:02:41.705169  365282 out.go:177] * Restarting existing kvm2 VM for "newest-cni-185514" ...
	I1205 22:02:41.706352  365282 main.go:141] libmachine: (newest-cni-185514) Calling .Start
	I1205 22:02:41.706593  365282 main.go:141] libmachine: (newest-cni-185514) Ensuring networks are active...
	I1205 22:02:41.707601  365282 main.go:141] libmachine: (newest-cni-185514) Ensuring network default is active
	I1205 22:02:41.708024  365282 main.go:141] libmachine: (newest-cni-185514) Ensuring network mk-newest-cni-185514 is active
	I1205 22:02:41.708481  365282 main.go:141] libmachine: (newest-cni-185514) Getting domain xml...
	I1205 22:02:41.709328  365282 main.go:141] libmachine: (newest-cni-185514) Creating domain...
	I1205 22:02:42.998023  365282 main.go:141] libmachine: (newest-cni-185514) Waiting to get IP...
	I1205 22:02:42.998891  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:42.999294  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:42.999411  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:42.999275  365323 retry.go:31] will retry after 260.11984ms: waiting for machine to come up
	I1205 22:02:43.260945  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:43.261509  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:43.261550  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:43.261453  365323 retry.go:31] will retry after 310.809568ms: waiting for machine to come up
	I1205 22:02:43.574214  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:43.574789  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:43.574820  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:43.574730  365323 retry.go:31] will retry after 363.850051ms: waiting for machine to come up
	I1205 22:02:43.940354  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:43.940906  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:43.940930  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:43.940845  365323 retry.go:31] will retry after 474.321777ms: waiting for machine to come up
	I1205 22:02:44.416353  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:44.416890  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:44.416924  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:44.416841  365323 retry.go:31] will retry after 529.8788ms: waiting for machine to come up
	I1205 22:02:44.948310  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:44.948835  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:44.948865  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:44.948778  365323 retry.go:31] will retry after 666.109954ms: waiting for machine to come up
	I1205 22:02:45.616162  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:45.616649  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:45.616679  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:45.616604  365323 retry.go:31] will retry after 906.29229ms: waiting for machine to come up
	I1205 22:02:46.524699  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:46.525141  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:46.525172  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:46.525079  365323 retry.go:31] will retry after 1.189512655s: waiting for machine to come up
	I1205 22:02:47.716509  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:47.717051  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:47.717099  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:47.717005  365323 retry.go:31] will retry after 1.446137981s: waiting for machine to come up
	I1205 22:02:49.165687  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:49.166281  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:49.166315  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:49.166222  365323 retry.go:31] will retry after 1.483394504s: waiting for machine to come up
	I1205 22:02:50.652111  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:50.652694  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:50.652724  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:50.652646  365323 retry.go:31] will retry after 1.970602566s: waiting for machine to come up
	I1205 22:02:52.625412  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:52.625940  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:52.625963  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:52.625877  365323 retry.go:31] will retry after 2.967675719s: waiting for machine to come up
	I1205 22:02:55.595092  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:55.595557  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:55.595588  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:55.595490  365323 retry.go:31] will retry after 3.557884632s: waiting for machine to come up
	I1205 22:02:59.155130  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:59.155789  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has current primary IP address 192.168.61.210 and MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:59.155832  365282 main.go:141] libmachine: (newest-cni-185514) Found IP for machine: 192.168.61.210
	I1205 22:02:59.155846  365282 main.go:141] libmachine: (newest-cni-185514) Reserving static IP address...
	I1205 22:02:59.156357  365282 main.go:141] libmachine: (newest-cni-185514) DBG | found host DHCP lease matching {name: "newest-cni-185514", mac: "52:54:00:01:ae:fb", ip: "192.168.61.210"} in network mk-newest-cni-185514: {Iface:virbr3 ExpiryTime:2024-12-05 23:02:52 +0000 UTC Type:0 Mac:52:54:00:01:ae:fb Iaid: IPaddr:192.168.61.210 Prefix:24 Hostname:newest-cni-185514 Clientid:01:52:54:00:01:ae:fb}
	I1205 22:02:59.156382  365282 main.go:141] libmachine: (newest-cni-185514) DBG | skip adding static IP to network mk-newest-cni-185514 - found existing host DHCP lease matching {name: "newest-cni-185514", mac: "52:54:00:01:ae:fb", ip: "192.168.61.210"}
	I1205 22:02:59.156392  365282 main.go:141] libmachine: (newest-cni-185514) Reserved static IP address: 192.168.61.210
	I1205 22:02:59.156403  365282 main.go:141] libmachine: (newest-cni-185514) Waiting for SSH to be available...
	I1205 22:02:59.156416  365282 main.go:141] libmachine: (newest-cni-185514) DBG | Getting to WaitForSSH function...
	I1205 22:02:59.158735  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:59.159063  365282 main.go:141] libmachine: (newest-cni-185514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ae:fb", ip: ""} in network mk-newest-cni-185514: {Iface:virbr3 ExpiryTime:2024-12-05 23:02:52 +0000 UTC Type:0 Mac:52:54:00:01:ae:fb Iaid: IPaddr:192.168.61.210 Prefix:24 Hostname:newest-cni-185514 Clientid:01:52:54:00:01:ae:fb}
	I1205 22:02:59.159107  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined IP address 192.168.61.210 and MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:59.159223  365282 main.go:141] libmachine: (newest-cni-185514) DBG | Using SSH client type: external
	I1205 22:02:59.159253  365282 main.go:141] libmachine: (newest-cni-185514) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/newest-cni-185514/id_rsa (-rw-------)
	I1205 22:02:59.159301  365282 main.go:141] libmachine: (newest-cni-185514) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.210 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/newest-cni-185514/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 22:02:59.159320  365282 main.go:141] libmachine: (newest-cni-185514) DBG | About to run SSH command:
	I1205 22:02:59.159333  365282 main.go:141] libmachine: (newest-cni-185514) DBG | exit 0
	I1205 22:02:59.285840  365282 main.go:141] libmachine: (newest-cni-185514) DBG | SSH cmd err, output: <nil>: 
	I1205 22:02:59.286229  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetConfigRaw
	I1205 22:02:59.286924  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetIP
	I1205 22:02:59.289505  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:59.289883  365282 main.go:141] libmachine: (newest-cni-185514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ae:fb", ip: ""} in network mk-newest-cni-185514: {Iface:virbr3 ExpiryTime:2024-12-05 23:02:52 +0000 UTC Type:0 Mac:52:54:00:01:ae:fb Iaid: IPaddr:192.168.61.210 Prefix:24 Hostname:newest-cni-185514 Clientid:01:52:54:00:01:ae:fb}
	I1205 22:02:59.289942  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined IP address 192.168.61.210 and MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:59.290184  365282 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/newest-cni-185514/config.json ...
	I1205 22:02:59.290420  365282 machine.go:93] provisionDockerMachine start ...
	I1205 22:02:59.290440  365282 main.go:141] libmachine: (newest-cni-185514) Calling .DriverName
	I1205 22:02:59.290676  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHHostname
	I1205 22:02:59.293167  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:59.293443  365282 main.go:141] libmachine: (newest-cni-185514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ae:fb", ip: ""} in network mk-newest-cni-185514: {Iface:virbr3 ExpiryTime:2024-12-05 23:02:52 +0000 UTC Type:0 Mac:52:54:00:01:ae:fb Iaid: IPaddr:192.168.61.210 Prefix:24 Hostname:newest-cni-185514 Clientid:01:52:54:00:01:ae:fb}
	I1205 22:02:59.293471  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined IP address 192.168.61.210 and MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:59.293629  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHPort
	I1205 22:02:59.293814  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHKeyPath
	I1205 22:02:59.293990  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHKeyPath
	I1205 22:02:59.294138  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHUsername
	I1205 22:02:59.294299  365282 main.go:141] libmachine: Using SSH client type: native
	I1205 22:02:59.294559  365282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.210 22 <nil> <nil>}
	I1205 22:02:59.294573  365282 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 22:02:59.406436  365282 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 22:02:59.406474  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetMachineName
	I1205 22:02:59.406838  365282 buildroot.go:166] provisioning hostname "newest-cni-185514"
	I1205 22:02:59.406902  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetMachineName
	I1205 22:02:59.407134  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHHostname
	I1205 22:02:59.409916  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:59.410267  365282 main.go:141] libmachine: (newest-cni-185514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ae:fb", ip: ""} in network mk-newest-cni-185514: {Iface:virbr3 ExpiryTime:2024-12-05 23:02:52 +0000 UTC Type:0 Mac:52:54:00:01:ae:fb Iaid: IPaddr:192.168.61.210 Prefix:24 Hostname:newest-cni-185514 Clientid:01:52:54:00:01:ae:fb}
	I1205 22:02:59.410297  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined IP address 192.168.61.210 and MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:59.410522  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHPort
	I1205 22:02:59.410762  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHKeyPath
	I1205 22:02:59.410960  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHKeyPath
	I1205 22:02:59.411114  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHUsername
	I1205 22:02:59.411290  365282 main.go:141] libmachine: Using SSH client type: native
	I1205 22:02:59.411503  365282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.210 22 <nil> <nil>}
	I1205 22:02:59.411530  365282 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-185514 && echo "newest-cni-185514" | sudo tee /etc/hostname
	I1205 22:02:59.537716  365282 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-185514
	
	I1205 22:02:59.537752  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHHostname
	I1205 22:02:59.540601  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:59.540932  365282 main.go:141] libmachine: (newest-cni-185514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ae:fb", ip: ""} in network mk-newest-cni-185514: {Iface:virbr3 ExpiryTime:2024-12-05 23:02:52 +0000 UTC Type:0 Mac:52:54:00:01:ae:fb Iaid: IPaddr:192.168.61.210 Prefix:24 Hostname:newest-cni-185514 Clientid:01:52:54:00:01:ae:fb}
	I1205 22:02:59.540965  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined IP address 192.168.61.210 and MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:59.541216  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHPort
	I1205 22:02:59.541435  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHKeyPath
	I1205 22:02:59.541643  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHKeyPath
	I1205 22:02:59.541800  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHUsername
	I1205 22:02:59.542058  365282 main.go:141] libmachine: Using SSH client type: native
	I1205 22:02:59.542256  365282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.210 22 <nil> <nil>}
	I1205 22:02:59.542281  365282 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-185514' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-185514/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-185514' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 22:02:59.662779  365282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 22:02:59.662816  365282 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 22:02:59.662850  365282 buildroot.go:174] setting up certificates
	I1205 22:02:59.662862  365282 provision.go:84] configureAuth start
	I1205 22:02:59.662874  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetMachineName
	I1205 22:02:59.663284  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetIP
	I1205 22:02:59.666273  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:59.666706  365282 main.go:141] libmachine: (newest-cni-185514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ae:fb", ip: ""} in network mk-newest-cni-185514: {Iface:virbr3 ExpiryTime:2024-12-05 23:02:52 +0000 UTC Type:0 Mac:52:54:00:01:ae:fb Iaid: IPaddr:192.168.61.210 Prefix:24 Hostname:newest-cni-185514 Clientid:01:52:54:00:01:ae:fb}
	I1205 22:02:59.666735  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined IP address 192.168.61.210 and MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:59.666976  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHHostname
	I1205 22:02:59.669367  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:59.669804  365282 main.go:141] libmachine: (newest-cni-185514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ae:fb", ip: ""} in network mk-newest-cni-185514: {Iface:virbr3 ExpiryTime:2024-12-05 23:02:52 +0000 UTC Type:0 Mac:52:54:00:01:ae:fb Iaid: IPaddr:192.168.61.210 Prefix:24 Hostname:newest-cni-185514 Clientid:01:52:54:00:01:ae:fb}
	I1205 22:02:59.669846  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined IP address 192.168.61.210 and MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:59.670092  365282 provision.go:143] copyHostCerts
	I1205 22:02:59.670167  365282 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 22:02:59.670194  365282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 22:02:59.670279  365282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 22:02:59.670433  365282 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 22:02:59.670447  365282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 22:02:59.670490  365282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 22:02:59.670577  365282 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 22:02:59.670587  365282 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 22:02:59.670621  365282 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 22:02:59.670689  365282 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.newest-cni-185514 san=[127.0.0.1 192.168.61.210 localhost minikube newest-cni-185514]
	I1205 22:02:59.737506  365282 provision.go:177] copyRemoteCerts
	I1205 22:02:59.737581  365282 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 22:02:59.737619  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHHostname
	I1205 22:02:59.740446  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:59.740836  365282 main.go:141] libmachine: (newest-cni-185514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ae:fb", ip: ""} in network mk-newest-cni-185514: {Iface:virbr3 ExpiryTime:2024-12-05 23:02:52 +0000 UTC Type:0 Mac:52:54:00:01:ae:fb Iaid: IPaddr:192.168.61.210 Prefix:24 Hostname:newest-cni-185514 Clientid:01:52:54:00:01:ae:fb}
	I1205 22:02:59.740879  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined IP address 192.168.61.210 and MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:59.741059  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHPort
	I1205 22:02:59.741295  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHKeyPath
	I1205 22:02:59.741437  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHUsername
	I1205 22:02:59.741640  365282 sshutil.go:53] new ssh client: &{IP:192.168.61.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/newest-cni-185514/id_rsa Username:docker}
	I1205 22:02:59.828515  365282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 22:02:59.852878  365282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 22:02:59.877940  365282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 22:02:59.907740  365282 provision.go:87] duration metric: took 244.860052ms to configureAuth
	I1205 22:02:59.907778  365282 buildroot.go:189] setting minikube options for container-runtime
	I1205 22:02:59.908029  365282 config.go:182] Loaded profile config "newest-cni-185514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 22:02:59.908135  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHHostname
	I1205 22:02:59.911176  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:59.911569  365282 main.go:141] libmachine: (newest-cni-185514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ae:fb", ip: ""} in network mk-newest-cni-185514: {Iface:virbr3 ExpiryTime:2024-12-05 23:02:52 +0000 UTC Type:0 Mac:52:54:00:01:ae:fb Iaid: IPaddr:192.168.61.210 Prefix:24 Hostname:newest-cni-185514 Clientid:01:52:54:00:01:ae:fb}
	I1205 22:02:59.911597  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined IP address 192.168.61.210 and MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:59.911872  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHPort
	I1205 22:02:59.912111  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHKeyPath
	I1205 22:02:59.912298  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHKeyPath
	I1205 22:02:59.912427  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHUsername
	I1205 22:02:59.912559  365282 main.go:141] libmachine: Using SSH client type: native
	I1205 22:02:59.912753  365282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.210 22 <nil> <nil>}
	I1205 22:02:59.912774  365282 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 22:03:00.145229  365282 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 22:03:00.145272  365282 machine.go:96] duration metric: took 854.83652ms to provisionDockerMachine
	I1205 22:03:00.145290  365282 start.go:293] postStartSetup for "newest-cni-185514" (driver="kvm2")
	I1205 22:03:00.145304  365282 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 22:03:00.145334  365282 main.go:141] libmachine: (newest-cni-185514) Calling .DriverName
	I1205 22:03:00.145708  365282 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 22:03:00.145741  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHHostname
	I1205 22:03:00.148861  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:03:00.149319  365282 main.go:141] libmachine: (newest-cni-185514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ae:fb", ip: ""} in network mk-newest-cni-185514: {Iface:virbr3 ExpiryTime:2024-12-05 23:02:52 +0000 UTC Type:0 Mac:52:54:00:01:ae:fb Iaid: IPaddr:192.168.61.210 Prefix:24 Hostname:newest-cni-185514 Clientid:01:52:54:00:01:ae:fb}
	I1205 22:03:00.149357  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined IP address 192.168.61.210 and MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:03:00.149519  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHPort
	I1205 22:03:00.149750  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHKeyPath
	I1205 22:03:00.149923  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHUsername
	I1205 22:03:00.150058  365282 sshutil.go:53] new ssh client: &{IP:192.168.61.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/newest-cni-185514/id_rsa Username:docker}
	I1205 22:03:00.236900  365282 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 22:03:00.241324  365282 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 22:03:00.241363  365282 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 22:03:00.241434  365282 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 22:03:00.241510  365282 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 22:03:00.241606  365282 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 22:03:00.251498  365282 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 22:03:00.276943  365282 start.go:296] duration metric: took 131.632511ms for postStartSetup
	I1205 22:03:00.277005  365282 fix.go:56] duration metric: took 18.593697232s for fixHost
	I1205 22:03:00.277039  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHHostname
	I1205 22:03:00.280134  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:03:00.280454  365282 main.go:141] libmachine: (newest-cni-185514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ae:fb", ip: ""} in network mk-newest-cni-185514: {Iface:virbr3 ExpiryTime:2024-12-05 23:02:52 +0000 UTC Type:0 Mac:52:54:00:01:ae:fb Iaid: IPaddr:192.168.61.210 Prefix:24 Hostname:newest-cni-185514 Clientid:01:52:54:00:01:ae:fb}
	I1205 22:03:00.280489  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined IP address 192.168.61.210 and MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:03:00.280649  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHPort
	I1205 22:03:00.280884  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHKeyPath
	I1205 22:03:00.281054  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHKeyPath
	I1205 22:03:00.281245  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHUsername
	I1205 22:03:00.281473  365282 main.go:141] libmachine: Using SSH client type: native
	I1205 22:03:00.281716  365282 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.210 22 <nil> <nil>}
	I1205 22:03:00.281734  365282 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 22:03:00.394884  365282 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733436180.368126525
	
	I1205 22:03:00.394923  365282 fix.go:216] guest clock: 1733436180.368126525
	I1205 22:03:00.394934  365282 fix.go:229] Guest: 2024-12-05 22:03:00.368126525 +0000 UTC Remote: 2024-12-05 22:03:00.277011745 +0000 UTC m=+18.754769564 (delta=91.11478ms)
	I1205 22:03:00.394976  365282 fix.go:200] guest clock delta is within tolerance: 91.11478ms
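	fix.go accepts the restarted host because the guest clock read over SSH (`date +%s.%N`) is only ~91ms away from the local clock. The sketch below shows one way to parse that output and apply a skew tolerance in Go; the 2-second tolerance and the helper name are assumptions made for illustration, not values taken from fix.go.

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses `date +%s.%N` output and returns the absolute
	// skew against the local clock. Hypothetical helper for illustration.
	func guestClockDelta(dateOutput string) (time.Duration, error) {
		parts := strings.SplitN(strings.TrimSpace(dateOutput), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return 0, fmt.Errorf("parsing guest clock %q: %w", dateOutput, err)
		}
		var nsec int64
		if len(parts) == 2 {
			if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
				return 0, fmt.Errorf("parsing guest clock %q: %w", dateOutput, err)
			}
		}
		delta := time.Since(time.Unix(sec, nsec))
		if delta < 0 {
			delta = -delta
		}
		return delta, nil
	}

	func main() {
		// Output captured from the SSH command in the log above.
		delta, err := guestClockDelta("1733436180.368126525")
		if err != nil {
			fmt.Println(err)
			return
		}
		const tolerance = 2 * time.Second // assumed threshold, not fix.go's actual value
		fmt.Printf("guest clock delta %v, within tolerance: %v\n", delta, delta <= tolerance)
	}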
	I1205 22:03:00.394985  365282 start.go:83] releasing machines lock for "newest-cni-185514", held for 18.71169147s
	I1205 22:03:00.395016  365282 main.go:141] libmachine: (newest-cni-185514) Calling .DriverName
	I1205 22:03:00.395332  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetIP
	I1205 22:03:00.398345  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:03:00.398721  365282 main.go:141] libmachine: (newest-cni-185514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ae:fb", ip: ""} in network mk-newest-cni-185514: {Iface:virbr3 ExpiryTime:2024-12-05 23:02:52 +0000 UTC Type:0 Mac:52:54:00:01:ae:fb Iaid: IPaddr:192.168.61.210 Prefix:24 Hostname:newest-cni-185514 Clientid:01:52:54:00:01:ae:fb}
	I1205 22:03:00.398757  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined IP address 192.168.61.210 and MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:03:00.398892  365282 main.go:141] libmachine: (newest-cni-185514) Calling .DriverName
	I1205 22:03:00.399462  365282 main.go:141] libmachine: (newest-cni-185514) Calling .DriverName
	I1205 22:03:00.399699  365282 main.go:141] libmachine: (newest-cni-185514) Calling .DriverName
	I1205 22:03:00.399803  365282 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 22:03:00.399872  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHHostname
	I1205 22:03:00.399947  365282 ssh_runner.go:195] Run: cat /version.json
	I1205 22:03:00.399977  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHHostname
	I1205 22:03:00.402667  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:03:00.402876  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:03:00.403091  365282 main.go:141] libmachine: (newest-cni-185514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ae:fb", ip: ""} in network mk-newest-cni-185514: {Iface:virbr3 ExpiryTime:2024-12-05 23:02:52 +0000 UTC Type:0 Mac:52:54:00:01:ae:fb Iaid: IPaddr:192.168.61.210 Prefix:24 Hostname:newest-cni-185514 Clientid:01:52:54:00:01:ae:fb}
	I1205 22:03:00.403117  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined IP address 192.168.61.210 and MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:03:00.403233  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHPort
	I1205 22:03:00.403381  365282 main.go:141] libmachine: (newest-cni-185514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ae:fb", ip: ""} in network mk-newest-cni-185514: {Iface:virbr3 ExpiryTime:2024-12-05 23:02:52 +0000 UTC Type:0 Mac:52:54:00:01:ae:fb Iaid: IPaddr:192.168.61.210 Prefix:24 Hostname:newest-cni-185514 Clientid:01:52:54:00:01:ae:fb}
	I1205 22:03:00.403405  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined IP address 192.168.61.210 and MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:03:00.403412  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHKeyPath
	I1205 22:03:00.403569  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHPort
	I1205 22:03:00.403573  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHUsername
	I1205 22:03:00.403755  365282 sshutil.go:53] new ssh client: &{IP:192.168.61.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/newest-cni-185514/id_rsa Username:docker}
	I1205 22:03:00.403771  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHKeyPath
	I1205 22:03:00.403951  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetSSHUsername
	I1205 22:03:00.404091  365282 sshutil.go:53] new ssh client: &{IP:192.168.61.210 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/newest-cni-185514/id_rsa Username:docker}
	I1205 22:03:00.504622  365282 ssh_runner.go:195] Run: systemctl --version
	I1205 22:03:00.510813  365282 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 22:03:00.651600  365282 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 22:03:00.657380  365282 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 22:03:00.657447  365282 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 22:03:00.672937  365282 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 22:03:00.672970  365282 start.go:495] detecting cgroup driver to use...
	I1205 22:03:00.673038  365282 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 22:03:00.691220  365282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 22:03:00.705497  365282 docker.go:217] disabling cri-docker service (if available) ...
	I1205 22:03:00.705592  365282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 22:03:00.722091  365282 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 22:03:00.736936  365282 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 22:03:00.844130  365282 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 22:03:00.988807  365282 docker.go:233] disabling docker service ...
	I1205 22:03:00.988907  365282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 22:03:01.002961  365282 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 22:03:01.016246  365282 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 22:03:01.143836  365282 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 22:03:01.271294  365282 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 22:03:01.289044  365282 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 22:03:01.307960  365282 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 22:03:01.308043  365282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 22:03:01.318729  365282 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 22:03:01.318816  365282 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 22:03:01.329552  365282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 22:03:01.340374  365282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 22:03:01.351245  365282 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 22:03:01.362439  365282 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 22:03:01.373394  365282 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 22:03:01.391441  365282 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
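	The sed commands above pin the pause image and switch CRI-O to the cgroupfs cgroup manager by editing /etc/crio/crio.conf.d/02-crio.conf in place. Below is a minimal Go equivalent of those two rewrites, shown only as an illustration of the edit being performed; the test run itself does this with sed over SSH.

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// rewriteCrioConf mirrors the two sed edits above: it pins pause_image and
	// cgroup_manager in a CRI-O drop-in config. Illustrative sketch only.
	func rewriteCrioConf(path, pauseImage, cgroupManager string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(fmt.Sprintf(`pause_image = %q`, pauseImage)))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(fmt.Sprintf(`cgroup_manager = %q`, cgroupManager)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		if err := rewriteCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
			"registry.k8s.io/pause:3.10", "cgroupfs"); err != nil {
			fmt.Println("rewrite failed:", err)
		}
	}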
	I1205 22:03:01.401811  365282 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 22:03:01.411592  365282 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 22:03:01.411684  365282 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 22:03:01.425420  365282 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 22:03:01.435839  365282 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 22:03:01.556602  365282 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 22:03:01.652035  365282 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 22:03:01.652155  365282 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 22:03:01.657255  365282 start.go:563] Will wait 60s for crictl version
	I1205 22:03:01.657333  365282 ssh_runner.go:195] Run: which crictl
	I1205 22:03:01.661606  365282 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 22:03:01.702984  365282 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 22:03:01.703083  365282 ssh_runner.go:195] Run: crio --version
	I1205 22:03:01.731310  365282 ssh_runner.go:195] Run: crio --version
	I1205 22:03:01.761847  365282 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 22:03:01.763310  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetIP
	I1205 22:03:01.766349  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:03:01.766723  365282 main.go:141] libmachine: (newest-cni-185514) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:01:ae:fb", ip: ""} in network mk-newest-cni-185514: {Iface:virbr3 ExpiryTime:2024-12-05 23:02:52 +0000 UTC Type:0 Mac:52:54:00:01:ae:fb Iaid: IPaddr:192.168.61.210 Prefix:24 Hostname:newest-cni-185514 Clientid:01:52:54:00:01:ae:fb}
	I1205 22:03:01.766757  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined IP address 192.168.61.210 and MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:03:01.766942  365282 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1205 22:03:01.771574  365282 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 22:03:01.785651  365282 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
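	The last step before this log excerpt ends rewrites /etc/hosts so host.minikube.internal resolves to the gateway IP 192.168.61.1. The Go sketch below is an idempotent version of that edit, assuming write access to the hosts file; it is an illustration, not the bash pipeline minikube actually runs.

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any stale line ending in the given hostname and
	// appends a fresh "IP<TAB>name" entry. Hypothetical helper for illustration.
	func ensureHostsEntry(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if strings.HasSuffix(strings.TrimSpace(line), name) {
				continue // drop the stale entry for this hostname
			}
			kept = append(kept, line)
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		if err := ensureHostsEntry("/etc/hosts", "192.168.61.1", "host.minikube.internal"); err != nil {
			fmt.Println("hosts update failed:", err)
		}
	}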
	
	
	==> CRI-O <==
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.511841276Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436184511817885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=99b6f0aa-be67-45b9-a9b6-657ec1ced102 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.512546054Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d7c440b-605f-4652-956b-8d8d087a6c28 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.512601282Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d7c440b-605f-4652-956b-8d8d087a6c28 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.512823145Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e62bb2ea85199b40c1d637b1ed55f60113cf19b84b544a3e975dc2e04534f05,PodSandboxId:9d003a914dd5ba8e6709447a7ccdaaf70b846524f6e258c6e0da7e7d53ece3d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733434908604786336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7f734192-b575-49f2-8488-2e08e14d83e5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6,PodSandboxId:56e5a64605dbe821b5fbc7f5e704b2c25b5b0e11eca7fd6b0c83c6d8e098b94e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733434906241921791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mll8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcea0826-1093-43ce-87d0-26fb19447609,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4,PodSandboxId:8b1373a23f8337dee45a5b2207d04ce77cf26eb15c4b105698c92af8cb947d96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733434899182159017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: aabf9cc9-c416-4db2-97b0-23533dd76c28,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa,PodSandboxId:8b1373a23f8337dee45a5b2207d04ce77cf26eb15c4b105698c92af8cb947d96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733434898578875848,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: aabf9cc9-c416-4db2-97b0-23533dd76c28,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d,PodSandboxId:20ab7bb2040edb1d011d37784aea1661af162cbffe7317c581160c1ad1a07bf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733434898496525033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4ws4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2620959-e3e4-4575-af26
-243207a83495,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7,PodSandboxId:bb297f7199c472d8bf106e49137c92af9aed17c24f0a5e8bd46734144e2f9a10,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733434894007594885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a781f16b4aef7bf5ac0b18a81d3fe56,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5,PodSandboxId:a1686895467fd0475c3f9bbc904ee56c4014382b540049631331b334ac3a4b22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733434894002322433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb1306bd7c52f126431147d34dc0a3b9,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828,PodSandboxId:e119343ecc82aa38d4b5ded6ae3d75aafe40c2bf2179394792f6e97254caebad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733434893996921176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbfce6421a68ed116afc3485728da556,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a,PodSandboxId:09aebd00aa0803b6848384a4ec3e4cf3726e41ada8ab1a226bc538cb9c4bd0c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733434893992267903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea0710b5375ef6778cfbcb0941880
cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d7c440b-605f-4652-956b-8d8d087a6c28 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.558735495Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=dd3df4f0-6ccd-47b4-ba98-eb9cccad44c2 name=/runtime.v1.RuntimeService/Version
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.558864239Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=dd3df4f0-6ccd-47b4-ba98-eb9cccad44c2 name=/runtime.v1.RuntimeService/Version
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.560209906Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c0d7b76d-e481-4017-84fd-8ad00d2a4245 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.560596360Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436184560573880,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c0d7b76d-e481-4017-84fd-8ad00d2a4245 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.561241985Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4e467978-d832-48c6-a389-12da2b6062ea name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.561300242Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4e467978-d832-48c6-a389-12da2b6062ea name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.561489678Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e62bb2ea85199b40c1d637b1ed55f60113cf19b84b544a3e975dc2e04534f05,PodSandboxId:9d003a914dd5ba8e6709447a7ccdaaf70b846524f6e258c6e0da7e7d53ece3d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733434908604786336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7f734192-b575-49f2-8488-2e08e14d83e5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6,PodSandboxId:56e5a64605dbe821b5fbc7f5e704b2c25b5b0e11eca7fd6b0c83c6d8e098b94e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733434906241921791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mll8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcea0826-1093-43ce-87d0-26fb19447609,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4,PodSandboxId:8b1373a23f8337dee45a5b2207d04ce77cf26eb15c4b105698c92af8cb947d96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733434899182159017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: aabf9cc9-c416-4db2-97b0-23533dd76c28,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa,PodSandboxId:8b1373a23f8337dee45a5b2207d04ce77cf26eb15c4b105698c92af8cb947d96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733434898578875848,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: aabf9cc9-c416-4db2-97b0-23533dd76c28,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d,PodSandboxId:20ab7bb2040edb1d011d37784aea1661af162cbffe7317c581160c1ad1a07bf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733434898496525033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4ws4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2620959-e3e4-4575-af26
-243207a83495,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7,PodSandboxId:bb297f7199c472d8bf106e49137c92af9aed17c24f0a5e8bd46734144e2f9a10,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733434894007594885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a781f16b4aef7bf5ac0b18a81d3fe56,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5,PodSandboxId:a1686895467fd0475c3f9bbc904ee56c4014382b540049631331b334ac3a4b22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733434894002322433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb1306bd7c52f126431147d34dc0a3b9,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828,PodSandboxId:e119343ecc82aa38d4b5ded6ae3d75aafe40c2bf2179394792f6e97254caebad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733434893996921176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbfce6421a68ed116afc3485728da556,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a,PodSandboxId:09aebd00aa0803b6848384a4ec3e4cf3726e41ada8ab1a226bc538cb9c4bd0c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733434893992267903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea0710b5375ef6778cfbcb0941880
cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4e467978-d832-48c6-a389-12da2b6062ea name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.598604776Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=15f43f5d-203e-4702-a7ba-1b28c46946ae name=/runtime.v1.RuntimeService/Version
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.598743263Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=15f43f5d-203e-4702-a7ba-1b28c46946ae name=/runtime.v1.RuntimeService/Version
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.599611831Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7234867f-98ca-4b7d-9840-96640d6e5a95 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.600070374Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436184600046993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7234867f-98ca-4b7d-9840-96640d6e5a95 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.600577770Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ba75ee79-3d1e-43a7-9cfb-95fee3c82c43 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.600633163Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ba75ee79-3d1e-43a7-9cfb-95fee3c82c43 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.600867638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e62bb2ea85199b40c1d637b1ed55f60113cf19b84b544a3e975dc2e04534f05,PodSandboxId:9d003a914dd5ba8e6709447a7ccdaaf70b846524f6e258c6e0da7e7d53ece3d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733434908604786336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7f734192-b575-49f2-8488-2e08e14d83e5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6,PodSandboxId:56e5a64605dbe821b5fbc7f5e704b2c25b5b0e11eca7fd6b0c83c6d8e098b94e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733434906241921791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mll8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcea0826-1093-43ce-87d0-26fb19447609,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4,PodSandboxId:8b1373a23f8337dee45a5b2207d04ce77cf26eb15c4b105698c92af8cb947d96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733434899182159017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: aabf9cc9-c416-4db2-97b0-23533dd76c28,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa,PodSandboxId:8b1373a23f8337dee45a5b2207d04ce77cf26eb15c4b105698c92af8cb947d96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733434898578875848,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: aabf9cc9-c416-4db2-97b0-23533dd76c28,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d,PodSandboxId:20ab7bb2040edb1d011d37784aea1661af162cbffe7317c581160c1ad1a07bf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733434898496525033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4ws4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2620959-e3e4-4575-af26
-243207a83495,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7,PodSandboxId:bb297f7199c472d8bf106e49137c92af9aed17c24f0a5e8bd46734144e2f9a10,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733434894007594885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a781f16b4aef7bf5ac0b18a81d3fe56,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5,PodSandboxId:a1686895467fd0475c3f9bbc904ee56c4014382b540049631331b334ac3a4b22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733434894002322433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb1306bd7c52f126431147d34dc0a3b9,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828,PodSandboxId:e119343ecc82aa38d4b5ded6ae3d75aafe40c2bf2179394792f6e97254caebad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733434893996921176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbfce6421a68ed116afc3485728da556,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a,PodSandboxId:09aebd00aa0803b6848384a4ec3e4cf3726e41ada8ab1a226bc538cb9c4bd0c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733434893992267903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea0710b5375ef6778cfbcb0941880
cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=ba75ee79-3d1e-43a7-9cfb-95fee3c82c43 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.640896284Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=500fa301-e002-4759-b50f-ff130f56fb31 name=/runtime.v1.RuntimeService/Version
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.641003282Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=500fa301-e002-4759-b50f-ff130f56fb31 name=/runtime.v1.RuntimeService/Version
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.642993628Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6b2cfeb2-dece-4a60-8b72-4069be0b3958 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.643526859Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436184643496993,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6b2cfeb2-dece-4a60-8b72-4069be0b3958 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.644313157Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f5f5b2ce-d958-4d89-8096-0066487c8a05 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.644478929Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f5f5b2ce-d958-4d89-8096-0066487c8a05 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:03:04 default-k8s-diff-port-751353 crio[726]: time="2024-12-05 22:03:04.644824658Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4e62bb2ea85199b40c1d637b1ed55f60113cf19b84b544a3e975dc2e04534f05,PodSandboxId:9d003a914dd5ba8e6709447a7ccdaaf70b846524f6e258c6e0da7e7d53ece3d2,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1733434908604786336,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7f734192-b575-49f2-8488-2e08e14d83e5,},Annotations:map[string]string{io.kubernetes.container.hash: 35e73d3c,io.kubernetes.container.restartCount: 1,io.kubernete
s.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6,PodSandboxId:56e5a64605dbe821b5fbc7f5e704b2c25b5b0e11eca7fd6b0c83c6d8e098b94e,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733434906241921791,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-mll8z,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fcea0826-1093-43ce-87d0-26fb19447609,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4,PodSandboxId:8b1373a23f8337dee45a5b2207d04ce77cf26eb15c4b105698c92af8cb947d96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733434899182159017,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,
io.kubernetes.pod.uid: aabf9cc9-c416-4db2-97b0-23533dd76c28,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa,PodSandboxId:8b1373a23f8337dee45a5b2207d04ce77cf26eb15c4b105698c92af8cb947d96,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1733434898578875848,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: aabf9cc9-c416-4db2-97b0-23533dd76c28,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d,PodSandboxId:20ab7bb2040edb1d011d37784aea1661af162cbffe7317c581160c1ad1a07bf2,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733434898496525033,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-b4ws4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2620959-e3e4-4575-af26
-243207a83495,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7,PodSandboxId:bb297f7199c472d8bf106e49137c92af9aed17c24f0a5e8bd46734144e2f9a10,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733434894007594885,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a781f16b4aef7bf5ac0b18a81d3fe56,},Annotations:map[
string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5,PodSandboxId:a1686895467fd0475c3f9bbc904ee56c4014382b540049631331b334ac3a4b22,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733434894002322433,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb1306bd7c52f126431147d34dc0a3b9,},Annotations:map[st
ring]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828,PodSandboxId:e119343ecc82aa38d4b5ded6ae3d75aafe40c2bf2179394792f6e97254caebad,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733434893996921176,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cbfce6421a68ed116afc3485728da556,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a,PodSandboxId:09aebd00aa0803b6848384a4ec3e4cf3726e41ada8ab1a226bc538cb9c4bd0c1,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733434893992267903,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-751353,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ea0710b5375ef6778cfbcb0941880
cf,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f5f5b2ce-d958-4d89-8096-0066487c8a05 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	4e62bb2ea8519       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   21 minutes ago      Running             busybox                   1                   9d003a914dd5b       busybox
	d4ac290ffeedd       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                      21 minutes ago      Running             coredns                   1                   56e5a64605dbe       coredns-7c65d6cfc9-mll8z
	7befce79ea834       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       2                   8b1373a23f833       storage-provisioner
	37f783b4a3402       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Exited              storage-provisioner       1                   8b1373a23f833       storage-provisioner
	963fc5fe0f7ee       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38                                      21 minutes ago      Running             kube-proxy                1                   20ab7bb2040ed       kube-proxy-b4ws4
	035df011d5399       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                      21 minutes ago      Running             etcd                      1                   bb297f7199c47       etcd-default-k8s-diff-port-751353
	c0ddf1d7f97da       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856                                      21 minutes ago      Running             kube-scheduler            1                   a1686895467fd       kube-scheduler-default-k8s-diff-port-751353
	079fc145d3515       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173                                      21 minutes ago      Running             kube-apiserver            1                   e119343ecc82a       kube-apiserver-default-k8s-diff-port-751353
	807e6454204d4       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503                                      21 minutes ago      Running             kube-controller-manager   1                   09aebd00aa080       kube-controller-manager-default-k8s-diff-port-751353
	
	
	==> coredns [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:54520 - 38740 "HINFO IN 3167697831049979112.9156028796695991744. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.023726293s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-751353
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-751353
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=default-k8s-diff-port-751353
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T21_34_03_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 21:33:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-751353
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 22:03:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 22:02:30 +0000   Thu, 05 Dec 2024 21:33:58 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 22:02:30 +0000   Thu, 05 Dec 2024 21:33:58 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 22:02:30 +0000   Thu, 05 Dec 2024 21:33:58 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 22:02:30 +0000   Thu, 05 Dec 2024 21:41:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.106
	  Hostname:    default-k8s-diff-port-751353
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 bf8fa31002994e3bbd1b630b66bd1bb0
	  System UUID:                bf8fa310-0299-4e3b-bd1b-630b66bd1bb0
	  Boot ID:                    70c62d7e-3965-465e-be09-c9d4335900ea
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-7c65d6cfc9-mll8z                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-default-k8s-diff-port-751353                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-751353             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-751353    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-b4ws4                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-default-k8s-diff-port-751353             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-6867b74b74-xb867                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-751353 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-751353 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-751353 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-751353 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-751353 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-751353 status is now: NodeHasSufficientPID
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-751353 status is now: NodeReady
	  Normal  RegisteredNode           28m                node-controller  Node default-k8s-diff-port-751353 event: Registered Node default-k8s-diff-port-751353 in Controller
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-751353 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-751353 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-751353 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node default-k8s-diff-port-751353 event: Registered Node default-k8s-diff-port-751353 in Controller
	
	
	==> dmesg <==
	[Dec 5 21:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.050400] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.039583] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.032560] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.156592] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +1.574549] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.883258] systemd-fstab-generator[649]: Ignoring "noauto" option for root device
	[  +0.072479] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.070065] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +0.226647] systemd-fstab-generator[675]: Ignoring "noauto" option for root device
	[  +0.133826] systemd-fstab-generator[687]: Ignoring "noauto" option for root device
	[  +0.301970] systemd-fstab-generator[717]: Ignoring "noauto" option for root device
	[  +4.175176] systemd-fstab-generator[811]: Ignoring "noauto" option for root device
	[  +2.237863] systemd-fstab-generator[931]: Ignoring "noauto" option for root device
	[  +0.066644] kauditd_printk_skb: 158 callbacks suppressed
	[  +5.548187] kauditd_printk_skb: 69 callbacks suppressed
	[  +1.928214] systemd-fstab-generator[1609]: Ignoring "noauto" option for root device
	[  +3.772023] kauditd_printk_skb: 69 callbacks suppressed
	
	
	==> etcd [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7] <==
	{"level":"info","ts":"2024-12-05T21:41:53.834965Z","caller":"traceutil/trace.go:171","msg":"trace[1427262695] transaction","detail":"{read_only:false; response_revision:621; number_of_response:1; }","duration":"948.115695ms","start":"2024-12-05T21:41:52.886838Z","end":"2024-12-05T21:41:53.834954Z","steps":["trace[1427262695] 'process raft request'  (duration: 530.269593ms)","trace[1427262695] 'compare'  (duration: 416.656084ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-05T21:41:53.835456Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T21:41:52.886807Z","time spent":"948.591999ms","remote":"127.0.0.1:47084","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5732,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-751353\" mod_revision:620 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-751353\" value_size:5664 >> failure:<request_range:<key:\"/registry/pods/kube-system/etcd-default-k8s-diff-port-751353\" > >"}
	{"level":"warn","ts":"2024-12-05T21:41:53.835139Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"183.646199ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1128"}
	{"level":"info","ts":"2024-12-05T21:41:53.838846Z","caller":"traceutil/trace.go:171","msg":"trace[1540693106] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:621; }","duration":"187.34144ms","start":"2024-12-05T21:41:53.651485Z","end":"2024-12-05T21:41:53.838827Z","steps":["trace[1540693106] 'agreement among raft nodes before linearized reading'  (duration: 183.628736ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T21:41:54.355251Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"387.353452ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10938279879332373161 > lease_revoke:<id:17cc9398c568a125>","response":"size:27"}
	{"level":"info","ts":"2024-12-05T21:41:54.355391Z","caller":"traceutil/trace.go:171","msg":"trace[1499642662] linearizableReadLoop","detail":"{readStateIndex:658; appliedIndex:657; }","duration":"514.177155ms","start":"2024-12-05T21:41:53.841195Z","end":"2024-12-05T21:41:54.355372Z","steps":["trace[1499642662] 'read index received'  (duration: 126.614015ms)","trace[1499642662] 'applied index is now lower than readState.Index'  (duration: 387.559553ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-05T21:41:54.355691Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"514.424998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-751353\" ","response":"range_response_count:1 size:5537"}
	{"level":"info","ts":"2024-12-05T21:41:54.355749Z","caller":"traceutil/trace.go:171","msg":"trace[1702041016] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-751353; range_end:; response_count:1; response_revision:621; }","duration":"514.543024ms","start":"2024-12-05T21:41:53.841192Z","end":"2024-12-05T21:41:54.355735Z","steps":["trace[1702041016] 'agreement among raft nodes before linearized reading'  (duration: 514.332762ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T21:41:54.355787Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-05T21:41:53.841159Z","time spent":"514.617056ms","remote":"127.0.0.1:47068","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5559,"request content":"key:\"/registry/minions/default-k8s-diff-port-751353\" "}
	{"level":"info","ts":"2024-12-05T21:51:36.024039Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":874}
	{"level":"info","ts":"2024-12-05T21:51:36.033821Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":874,"took":"9.602281ms","hash":2051551245,"current-db-size-bytes":2801664,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":2801664,"current-db-size-in-use":"2.8 MB"}
	{"level":"info","ts":"2024-12-05T21:51:36.033875Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2051551245,"revision":874,"compact-revision":-1}
	{"level":"info","ts":"2024-12-05T21:56:36.032245Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1117}
	{"level":"info","ts":"2024-12-05T21:56:36.036419Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1117,"took":"3.654311ms","hash":1390492790,"current-db-size-bytes":2801664,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1650688,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2024-12-05T21:56:36.036508Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1390492790,"revision":1117,"compact-revision":874}
	{"level":"info","ts":"2024-12-05T22:01:36.039488Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1361}
	{"level":"info","ts":"2024-12-05T22:01:36.042955Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1361,"took":"3.168021ms","hash":20523030,"current-db-size-bytes":2801664,"current-db-size":"2.8 MB","current-db-size-in-use-bytes":1613824,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-12-05T22:01:36.043014Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":20523030,"revision":1361,"compact-revision":1117}
	{"level":"warn","ts":"2024-12-05T22:02:18.928596Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.229969ms","expected-duration":"100ms","prefix":"","request":"header:<ID:10938279879332381292 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.39.106\" mod_revision:1630 > success:<request_put:<key:\"/registry/masterleases/192.168.39.106\" value_size:67 lease:1714907842477605482 >> failure:<request_range:<key:\"/registry/masterleases/192.168.39.106\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-12-05T22:02:18.928933Z","caller":"traceutil/trace.go:171","msg":"trace[1194578233] linearizableReadLoop","detail":"{readStateIndex:1928; appliedIndex:1927; }","duration":"213.590921ms","start":"2024-12-05T22:02:18.715328Z","end":"2024-12-05T22:02:18.928919Z","steps":["trace[1194578233] 'read index received'  (duration: 82.927036ms)","trace[1194578233] 'applied index is now lower than readState.Index'  (duration: 130.662418ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-05T22:02:18.929056Z","caller":"traceutil/trace.go:171","msg":"trace[1017185239] transaction","detail":"{read_only:false; response_revision:1638; number_of_response:1; }","duration":"252.085698ms","start":"2024-12-05T22:02:18.676954Z","end":"2024-12-05T22:02:18.929040Z","steps":["trace[1017185239] 'process raft request'  (duration: 121.340746ms)","trace[1017185239] 'compare'  (duration: 130.126892ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-05T22:02:18.929181Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.823734ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/\" range_end:\"/registry/priorityclasses0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-12-05T22:02:18.929254Z","caller":"traceutil/trace.go:171","msg":"trace[1811029903] range","detail":"{range_begin:/registry/priorityclasses/; range_end:/registry/priorityclasses0; response_count:0; response_revision:1638; }","duration":"213.918964ms","start":"2024-12-05T22:02:18.715324Z","end":"2024-12-05T22:02:18.929243Z","steps":["trace[1811029903] 'agreement among raft nodes before linearized reading'  (duration: 213.703187ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-05T22:02:18.929329Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"130.904973ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-05T22:02:18.929372Z","caller":"traceutil/trace.go:171","msg":"trace[1567806978] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1638; }","duration":"130.951555ms","start":"2024-12-05T22:02:18.798414Z","end":"2024-12-05T22:02:18.929366Z","steps":["trace[1567806978] 'agreement among raft nodes before linearized reading'  (duration: 130.889384ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:03:04 up 21 min,  0 users,  load average: 0.14, 0.16, 0.11
	Linux default-k8s-diff-port-751353 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828] <==
	I1205 21:59:38.322304       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:59:38.322439       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 22:01:37.319288       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 22:01:37.319380       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1205 22:01:38.321718       1 handler_proxy.go:99] no RequestInfo found in the context
	W1205 22:01:38.321718       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 22:01:38.321912       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1205 22:01:38.321964       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 22:01:38.323088       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 22:01:38.323150       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 22:02:38.323749       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 22:02:38.323917       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1205 22:02:38.323800       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 22:02:38.324047       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1205 22:02:38.325142       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 22:02:38.325183       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a] <==
	E1205 21:57:40.827896       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:57:41.526034       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 21:58:05.131636       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="93.855µs"
	E1205 21:58:10.833606       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:58:11.533470       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 21:58:16.131750       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="55.751µs"
	E1205 21:58:40.839616       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:58:41.540606       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:59:10.846315       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:59:11.549066       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:59:40.852282       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:59:41.556892       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 22:00:10.858003       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 22:00:11.563956       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 22:00:40.863577       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 22:00:41.573119       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 22:01:10.868895       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 22:01:11.581043       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 22:01:40.875356       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 22:01:41.589193       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 22:02:10.881882       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 22:02:11.599534       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 22:02:30.796505       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="default-k8s-diff-port-751353"
	E1205 22:02:40.887956       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 22:02:41.610233       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 21:41:38.780116       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 21:41:38.794762       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.39.106"]
	E1205 21:41:38.795855       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 21:41:38.862879       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 21:41:38.862966       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 21:41:38.863000       1 server_linux.go:169] "Using iptables Proxier"
	I1205 21:41:38.870238       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 21:41:38.870494       1 server.go:483] "Version info" version="v1.31.2"
	I1205 21:41:38.870518       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 21:41:38.876992       1 config.go:199] "Starting service config controller"
	I1205 21:41:38.877046       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 21:41:38.877092       1 config.go:105] "Starting endpoint slice config controller"
	I1205 21:41:38.877109       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 21:41:38.885711       1 config.go:328] "Starting node config controller"
	I1205 21:41:38.885745       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 21:41:38.977290       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 21:41:38.977303       1 shared_informer.go:320] Caches are synced for service config
	I1205 21:41:38.986437       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5] <==
	I1205 21:41:35.076157       1 serving.go:386] Generated self-signed cert in-memory
	W1205 21:41:37.252913       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1205 21:41:37.252993       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1205 21:41:37.253003       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1205 21:41:37.253009       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1205 21:41:37.310536       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1205 21:41:37.310577       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 21:41:37.317842       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1205 21:41:37.320706       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1205 21:41:37.320773       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1205 21:41:37.320810       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1205 21:41:37.421738       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 22:02:13 default-k8s-diff-port-751353 kubelet[938]: E1205 22:02:13.375408     938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436133375039410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:02:13 default-k8s-diff-port-751353 kubelet[938]: E1205 22:02:13.375442     938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436133375039410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:02:15 default-k8s-diff-port-751353 kubelet[938]: E1205 22:02:15.112912     938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xb867" podUID="6ac4cc31-ed56-44b9-9a83-76296436bc34"
	Dec 05 22:02:23 default-k8s-diff-port-751353 kubelet[938]: E1205 22:02:23.377072     938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436143376571794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:02:23 default-k8s-diff-port-751353 kubelet[938]: E1205 22:02:23.377107     938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436143376571794,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:02:30 default-k8s-diff-port-751353 kubelet[938]: E1205 22:02:30.112358     938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xb867" podUID="6ac4cc31-ed56-44b9-9a83-76296436bc34"
	Dec 05 22:02:33 default-k8s-diff-port-751353 kubelet[938]: E1205 22:02:33.126187     938 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 22:02:33 default-k8s-diff-port-751353 kubelet[938]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 22:02:33 default-k8s-diff-port-751353 kubelet[938]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 22:02:33 default-k8s-diff-port-751353 kubelet[938]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 22:02:33 default-k8s-diff-port-751353 kubelet[938]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 22:02:33 default-k8s-diff-port-751353 kubelet[938]: E1205 22:02:33.382761     938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436153381845078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:02:33 default-k8s-diff-port-751353 kubelet[938]: E1205 22:02:33.382785     938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436153381845078,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:02:41 default-k8s-diff-port-751353 kubelet[938]: E1205 22:02:41.112072     938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xb867" podUID="6ac4cc31-ed56-44b9-9a83-76296436bc34"
	Dec 05 22:02:43 default-k8s-diff-port-751353 kubelet[938]: E1205 22:02:43.385701     938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436163385194023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:02:43 default-k8s-diff-port-751353 kubelet[938]: E1205 22:02:43.385776     938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436163385194023,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:02:53 default-k8s-diff-port-751353 kubelet[938]: E1205 22:02:53.386980     938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436173386610377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:02:53 default-k8s-diff-port-751353 kubelet[938]: E1205 22:02:53.387006     938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436173386610377,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:02:54 default-k8s-diff-port-751353 kubelet[938]: E1205 22:02:54.125017     938 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 05 22:02:54 default-k8s-diff-port-751353 kubelet[938]: E1205 22:02:54.125174     938 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 05 22:02:54 default-k8s-diff-port-751353 kubelet[938]: E1205 22:02:54.125782     938 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glkc8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPr
opagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:
nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-xb867_kube-system(6ac4cc31-ed56-44b9-9a83-76296436bc34): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Dec 05 22:02:54 default-k8s-diff-port-751353 kubelet[938]: E1205 22:02:54.127154     938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-xb867" podUID="6ac4cc31-ed56-44b9-9a83-76296436bc34"
	Dec 05 22:03:03 default-k8s-diff-port-751353 kubelet[938]: E1205 22:03:03.388801     938 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436183388224138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:03:03 default-k8s-diff-port-751353 kubelet[938]: E1205 22:03:03.390304     938 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436183388224138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:03:05 default-k8s-diff-port-751353 kubelet[938]: E1205 22:03:05.121396     938 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-xb867" podUID="6ac4cc31-ed56-44b9-9a83-76296436bc34"
	
	
	==> storage-provisioner [37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa] <==
	I1205 21:41:38.716764       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1205 21:41:38.718483       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4] <==
	I1205 21:41:39.255930       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 21:41:39.280708       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 21:41:39.280788       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 21:41:56.871247       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 21:41:56.871456       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-751353_b5b8d612-8e7c-4fc5-b985-dbe7d0086386!
	I1205 21:41:56.873150       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"c4dde28b-76d4-40f2-9ca4-c00393ecc5f1", APIVersion:"v1", ResourceVersion:"622", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-751353_b5b8d612-8e7c-4fc5-b985-dbe7d0086386 became leader
	I1205 21:41:56.972693       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-751353_b5b8d612-8e7c-4fc5-b985-dbe7d0086386!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-751353 -n default-k8s-diff-port-751353
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-751353 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-xb867
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-751353 describe pod metrics-server-6867b74b74-xb867
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-751353 describe pod metrics-server-6867b74b74-xb867: exit status 1 (69.323042ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-xb867" not found

** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-751353 describe pod metrics-server-6867b74b74-xb867: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (474.99s)

x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (403.21s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-500648 -n no-preload-500648
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-05 22:02:55.245619143 +0000 UTC m=+6217.220211811
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-500648 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-500648 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.695µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-500648 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-500648 -n no-preload-500648
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-500648 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-500648 logs -n 25: (1.17607828s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p bridge-279893                                       | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	| start   | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:34 UTC |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-425614            | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-425614                                  | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-500648             | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC | 05 Dec 24 21:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-500648                                   | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-751353  | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC | 05 Dec 24 21:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC |                     |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-425614                 | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-425614                                  | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC | 05 Dec 24 21:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-601806        | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-500648                  | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-751353       | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-500648                                   | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC | 05 Dec 24 21:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:37 UTC | 05 Dec 24 21:46 UTC |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-601806                              | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC | 05 Dec 24 21:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-601806             | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC | 05 Dec 24 21:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-601806                              | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-601806                              | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 22:01 UTC | 05 Dec 24 22:01 UTC |
	| start   | -p newest-cni-185514 --memory=2200 --alsologtostderr   | newest-cni-185514            | jenkins | v1.34.0 | 05 Dec 24 22:01 UTC | 05 Dec 24 22:02 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| delete  | -p embed-certs-425614                                  | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 22:02 UTC | 05 Dec 24 22:02 UTC |
	| addons  | enable metrics-server -p newest-cni-185514             | newest-cni-185514            | jenkins | v1.34.0 | 05 Dec 24 22:02 UTC | 05 Dec 24 22:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-185514                                   | newest-cni-185514            | jenkins | v1.34.0 | 05 Dec 24 22:02 UTC | 05 Dec 24 22:02 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-185514                  | newest-cni-185514            | jenkins | v1.34.0 | 05 Dec 24 22:02 UTC | 05 Dec 24 22:02 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-185514 --memory=2200 --alsologtostderr   | newest-cni-185514            | jenkins | v1.34.0 | 05 Dec 24 22:02 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
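
	The final start entry above is the invocation captured in the "Last Start" log that follows. To reproduce this run outside CI, the same flags can be replayed against a fresh profile (a sketch assuming a host with the kvm2 driver and libvirt installed; every flag is copied verbatim from the table):

	    minikube start -p newest-cni-185514 --memory=2200 --alsologtostderr \
	      --wait=apiserver,system_pods,default_sa \
	      --feature-gates ServerSideApply=true \
	      --network-plugin=cni \
	      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	      --driver=kvm2 --container-runtime=crio \
	      --kubernetes-version=v1.31.2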
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 22:02:41
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 22:02:41.566747  365282 out.go:345] Setting OutFile to fd 1 ...
	I1205 22:02:41.566876  365282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 22:02:41.566886  365282 out.go:358] Setting ErrFile to fd 2...
	I1205 22:02:41.566890  365282 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 22:02:41.567062  365282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 22:02:41.567626  365282 out.go:352] Setting JSON to false
	I1205 22:02:41.568627  365282 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":17110,"bootTime":1733419052,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 22:02:41.568760  365282 start.go:139] virtualization: kvm guest
	I1205 22:02:41.571051  365282 out.go:177] * [newest-cni-185514] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 22:02:41.572514  365282 notify.go:220] Checking for updates...
	I1205 22:02:41.572546  365282 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 22:02:41.574187  365282 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 22:02:41.575510  365282 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 22:02:41.576795  365282 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 22:02:41.578099  365282 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 22:02:41.579376  365282 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 22:02:41.581092  365282 config.go:182] Loaded profile config "newest-cni-185514": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 22:02:41.581567  365282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 22:02:41.581646  365282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 22:02:41.597932  365282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39407
	I1205 22:02:41.598598  365282 main.go:141] libmachine: () Calling .GetVersion
	I1205 22:02:41.599266  365282 main.go:141] libmachine: Using API Version  1
	I1205 22:02:41.599291  365282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 22:02:41.599682  365282 main.go:141] libmachine: () Calling .GetMachineName
	I1205 22:02:41.599887  365282 main.go:141] libmachine: (newest-cni-185514) Calling .DriverName
	I1205 22:02:41.600198  365282 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 22:02:41.600657  365282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 22:02:41.600717  365282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 22:02:41.616423  365282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45157
	I1205 22:02:41.616897  365282 main.go:141] libmachine: () Calling .GetVersion
	I1205 22:02:41.617462  365282 main.go:141] libmachine: Using API Version  1
	I1205 22:02:41.617492  365282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 22:02:41.617917  365282 main.go:141] libmachine: () Calling .GetMachineName
	I1205 22:02:41.618159  365282 main.go:141] libmachine: (newest-cni-185514) Calling .DriverName
	I1205 22:02:41.659009  365282 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 22:02:41.660363  365282 start.go:297] selected driver: kvm2
	I1205 22:02:41.660387  365282 start.go:901] validating driver "kvm2" against &{Name:newest-cni-185514 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.31.2 ClusterName:newest-cni-185514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.210 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] St
artHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 22:02:41.660527  365282 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 22:02:41.661424  365282 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 22:02:41.661508  365282 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 22:02:41.678498  365282 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 22:02:41.679052  365282 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 22:02:41.679097  365282 cni.go:84] Creating CNI manager for ""
	I1205 22:02:41.679156  365282 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 22:02:41.679208  365282 start.go:340] cluster config:
	{Name:newest-cni-185514 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-185514 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.210 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network
: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 22:02:41.679359  365282 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 22:02:41.681324  365282 out.go:177] * Starting "newest-cni-185514" primary control-plane node in "newest-cni-185514" cluster
	I1205 22:02:41.682593  365282 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 22:02:41.682657  365282 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 22:02:41.682675  365282 cache.go:56] Caching tarball of preloaded images
	I1205 22:02:41.682791  365282 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 22:02:41.682802  365282 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 22:02:41.682927  365282 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/newest-cni-185514/config.json ...
	I1205 22:02:41.683199  365282 start.go:360] acquireMachinesLock for newest-cni-185514: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 22:02:41.683281  365282 start.go:364] duration metric: took 40.843µs to acquireMachinesLock for "newest-cni-185514"
	I1205 22:02:41.683300  365282 start.go:96] Skipping create...Using existing machine configuration
	I1205 22:02:41.683305  365282 fix.go:54] fixHost starting: 
	I1205 22:02:41.683605  365282 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 22:02:41.683648  365282 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 22:02:41.699214  365282 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43155
	I1205 22:02:41.699778  365282 main.go:141] libmachine: () Calling .GetVersion
	I1205 22:02:41.700345  365282 main.go:141] libmachine: Using API Version  1
	I1205 22:02:41.700372  365282 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 22:02:41.700748  365282 main.go:141] libmachine: () Calling .GetMachineName
	I1205 22:02:41.700962  365282 main.go:141] libmachine: (newest-cni-185514) Calling .DriverName
	I1205 22:02:41.701145  365282 main.go:141] libmachine: (newest-cni-185514) Calling .GetState
	I1205 22:02:41.703048  365282 fix.go:112] recreateIfNeeded on newest-cni-185514: state=Stopped err=<nil>
	I1205 22:02:41.703100  365282 main.go:141] libmachine: (newest-cni-185514) Calling .DriverName
	W1205 22:02:41.703322  365282 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 22:02:41.705169  365282 out.go:177] * Restarting existing kvm2 VM for "newest-cni-185514" ...
	I1205 22:02:41.706352  365282 main.go:141] libmachine: (newest-cni-185514) Calling .Start
	I1205 22:02:41.706593  365282 main.go:141] libmachine: (newest-cni-185514) Ensuring networks are active...
	I1205 22:02:41.707601  365282 main.go:141] libmachine: (newest-cni-185514) Ensuring network default is active
	I1205 22:02:41.708024  365282 main.go:141] libmachine: (newest-cni-185514) Ensuring network mk-newest-cni-185514 is active
	I1205 22:02:41.708481  365282 main.go:141] libmachine: (newest-cni-185514) Getting domain xml...
	I1205 22:02:41.709328  365282 main.go:141] libmachine: (newest-cni-185514) Creating domain...
	I1205 22:02:42.998023  365282 main.go:141] libmachine: (newest-cni-185514) Waiting to get IP...
	I1205 22:02:42.998891  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:42.999294  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:42.999411  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:42.999275  365323 retry.go:31] will retry after 260.11984ms: waiting for machine to come up
	I1205 22:02:43.260945  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:43.261509  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:43.261550  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:43.261453  365323 retry.go:31] will retry after 310.809568ms: waiting for machine to come up
	I1205 22:02:43.574214  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:43.574789  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:43.574820  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:43.574730  365323 retry.go:31] will retry after 363.850051ms: waiting for machine to come up
	I1205 22:02:43.940354  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:43.940906  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:43.940930  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:43.940845  365323 retry.go:31] will retry after 474.321777ms: waiting for machine to come up
	I1205 22:02:44.416353  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:44.416890  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:44.416924  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:44.416841  365323 retry.go:31] will retry after 529.8788ms: waiting for machine to come up
	I1205 22:02:44.948310  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:44.948835  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:44.948865  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:44.948778  365323 retry.go:31] will retry after 666.109954ms: waiting for machine to come up
	I1205 22:02:45.616162  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:45.616649  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:45.616679  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:45.616604  365323 retry.go:31] will retry after 906.29229ms: waiting for machine to come up
	I1205 22:02:46.524699  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:46.525141  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:46.525172  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:46.525079  365323 retry.go:31] will retry after 1.189512655s: waiting for machine to come up
	I1205 22:02:47.716509  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:47.717051  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:47.717099  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:47.717005  365323 retry.go:31] will retry after 1.446137981s: waiting for machine to come up
	I1205 22:02:49.165687  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:49.166281  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:49.166315  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:49.166222  365323 retry.go:31] will retry after 1.483394504s: waiting for machine to come up
	I1205 22:02:50.652111  365282 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:02:50.652694  365282 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:02:50.652724  365282 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:02:50.652646  365323 retry.go:31] will retry after 1.970602566s: waiting for machine to come up
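
	The retry lines above show the kvm2 driver restarting the existing libvirt domain and then polling for a DHCP lease before it can SSH into the guest. When a start stalls at this stage, the domain and its networks can be inspected directly with virsh (a sketch assuming virsh is installed on the host; the domain and network names are taken from the log above):

	    virsh -c qemu:///system list --all                            # is newest-cni-185514 defined and running?
	    virsh -c qemu:///system domifaddr newest-cni-185514           # current interface addresses, if any
	    virsh -c qemu:///system net-dhcp-leases mk-newest-cni-185514  # DHCP leases handed out on the profile network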
	
	
	==> CRI-O <==
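	The excerpt below shows CRI-O on no-preload-500648 answering periodic CRI requests (ListPodSandbox, ListContainers, Version, ImageFsInfo) over the runtime socket. The same state can be queried by hand on the node with crictl (a sketch assuming SSH access through the profile and that crictl needs sudo on the guest, as is usual for minikube VMs):

	    minikube -p no-preload-500648 ssh -- sudo crictl version      # RuntimeService/Version
	    minikube -p no-preload-500648 ssh -- sudo crictl pods         # RuntimeService/ListPodSandbox
	    minikube -p no-preload-500648 ssh -- sudo crictl ps -a        # RuntimeService/ListContainers
	    minikube -p no-preload-500648 ssh -- sudo crictl imagefsinfo  # ImageService/ImageFsInfo
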
	Dec 05 22:02:55 no-preload-500648 crio[687]: time="2024-12-05 22:02:55.903661601Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:15490fb312b07ccea32025a4a72c58459b6d4e9a5bb3597f12e089e98a5ec391,Metadata:&PodSandboxMetadata{Name:metrics-server-6867b74b74-ftmzl,Uid:c541d531-1622-4528-af4c-f6147f47e8f5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733435218045032914,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-6867b74b74-ftmzl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c541d531-1622-4528-af4c-f6147f47e8f5,k8s-app: metrics-server,pod-template-hash: 6867b74b74,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T21:46:57.724415921Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:538515779b7919230d18ba35bffc18b0865175f928a0a65684ca783b5f4f020b,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-tmd2t,Uid:e3e98611-66c3-4647-8870-bff5ff6ec5
96,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733435218012736751,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-tmd2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3e98611-66c3-4647-8870-bff5ff6ec596,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T21:46:56.798956893Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:012977c74613bb1720da7e0d5acbe081adab153776167f865d95643a9668b44a,Metadata:&PodSandboxMetadata{Name:coredns-7c65d6cfc9-6gw87,Uid:5551f12d-28e2-4abc-aa12-df5e94a50df9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733435217981284804,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-7c65d6cfc9-6gw87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5551f12d-28e2-4abc-aa12-df5e94a50df9,k8s-app: kube-dns,pod-template-hash: 7c65d6cfc9,},Annotations:ma
p[string]string{kubernetes.io/config.seen: 2024-12-05T21:46:56.769074890Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:361a95cd215fb403e99c3fa2e404f5038202484c464d4a51199859a79da4b1c9,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:62bd3876-3f92-4cc1-9e07-860628e8a746,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733435217927413669,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62bd3876-3f92-4cc1-9e07-860628e8a746,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[
{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-12-05T21:46:57.620490301Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:52dc5528b975b42aceb0813ac4cc0e6c8ae5b338bee5ffd91c7bce5f9f471b6a,Metadata:&PodSandboxMetadata{Name:kube-proxy-98xqk,Uid:4b383ba3-46c2-45df-9035-270593e44817,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733435216952628137,Labels:map[string]string{controller-revision-hash: 77987969cc,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-98xqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b383ba3-46c2-45df-9035-270593e44817,k8s-app: kube-proxy,pod-temp
late-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T21:46:56.633085062Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:1c91ea3707be6b24dc66b0fd8838f8c56b0bf74fb3879962f8ecac761edce6f1,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-500648,Uid:8f9ba4fbfce2011ed6b44c9b7b199059,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1733435206179383805,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9ba4fbfce2011ed6b44c9b7b199059,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.141:8443,kubernetes.io/config.hash: 8f9ba4fbfce2011ed6b44c9b7b199059,kubernetes.io/config.seen: 2024-12-05T21:46:45.717381589Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6cdc57b59a4dd01b027825e47904136
98cfaf9c8b274b304d325f689d39ba9e9,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-500648,Uid:3a2b3c191ea04e6e57d1e374543e8cd8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733435206173949548,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2b3c191ea04e6e57d1e374543e8cd8,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3a2b3c191ea04e6e57d1e374543e8cd8,kubernetes.io/config.seen: 2024-12-05T21:46:45.717383194Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:bae9f949b109ccc37f6c406b7dc95396ab0cb00c9a3166f7deebab8fa8c9512d,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-500648,Uid:74f0143772b58305ead4f000b0489269,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733435206172779604,Labels:map[string]string{component: kube-sch
eduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f0143772b58305ead4f000b0489269,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 74f0143772b58305ead4f000b0489269,kubernetes.io/config.seen: 2024-12-05T21:46:45.717384435Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e573fc3e5a5a8a68cb60239e84214b829b46be3f04f629b8a5ee432a6335188f,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-500648,Uid:534df59254648301964f51a82b53e9f5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1733435206145850157,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534df59254648301964f51a82b53e9f5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.50.141:237
9,kubernetes.io/config.hash: 534df59254648301964f51a82b53e9f5,kubernetes.io/config.seen: 2024-12-05T21:46:45.717377794Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7195904c388be7d68e545b2a9779552d18c82ad355bddbd21da183180b38ec1f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-500648,Uid:8f9ba4fbfce2011ed6b44c9b7b199059,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1733434884382824506,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9ba4fbfce2011ed6b44c9b7b199059,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.141:8443,kubernetes.io/config.hash: 8f9ba4fbfce2011ed6b44c9b7b199059,kubernetes.io/config.seen: 2024-12-05T21:41:23.883355886Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/inter
ceptors.go:74" id=23b700d9-b9b2-43ec-b5d9-0c83670eb593 name=/runtime.v1.RuntimeService/ListPodSandbox
	Dec 05 22:02:55 no-preload-500648 crio[687]: time="2024-12-05 22:02:55.904740589Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dd61897b-bbbf-4f34-9f81-28a5dda48574 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:02:55 no-preload-500648 crio[687]: time="2024-12-05 22:02:55.904817879Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dd61897b-bbbf-4f34-9f81-28a5dda48574 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:02:55 no-preload-500648 crio[687]: time="2024-12-05 22:02:55.905058209Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:087e0b2f7a7dfc800c8489c6e4915feccbbb0a7180d0fc60e81d83e6159bfdca,PodSandboxId:012977c74613bb1720da7e0d5acbe081adab153776167f865d95643a9668b44a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435218547276635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6gw87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5551f12d-28e2-4abc-aa12-df5e94a50df9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d95068628f7a4f75baa7cdeb056c7635904ec594d5dbe087c63f0630b935a74,PodSandboxId:538515779b7919230d18ba35bffc18b0865175f928a0a65684ca783b5f4f020b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435218479041686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tmd2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e3e98611-66c3-4647-8870-bff5ff6ec596,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe59d0c476ff36531824d44d43a5d606de14f86f4a9f33b8d3ff0638d6366609,PodSandboxId:361a95cd215fb403e99c3fa2e404f5038202484c464d4a51199859a79da4b1c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1733435218059351573,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62bd3876-3f92-4cc1-9e07-860628e8a746,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d868f29f315f449c4cbd7111ed57dfac7aacb0b35d2cb453b082fc6807ef391,PodSandboxId:52dc5528b975b42aceb0813ac4cc0e6c8ae5b338bee5ffd91c7bce5f9f471b6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733435217188604884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-98xqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b383ba3-46c2-45df-9035-270593e44817,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76e8165328f382874c555fada9f0ce608c5cee4f310fc9c81700165ece5cda45,PodSandboxId:1c91ea3707be6b24dc66b0fd8838f8c56b0bf74fb3879962f8ecac761edce6f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733435206402788511,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9ba4fbfce2011ed6b44c9b7b199059,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bf7c136a1c6a5406b8ed1d5e21edd95f345b1425a19fe359a7e4fb41b92b3f1,PodSandboxId:6cdc57b59a4dd01b027825e4790413698cfaf9c8b274b304d325f689d39ba9e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733435206361713539,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2b3c191ea04e6e57d1e374543e8cd8,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b609bf884c7b1a9f9646257bab7c927b2c904925e5374ef393008dcc69ffb9ff,PodSandboxId:bae9f949b109ccc37f6c406b7dc95396ab0cb00c9a3166f7deebab8fa8c9512d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733435206330927431,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f0143772b58305ead4f000b0489269,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891d12f2aecd42662af6bfad075f5aa3e2f96677f0ffe96f6e10591fe9a2c43d,PodSandboxId:e573fc3e5a5a8a68cb60239e84214b829b46be3f04f629b8a5ee432a6335188f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733435206281976571,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534df59254648301964f51a82b53e9f5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c9db624e744ddc44c62936546527303f9c95c606ad4be4cc28baae923d15c0,PodSandboxId:7195904c388be7d68e545b2a9779552d18c82ad355bddbd21da183180b38ec1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733434906069473348,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9ba4fbfce2011ed6b44c9b7b199059,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=dd61897b-bbbf-4f34-9f81-28a5dda48574 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:02:55 no-preload-500648 crio[687]: time="2024-12-05 22:02:55.916772280Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b4f46162-64f7-4279-891f-08591976ff7f name=/runtime.v1.RuntimeService/Version
	Dec 05 22:02:55 no-preload-500648 crio[687]: time="2024-12-05 22:02:55.916909256Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b4f46162-64f7-4279-891f-08591976ff7f name=/runtime.v1.RuntimeService/Version
	Dec 05 22:02:55 no-preload-500648 crio[687]: time="2024-12-05 22:02:55.917989848Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bc75a5fc-151f-41d4-bf40-89210bef63eb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:02:55 no-preload-500648 crio[687]: time="2024-12-05 22:02:55.918346440Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436175918325260,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bc75a5fc-151f-41d4-bf40-89210bef63eb name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:02:55 no-preload-500648 crio[687]: time="2024-12-05 22:02:55.918928484Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=33415aa0-3b78-43e9-b4a7-6403a2649fec name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:02:55 no-preload-500648 crio[687]: time="2024-12-05 22:02:55.918999071Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=33415aa0-3b78-43e9-b4a7-6403a2649fec name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:02:55 no-preload-500648 crio[687]: time="2024-12-05 22:02:55.919205399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:087e0b2f7a7dfc800c8489c6e4915feccbbb0a7180d0fc60e81d83e6159bfdca,PodSandboxId:012977c74613bb1720da7e0d5acbe081adab153776167f865d95643a9668b44a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435218547276635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6gw87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5551f12d-28e2-4abc-aa12-df5e94a50df9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d95068628f7a4f75baa7cdeb056c7635904ec594d5dbe087c63f0630b935a74,PodSandboxId:538515779b7919230d18ba35bffc18b0865175f928a0a65684ca783b5f4f020b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435218479041686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tmd2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e3e98611-66c3-4647-8870-bff5ff6ec596,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe59d0c476ff36531824d44d43a5d606de14f86f4a9f33b8d3ff0638d6366609,PodSandboxId:361a95cd215fb403e99c3fa2e404f5038202484c464d4a51199859a79da4b1c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1733435218059351573,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62bd3876-3f92-4cc1-9e07-860628e8a746,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d868f29f315f449c4cbd7111ed57dfac7aacb0b35d2cb453b082fc6807ef391,PodSandboxId:52dc5528b975b42aceb0813ac4cc0e6c8ae5b338bee5ffd91c7bce5f9f471b6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733435217188604884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-98xqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b383ba3-46c2-45df-9035-270593e44817,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76e8165328f382874c555fada9f0ce608c5cee4f310fc9c81700165ece5cda45,PodSandboxId:1c91ea3707be6b24dc66b0fd8838f8c56b0bf74fb3879962f8ecac761edce6f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733435206402788511,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9ba4fbfce2011ed6b44c9b7b199059,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bf7c136a1c6a5406b8ed1d5e21edd95f345b1425a19fe359a7e4fb41b92b3f1,PodSandboxId:6cdc57b59a4dd01b027825e4790413698cfaf9c8b274b304d325f689d39ba9e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733435206361713539,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2b3c191ea04e6e57d1e374543e8cd8,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b609bf884c7b1a9f9646257bab7c927b2c904925e5374ef393008dcc69ffb9ff,PodSandboxId:bae9f949b109ccc37f6c406b7dc95396ab0cb00c9a3166f7deebab8fa8c9512d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733435206330927431,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f0143772b58305ead4f000b0489269,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891d12f2aecd42662af6bfad075f5aa3e2f96677f0ffe96f6e10591fe9a2c43d,PodSandboxId:e573fc3e5a5a8a68cb60239e84214b829b46be3f04f629b8a5ee432a6335188f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733435206281976571,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534df59254648301964f51a82b53e9f5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c9db624e744ddc44c62936546527303f9c95c606ad4be4cc28baae923d15c0,PodSandboxId:7195904c388be7d68e545b2a9779552d18c82ad355bddbd21da183180b38ec1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733434906069473348,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9ba4fbfce2011ed6b44c9b7b199059,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=33415aa0-3b78-43e9-b4a7-6403a2649fec name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:02:55 no-preload-500648 crio[687]: time="2024-12-05 22:02:55.956152082Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c2ff5518-fe45-4647-8bc7-e05982159765 name=/runtime.v1.RuntimeService/Version
	Dec 05 22:02:55 no-preload-500648 crio[687]: time="2024-12-05 22:02:55.956238576Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c2ff5518-fe45-4647-8bc7-e05982159765 name=/runtime.v1.RuntimeService/Version
	Dec 05 22:02:55 no-preload-500648 crio[687]: time="2024-12-05 22:02:55.957776181Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d580acc5-208d-4b4e-a996-9cbe72a9f667 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:02:55 no-preload-500648 crio[687]: time="2024-12-05 22:02:55.958181864Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436175958157481,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d580acc5-208d-4b4e-a996-9cbe72a9f667 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:02:55 no-preload-500648 crio[687]: time="2024-12-05 22:02:55.958713109Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3bc330d7-259d-4197-a253-f6e66ba27836 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:02:55 no-preload-500648 crio[687]: time="2024-12-05 22:02:55.958797888Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3bc330d7-259d-4197-a253-f6e66ba27836 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:02:55 no-preload-500648 crio[687]: time="2024-12-05 22:02:55.959052512Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:087e0b2f7a7dfc800c8489c6e4915feccbbb0a7180d0fc60e81d83e6159bfdca,PodSandboxId:012977c74613bb1720da7e0d5acbe081adab153776167f865d95643a9668b44a,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435218547276635,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-6gw87,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5551f12d-28e2-4abc-aa12-df5e94a50df9,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9d95068628f7a4f75baa7cdeb056c7635904ec594d5dbe087c63f0630b935a74,PodSandboxId:538515779b7919230d18ba35bffc18b0865175f928a0a65684ca783b5f4f020b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435218479041686,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-tmd2t,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: e3e98611-66c3-4647-8870-bff5ff6ec596,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fe59d0c476ff36531824d44d43a5d606de14f86f4a9f33b8d3ff0638d6366609,PodSandboxId:361a95cd215fb403e99c3fa2e404f5038202484c464d4a51199859a79da4b1c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAIN
ER_RUNNING,CreatedAt:1733435218059351573,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62bd3876-3f92-4cc1-9e07-860628e8a746,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5d868f29f315f449c4cbd7111ed57dfac7aacb0b35d2cb453b082fc6807ef391,PodSandboxId:52dc5528b975b42aceb0813ac4cc0e6c8ae5b338bee5ffd91c7bce5f9f471b6a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:
1733435217188604884,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-98xqk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4b383ba3-46c2-45df-9035-270593e44817,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:76e8165328f382874c555fada9f0ce608c5cee4f310fc9c81700165ece5cda45,PodSandboxId:1c91ea3707be6b24dc66b0fd8838f8c56b0bf74fb3879962f8ecac761edce6f1,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733435206402788511,Labels:ma
p[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9ba4fbfce2011ed6b44c9b7b199059,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bf7c136a1c6a5406b8ed1d5e21edd95f345b1425a19fe359a7e4fb41b92b3f1,PodSandboxId:6cdc57b59a4dd01b027825e4790413698cfaf9c8b274b304d325f689d39ba9e9,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733435206361713539,Labels:map[st
ring]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3a2b3c191ea04e6e57d1e374543e8cd8,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b609bf884c7b1a9f9646257bab7c927b2c904925e5374ef393008dcc69ffb9ff,PodSandboxId:bae9f949b109ccc37f6c406b7dc95396ab0cb00c9a3166f7deebab8fa8c9512d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733435206330927431,Labels:m
ap[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 74f0143772b58305ead4f000b0489269,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:891d12f2aecd42662af6bfad075f5aa3e2f96677f0ffe96f6e10591fe9a2c43d,PodSandboxId:e573fc3e5a5a8a68cb60239e84214b829b46be3f04f629b8a5ee432a6335188f,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733435206281976571,Labels:map[string]string{io.kub
ernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 534df59254648301964f51a82b53e9f5,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:19c9db624e744ddc44c62936546527303f9c95c606ad4be4cc28baae923d15c0,PodSandboxId:7195904c388be7d68e545b2a9779552d18c82ad355bddbd21da183180b38ec1f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733434906069473348,Labels:map[string]string{io.kubernetes.container.name: kube-apise
rver,io.kubernetes.pod.name: kube-apiserver-no-preload-500648,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f9ba4fbfce2011ed6b44c9b7b199059,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3bc330d7-259d-4197-a253-f6e66ba27836 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	087e0b2f7a7df       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   012977c74613b       coredns-7c65d6cfc9-6gw87
	9d95068628f7a       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   15 minutes ago      Running             coredns                   0                   538515779b791       coredns-7c65d6cfc9-tmd2t
	fe59d0c476ff3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   361a95cd215fb       storage-provisioner
	5d868f29f315f       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   15 minutes ago      Running             kube-proxy                0                   52dc5528b975b       kube-proxy-98xqk
	76e8165328f38       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   16 minutes ago      Running             kube-apiserver            3                   1c91ea3707be6       kube-apiserver-no-preload-500648
	3bf7c136a1c6a       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   16 minutes ago      Running             kube-controller-manager   3                   6cdc57b59a4dd       kube-controller-manager-no-preload-500648
	b609bf884c7b1       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   16 minutes ago      Running             kube-scheduler            2                   bae9f949b109c       kube-scheduler-no-preload-500648
	891d12f2aecd4       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   16 minutes ago      Running             etcd                      2                   e573fc3e5a5a8       etcd-no-preload-500648
	19c9db624e744       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   21 minutes ago      Exited              kube-apiserver            2                   7195904c388be       kube-apiserver-no-preload-500648
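The table above is the container runtime's own view of the node. A listing like this can typically be reproduced by querying CRI-O directly on the machine, for example with crictl over minikube ssh. Sketch only, assuming the default CRI-O socket and the profile name used throughout this report:

    minikube -p no-preload-500648 ssh -- sudo crictl ps -a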
	
	
	==> coredns [087e0b2f7a7dfc800c8489c6e4915feccbbb0a7180d0fc60e81d83e6159bfdca] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [9d95068628f7a4f75baa7cdeb056c7635904ec594d5dbe087c63f0630b935a74] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               no-preload-500648
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-500648
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=no-preload-500648
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T21_46_52_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 21:46:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-500648
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 22:02:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 22:02:20 +0000   Thu, 05 Dec 2024 21:46:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 22:02:20 +0000   Thu, 05 Dec 2024 21:46:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 22:02:20 +0000   Thu, 05 Dec 2024 21:46:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 22:02:20 +0000   Thu, 05 Dec 2024 21:46:49 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.141
	  Hostname:    no-preload-500648
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 428b18567a3a4babac6b0eb6f1fd7e37
	  System UUID:                428b1856-7a3a-4bab-ac6b-0eb6f1fd7e37
	  Boot ID:                    c82a09e6-d6b6-43e4-a4ca-e1582e96988f
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-6gw87                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-7c65d6cfc9-tmd2t                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-no-preload-500648                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-no-preload-500648             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-no-preload-500648    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-98xqk                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-no-preload-500648             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-6867b74b74-ftmzl              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 16m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  16m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  16m   kubelet          Node no-preload-500648 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m   kubelet          Node no-preload-500648 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m   kubelet          Node no-preload-500648 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m   node-controller  Node no-preload-500648 event: Registered Node no-preload-500648 in Controller
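The node description above is standard kubectl output for this profile; it could be refreshed with something like the following (a sketch, reusing the context name minikube creates for the profile):

    kubectl --context no-preload-500648 describe node no-preload-500648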
	
	
	==> dmesg <==
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049394] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037291] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.854835] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.019629] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.531198] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000005] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec 5 21:41] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.139573] systemd-fstab-generator[621]: Ignoring "noauto" option for root device
	[  +0.197240] systemd-fstab-generator[635]: Ignoring "noauto" option for root device
	[  +0.119071] systemd-fstab-generator[647]: Ignoring "noauto" option for root device
	[  +0.293727] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[ +15.291396] systemd-fstab-generator[1278]: Ignoring "noauto" option for root device
	[  +0.063546] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.980694] systemd-fstab-generator[1399]: Ignoring "noauto" option for root device
	[ +22.448628] kauditd_printk_skb: 90 callbacks suppressed
	[Dec 5 21:42] kauditd_printk_skb: 93 callbacks suppressed
	[Dec 5 21:46] systemd-fstab-generator[3190]: Ignoring "noauto" option for root device
	[  +0.059367] kauditd_printk_skb: 9 callbacks suppressed
	[  +5.987561] systemd-fstab-generator[3516]: Ignoring "noauto" option for root device
	[  +0.081296] kauditd_printk_skb: 55 callbacks suppressed
	[  +5.300160] systemd-fstab-generator[3628]: Ignoring "noauto" option for root device
	[  +0.136640] kauditd_printk_skb: 12 callbacks suppressed
	[Dec 5 21:47] kauditd_printk_skb: 86 callbacks suppressed
	
	
	==> etcd [891d12f2aecd42662af6bfad075f5aa3e2f96677f0ffe96f6e10591fe9a2c43d] <==
	{"level":"info","ts":"2024-12-05T21:46:47.573980Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"77f789e98544c480 became candidate at term 2"}
	{"level":"info","ts":"2024-12-05T21:46:47.573986Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"77f789e98544c480 received MsgVoteResp from 77f789e98544c480 at term 2"}
	{"level":"info","ts":"2024-12-05T21:46:47.573995Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"77f789e98544c480 became leader at term 2"}
	{"level":"info","ts":"2024-12-05T21:46:47.574003Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 77f789e98544c480 elected leader 77f789e98544c480 at term 2"}
	{"level":"info","ts":"2024-12-05T21:46:47.575639Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T21:46:47.576559Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"77f789e98544c480","local-member-attributes":"{Name:no-preload-500648 ClientURLs:[https://192.168.50.141:2379]}","request-path":"/0/members/77f789e98544c480/attributes","cluster-id":"2bba191a4e9d4ee","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T21:46:47.576762Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T21:46:47.576972Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2bba191a4e9d4ee","local-member-id":"77f789e98544c480","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T21:46:47.577070Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T21:46:47.577110Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T21:46:47.577121Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T21:46:47.578251Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T21:46:47.579274Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.141:2379"}
	{"level":"info","ts":"2024-12-05T21:46:47.579840Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T21:46:47.579921Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-05T21:46:47.586917Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T21:46:47.587615Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T21:56:47.633211Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":685}
	{"level":"info","ts":"2024-12-05T21:56:47.642176Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":685,"took":"8.700236ms","hash":2471893031,"current-db-size-bytes":2273280,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":2273280,"current-db-size-in-use":"2.3 MB"}
	{"level":"info","ts":"2024-12-05T21:56:47.642247Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2471893031,"revision":685,"compact-revision":-1}
	{"level":"info","ts":"2024-12-05T22:01:47.641706Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":928}
	{"level":"info","ts":"2024-12-05T22:01:47.645432Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":928,"took":"3.386175ms","hash":3061588362,"current-db-size-bytes":2273280,"current-db-size":"2.3 MB","current-db-size-in-use-bytes":1560576,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2024-12-05T22:01:47.645494Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3061588362,"revision":928,"compact-revision":685}
	{"level":"info","ts":"2024-12-05T22:02:00.831484Z","caller":"traceutil/trace.go:171","msg":"trace[468166991] transaction","detail":"{read_only:false; response_revision:1183; number_of_response:1; }","duration":"103.106291ms","start":"2024-12-05T22:02:00.728321Z","end":"2024-12-05T22:02:00.831427Z","steps":["trace[468166991] 'process raft request'  (duration: 62.773226ms)","trace[468166991] 'compare'  (duration: 40.206975ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-05T22:02:18.699279Z","caller":"traceutil/trace.go:171","msg":"trace[786387455] transaction","detail":"{read_only:false; response_revision:1197; number_of_response:1; }","duration":"115.521565ms","start":"2024-12-05T22:02:18.583738Z","end":"2024-12-05T22:02:18.699259Z","steps":["trace[786387455] 'process raft request'  (duration: 115.375523ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:02:56 up 22 min,  0 users,  load average: 0.25, 0.19, 0.18
	Linux no-preload-500648 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [19c9db624e744ddc44c62936546527303f9c95c606ad4be4cc28baae923d15c0] <==
	W1205 21:46:42.464455       1 logging.go:55] [core] [Channel #120 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.540579       1 logging.go:55] [core] [Channel #168 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.586724       1 logging.go:55] [core] [Channel #48 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.607286       1 logging.go:55] [core] [Channel #165 SubChannel #166]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.607535       1 logging.go:55] [core] [Channel #117 SubChannel #118]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.612223       1 logging.go:55] [core] [Channel #114 SubChannel #115]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.634792       1 logging.go:55] [core] [Channel #126 SubChannel #127]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.675187       1 logging.go:55] [core] [Channel #9 SubChannel #10]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.676566       1 logging.go:55] [core] [Channel #75 SubChannel #76]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.680165       1 logging.go:55] [core] [Channel #66 SubChannel #67]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.685029       1 logging.go:55] [core] [Channel #147 SubChannel #148]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.753777       1 logging.go:55] [core] [Channel #205 SubChannel #206]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.832340       1 logging.go:55] [core] [Channel #111 SubChannel #112]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:42.989135       1 logging.go:55] [core] [Channel #159 SubChannel #160]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:43.158163       1 logging.go:55] [core] [Channel #45 SubChannel #46]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:43.180718       1 logging.go:55] [core] [Channel #60 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:43.262393       1 logging.go:55] [core] [Channel #84 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:43.294317       1 logging.go:55] [core] [Channel #132 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:43.300667       1 logging.go:55] [core] [Channel #102 SubChannel #103]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:43.339262       1 logging.go:55] [core] [Channel #72 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:43.354153       1 logging.go:55] [core] [Channel #156 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:43.382969       1 logging.go:55] [core] [Channel #162 SubChannel #163]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:43.421228       1 logging.go:55] [core] [Channel #81 SubChannel #82]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:43.498078       1 logging.go:55] [core] [Channel #87 SubChannel #88]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:43.589678       1 logging.go:55] [core] [Channel #54 SubChannel #55]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
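The connection-refused messages above come from the earlier, now exited, kube-apiserver attempt, logged while etcd on 127.0.0.1:2379 was not yet accepting connections. One way to confirm that etcd is reachable through the current apiserver is its aggregated health endpoint (a sketch, using the same context naming as the rest of the report):

    kubectl --context no-preload-500648 get --raw /healthz/etcd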
	
	
	==> kube-apiserver [76e8165328f382874c555fada9f0ce608c5cee4f310fc9c81700165ece5cda45] <==
	I1205 21:59:49.913394       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:59:49.913480       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 22:01:48.912266       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 22:01:48.912390       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1205 22:01:49.913573       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 22:01:49.913651       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	W1205 22:01:49.913693       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 22:01:49.913749       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1205 22:01:49.914848       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 22:01:49.914922       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 22:02:49.915375       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 22:02:49.915468       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1205 22:02:49.915384       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 22:02:49.915490       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1205 22:02:49.916662       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 22:02:49.916716       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
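The repeated 503s above indicate that the aggregated v1beta1.metrics.k8s.io API, served by the metrics-server addon, never became reachable, which lines up with the MetricsServer-related failures in this report. A quick cross-check (a sketch; the pod label is assumed from the upstream metrics-server manifests):

    kubectl --context no-preload-500648 get apiservice v1beta1.metrics.k8s.io
    kubectl --context no-preload-500648 -n kube-system get pods -l k8s-app=metrics-server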
	
	
	==> kube-controller-manager [3bf7c136a1c6a5406b8ed1d5e21edd95f345b1425a19fe359a7e4fb41b92b3f1] <==
	I1205 21:57:26.461142       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:57:55.991990       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:57:56.469553       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 21:58:05.419469       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="63.933µs"
	I1205 21:58:17.418075       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="52.324µs"
	E1205 21:58:25.997231       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:58:26.481498       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:58:56.003390       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:58:56.488329       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:59:26.009949       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:59:26.496798       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:59:56.016291       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:59:56.504965       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 22:00:26.021533       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 22:00:26.513230       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 22:00:56.027622       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 22:00:56.520644       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 22:01:26.034078       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 22:01:26.529014       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 22:01:56.041477       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 22:01:56.536681       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 22:02:20.422010       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="no-preload-500648"
	E1205 22:02:26.047950       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 22:02:26.548305       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 22:02:56.053757       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	
	
	==> kube-proxy [5d868f29f315f449c4cbd7111ed57dfac7aacb0b35d2cb453b082fc6807ef391] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 21:46:57.643170       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 21:46:57.659523       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.50.141"]
	E1205 21:46:57.659604       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 21:46:57.886173       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 21:46:57.886228       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 21:46:57.886258       1 server_linux.go:169] "Using iptables Proxier"
	I1205 21:46:57.898547       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 21:46:57.899094       1 server.go:483] "Version info" version="v1.31.2"
	I1205 21:46:57.899220       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 21:46:57.900618       1 config.go:199] "Starting service config controller"
	I1205 21:46:57.900806       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 21:46:57.900906       1 config.go:105] "Starting endpoint slice config controller"
	I1205 21:46:57.900912       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 21:46:57.903416       1 config.go:328] "Starting node config controller"
	I1205 21:46:57.903502       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 21:46:58.001939       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 21:46:58.001970       1 shared_informer.go:320] Caches are synced for service config
	I1205 21:46:58.003609       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b609bf884c7b1a9f9646257bab7c927b2c904925e5374ef393008dcc69ffb9ff] <==
	W1205 21:46:49.857734       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1205 21:46:49.857831       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 21:46:49.861964       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 21:46:49.862164       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 21:46:49.914958       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 21:46:49.915222       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 21:46:49.979126       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 21:46:49.979177       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 21:46:50.000819       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 21:46:50.000897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 21:46:50.011644       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1205 21:46:50.011693       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 21:46:50.063469       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 21:46:50.063543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 21:46:50.184652       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 21:46:50.184707       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 21:46:50.196163       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1205 21:46:50.196212       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 21:46:50.221088       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1205 21:46:50.221191       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 21:46:50.242284       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 21:46:50.243482       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 21:46:50.399835       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 21:46:50.399915       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1205 21:46:53.369563       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 22:02:01 no-preload-500648 kubelet[3523]: E1205 22:02:01.665411    3523 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436121665128296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:02:01 no-preload-500648 kubelet[3523]: E1205 22:02:01.665488    3523 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436121665128296,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:02:06 no-preload-500648 kubelet[3523]: E1205 22:02:06.401441    3523 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-ftmzl" podUID="c541d531-1622-4528-af4c-f6147f47e8f5"
	Dec 05 22:02:11 no-preload-500648 kubelet[3523]: E1205 22:02:11.667254    3523 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436131666929324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:02:11 no-preload-500648 kubelet[3523]: E1205 22:02:11.667287    3523 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436131666929324,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:02:19 no-preload-500648 kubelet[3523]: E1205 22:02:19.401664    3523 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-ftmzl" podUID="c541d531-1622-4528-af4c-f6147f47e8f5"
	Dec 05 22:02:21 no-preload-500648 kubelet[3523]: E1205 22:02:21.668811    3523 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436141668425283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:02:21 no-preload-500648 kubelet[3523]: E1205 22:02:21.669201    3523 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436141668425283,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:02:30 no-preload-500648 kubelet[3523]: E1205 22:02:30.402019    3523 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-ftmzl" podUID="c541d531-1622-4528-af4c-f6147f47e8f5"
	Dec 05 22:02:31 no-preload-500648 kubelet[3523]: E1205 22:02:31.671809    3523 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436151670745522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:02:31 no-preload-500648 kubelet[3523]: E1205 22:02:31.671916    3523 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436151670745522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:02:41 no-preload-500648 kubelet[3523]: E1205 22:02:41.674044    3523 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436161673019439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:02:41 no-preload-500648 kubelet[3523]: E1205 22:02:41.674071    3523 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436161673019439,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:02:45 no-preload-500648 kubelet[3523]: E1205 22:02:45.403254    3523 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-ftmzl" podUID="c541d531-1622-4528-af4c-f6147f47e8f5"
	Dec 05 22:02:51 no-preload-500648 kubelet[3523]: E1205 22:02:51.412759    3523 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 22:02:51 no-preload-500648 kubelet[3523]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 22:02:51 no-preload-500648 kubelet[3523]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 22:02:51 no-preload-500648 kubelet[3523]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 22:02:51 no-preload-500648 kubelet[3523]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 22:02:51 no-preload-500648 kubelet[3523]: E1205 22:02:51.676730    3523 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436171676307292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:02:51 no-preload-500648 kubelet[3523]: E1205 22:02:51.676769    3523 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436171676307292,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:101003,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:02:56 no-preload-500648 kubelet[3523]: E1205 22:02:56.412144    3523 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 05 22:02:56 no-preload-500648 kubelet[3523]: E1205 22:02:56.412223    3523 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 05 22:02:56 no-preload-500648 kubelet[3523]: E1205 22:02:56.412390    3523 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qjcqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:
nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdi
n:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-6867b74b74-ftmzl_kube-system(c541d531-1622-4528-af4c-f6147f47e8f5): ErrImagePull: pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Dec 05 22:02:56 no-preload-500648 kubelet[3523]: E1205 22:02:56.413788    3523 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-6867b74b74-ftmzl" podUID="c541d531-1622-4528-af4c-f6147f47e8f5"
	
	
	==> storage-provisioner [fe59d0c476ff36531824d44d43a5d606de14f86f4a9f33b8d3ff0638d6366609] <==
	I1205 21:46:58.216230       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 21:46:58.240052       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 21:46:58.240213       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 21:46:58.261928       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 21:46:58.264030       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-500648_24822924-6b2e-4c52-bb27-6ae9f38b2d88!
	I1205 21:46:58.276635       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7539352f-500b-4e33-8dbf-9d5c2a6bcc60", APIVersion:"v1", ResourceVersion:"386", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-500648_24822924-6b2e-4c52-bb27-6ae9f38b2d88 became leader
	I1205 21:46:58.364511       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-500648_24822924-6b2e-4c52-bb27-6ae9f38b2d88!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-500648 -n no-preload-500648
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-500648 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-ftmzl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-500648 describe pod metrics-server-6867b74b74-ftmzl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-500648 describe pod metrics-server-6867b74b74-ftmzl: exit status 1 (67.220499ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-ftmzl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-500648 describe pod metrics-server-6867b74b74-ftmzl: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (403.21s)
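The kubelet log above shows metrics-server stuck in ImagePullBackOff against fake.domain/registry.k8s.io/echoserver:1.4 (the registry was deliberately overridden when the addon was enabled), and by the time the post-mortem describe ran the pod had already been removed. A quick way to enumerate recent pull failures on a still-running profile, as a hypothetical invocation that is not part of the test harness:

	kubectl --context no-preload-500648 -n kube-system get events --sort-by=.lastTimestamp | grep -iE 'pull|backoff'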

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (336.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-425614 -n embed-certs-425614
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-12-05 22:01:58.476028282 +0000 UTC m=+6160.450620952
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-425614 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-425614 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.077µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-425614 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
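Here the dashboard pods never matched k8s-app=kubernetes-dashboard within 9m0s, and the follow-up describe ran with an already-expired context (2.077µs), so the assertion at start_stop_delete_test.go:297 saw empty deployment info rather than the expected registry.k8s.io/echoserver:1.4 image. A sketch of an equivalent manual check against the profile, assuming the kubernetes-dashboard namespace exists (hypothetical invocation, not the harness code):

	kubectl --context embed-certs-425614 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'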
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-425614 -n embed-certs-425614
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-425614 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-425614 logs -n 25: (2.095290225s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-279893 sudo                                  | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo                                  | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo                                  | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo find                             | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo crio                             | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-279893                                       | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	| start   | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:34 UTC |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-425614            | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-425614                                  | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-500648             | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC | 05 Dec 24 21:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-500648                                   | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-751353  | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC | 05 Dec 24 21:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC |                     |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-425614                 | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-425614                                  | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC | 05 Dec 24 21:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-601806        | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-500648                  | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-751353       | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-500648                                   | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC | 05 Dec 24 21:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:37 UTC | 05 Dec 24 21:46 UTC |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-601806                              | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC | 05 Dec 24 21:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-601806             | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC | 05 Dec 24 21:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-601806                              | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-601806                              | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 22:01 UTC | 05 Dec 24 22:01 UTC |
	| start   | -p newest-cni-185514 --memory=2200 --alsologtostderr   | newest-cni-185514            | jenkins | v1.34.0 | 05 Dec 24 22:01 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 22:01:46
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 22:01:46.993644  364554 out.go:345] Setting OutFile to fd 1 ...
	I1205 22:01:46.993776  364554 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 22:01:46.993783  364554 out.go:358] Setting ErrFile to fd 2...
	I1205 22:01:46.993788  364554 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 22:01:46.994029  364554 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 22:01:46.994676  364554 out.go:352] Setting JSON to false
	I1205 22:01:46.995799  364554 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":17055,"bootTime":1733419052,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 22:01:46.995943  364554 start.go:139] virtualization: kvm guest
	I1205 22:01:46.998550  364554 out.go:177] * [newest-cni-185514] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 22:01:47.000030  364554 notify.go:220] Checking for updates...
	I1205 22:01:47.000046  364554 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 22:01:47.001721  364554 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 22:01:47.003330  364554 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 22:01:47.004846  364554 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 22:01:47.006437  364554 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 22:01:47.007914  364554 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 22:01:47.009757  364554 config.go:182] Loaded profile config "default-k8s-diff-port-751353": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 22:01:47.009850  364554 config.go:182] Loaded profile config "embed-certs-425614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 22:01:47.009998  364554 config.go:182] Loaded profile config "no-preload-500648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 22:01:47.010130  364554 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 22:01:47.052086  364554 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 22:01:47.053695  364554 start.go:297] selected driver: kvm2
	I1205 22:01:47.053725  364554 start.go:901] validating driver "kvm2" against <nil>
	I1205 22:01:47.053747  364554 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 22:01:47.054738  364554 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 22:01:47.054831  364554 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 22:01:47.072306  364554 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 22:01:47.072374  364554 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W1205 22:01:47.072453  364554 out.go:270] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1205 22:01:47.072766  364554 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1205 22:01:47.072808  364554 cni.go:84] Creating CNI manager for ""
	I1205 22:01:47.072886  364554 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 22:01:47.072900  364554 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 22:01:47.072981  364554 start.go:340] cluster config:
	{Name:newest-cni-185514 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:newest-cni-185514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false C
ustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 22:01:47.073145  364554 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 22:01:47.075431  364554 out.go:177] * Starting "newest-cni-185514" primary control-plane node in "newest-cni-185514" cluster
	I1205 22:01:47.076708  364554 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 22:01:47.076748  364554 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 22:01:47.076756  364554 cache.go:56] Caching tarball of preloaded images
	I1205 22:01:47.076853  364554 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 22:01:47.076868  364554 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 22:01:47.076968  364554 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/newest-cni-185514/config.json ...
	I1205 22:01:47.076987  364554 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/newest-cni-185514/config.json: {Name:mkca1ee04d75cdc9b4f0fb8ce261212da18dc5c9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 22:01:47.077153  364554 start.go:360] acquireMachinesLock for newest-cni-185514: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 22:01:47.077189  364554 start.go:364] duration metric: took 19.503µs to acquireMachinesLock for "newest-cni-185514"
	I1205 22:01:47.077224  364554 start.go:93] Provisioning new machine with config: &{Name:newest-cni-185514 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.31.2 ClusterName:newest-cni-185514 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount
9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 22:01:47.077319  364554 start.go:125] createHost starting for "" (driver="kvm2")
	I1205 22:01:47.079278  364554 out.go:235] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I1205 22:01:47.079455  364554 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 22:01:47.079503  364554 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 22:01:47.095471  364554 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39305
	I1205 22:01:47.095975  364554 main.go:141] libmachine: () Calling .GetVersion
	I1205 22:01:47.096589  364554 main.go:141] libmachine: Using API Version  1
	I1205 22:01:47.096616  364554 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 22:01:47.097002  364554 main.go:141] libmachine: () Calling .GetMachineName
	I1205 22:01:47.097214  364554 main.go:141] libmachine: (newest-cni-185514) Calling .GetMachineName
	I1205 22:01:47.097406  364554 main.go:141] libmachine: (newest-cni-185514) Calling .DriverName
	I1205 22:01:47.097596  364554 start.go:159] libmachine.API.Create for "newest-cni-185514" (driver="kvm2")
	I1205 22:01:47.097635  364554 client.go:168] LocalClient.Create starting
	I1205 22:01:47.097675  364554 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem
	I1205 22:01:47.097709  364554 main.go:141] libmachine: Decoding PEM data...
	I1205 22:01:47.097726  364554 main.go:141] libmachine: Parsing certificate...
	I1205 22:01:47.097781  364554 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem
	I1205 22:01:47.097804  364554 main.go:141] libmachine: Decoding PEM data...
	I1205 22:01:47.097818  364554 main.go:141] libmachine: Parsing certificate...
	I1205 22:01:47.097833  364554 main.go:141] libmachine: Running pre-create checks...
	I1205 22:01:47.097842  364554 main.go:141] libmachine: (newest-cni-185514) Calling .PreCreateCheck
	I1205 22:01:47.098221  364554 main.go:141] libmachine: (newest-cni-185514) Calling .GetConfigRaw
	I1205 22:01:47.098651  364554 main.go:141] libmachine: Creating machine...
	I1205 22:01:47.098665  364554 main.go:141] libmachine: (newest-cni-185514) Calling .Create
	I1205 22:01:47.098850  364554 main.go:141] libmachine: (newest-cni-185514) Creating KVM machine...
	I1205 22:01:47.100658  364554 main.go:141] libmachine: (newest-cni-185514) DBG | found existing default KVM network
	I1205 22:01:47.102128  364554 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:01:47.101942  364578 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:c4:d2:3f} reservation:<nil>}
	I1205 22:01:47.103097  364554 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:01:47.102983  364578 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:ef:d2:14} reservation:<nil>}
	I1205 22:01:47.104567  364554 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:01:47.104428  364578 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000310ba0}
	I1205 22:01:47.104606  364554 main.go:141] libmachine: (newest-cni-185514) DBG | created network xml: 
	I1205 22:01:47.104625  364554 main.go:141] libmachine: (newest-cni-185514) DBG | <network>
	I1205 22:01:47.104635  364554 main.go:141] libmachine: (newest-cni-185514) DBG |   <name>mk-newest-cni-185514</name>
	I1205 22:01:47.104643  364554 main.go:141] libmachine: (newest-cni-185514) DBG |   <dns enable='no'/>
	I1205 22:01:47.104659  364554 main.go:141] libmachine: (newest-cni-185514) DBG |   
	I1205 22:01:47.104671  364554 main.go:141] libmachine: (newest-cni-185514) DBG |   <ip address='192.168.61.1' netmask='255.255.255.0'>
	I1205 22:01:47.104685  364554 main.go:141] libmachine: (newest-cni-185514) DBG |     <dhcp>
	I1205 22:01:47.104698  364554 main.go:141] libmachine: (newest-cni-185514) DBG |       <range start='192.168.61.2' end='192.168.61.253'/>
	I1205 22:01:47.104708  364554 main.go:141] libmachine: (newest-cni-185514) DBG |     </dhcp>
	I1205 22:01:47.104720  364554 main.go:141] libmachine: (newest-cni-185514) DBG |   </ip>
	I1205 22:01:47.104728  364554 main.go:141] libmachine: (newest-cni-185514) DBG |   
	I1205 22:01:47.104734  364554 main.go:141] libmachine: (newest-cni-185514) DBG | </network>
	I1205 22:01:47.104747  364554 main.go:141] libmachine: (newest-cni-185514) DBG | 
	I1205 22:01:47.110700  364554 main.go:141] libmachine: (newest-cni-185514) DBG | trying to create private KVM network mk-newest-cni-185514 192.168.61.0/24...
	I1205 22:01:47.190159  364554 main.go:141] libmachine: (newest-cni-185514) DBG | private KVM network mk-newest-cni-185514 192.168.61.0/24 created
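
Editor's note: the network XML dumped above is what the kvm2 driver hands to libvirt. As a rough illustration only (not minikube's actual driver code), defining and starting an equivalent private network with the libvirt Go bindings could look like the sketch below; it assumes the libvirt.org/go/libvirt package, its cgo/libvirt-dev build requirements, and a reachable qemu:///system daemon.

    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt"
    )

    // Same shape as the XML logged above.
    const networkXML = `<network>
      <name>mk-newest-cni-185514</name>
      <dns enable='no'/>
      <ip address='192.168.61.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.61.2' end='192.168.61.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        // Connect to the hypervisor URI shown in the machine config (qemu:///system).
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatalf("connect: %v", err)
        }
        defer conn.Close()

        // Define the persistent network from the XML, then start it.
        net, err := conn.NetworkDefineXML(networkXML)
        if err != nil {
            log.Fatalf("define network: %v", err)
        }
        defer net.Free()

        if err := net.Create(); err != nil {
            log.Fatalf("start network: %v", err)
        }
        log.Println("private network mk-newest-cni-185514 is active")
    }
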
	I1205 22:01:47.190203  364554 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:01:47.190113  364578 common.go:144] Making disk image using store path: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 22:01:47.190220  364554 main.go:141] libmachine: (newest-cni-185514) Setting up store path in /home/jenkins/minikube-integration/20053-293485/.minikube/machines/newest-cni-185514 ...
	I1205 22:01:47.190243  364554 main.go:141] libmachine: (newest-cni-185514) Building disk image from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 22:01:47.190262  364554 main.go:141] libmachine: (newest-cni-185514) Downloading /home/jenkins/minikube-integration/20053-293485/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso...
	I1205 22:01:47.562370  364554 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:01:47.562172  364578 common.go:151] Creating ssh key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/newest-cni-185514/id_rsa...
	I1205 22:01:47.743826  364554 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:01:47.743659  364578 common.go:157] Creating raw disk image: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/newest-cni-185514/newest-cni-185514.rawdisk...
	I1205 22:01:47.743862  364554 main.go:141] libmachine: (newest-cni-185514) DBG | Writing magic tar header
	I1205 22:01:47.743881  364554 main.go:141] libmachine: (newest-cni-185514) DBG | Writing SSH key tar header
	I1205 22:01:47.743894  364554 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:01:47.743804  364578 common.go:171] Fixing permissions on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/newest-cni-185514 ...
	I1205 22:01:47.743911  364554 main.go:141] libmachine: (newest-cni-185514) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/newest-cni-185514
	I1205 22:01:47.743982  364554 main.go:141] libmachine: (newest-cni-185514) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines/newest-cni-185514 (perms=drwx------)
	I1205 22:01:47.744013  364554 main.go:141] libmachine: (newest-cni-185514) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube/machines (perms=drwxr-xr-x)
	I1205 22:01:47.744031  364554 main.go:141] libmachine: (newest-cni-185514) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube/machines
	I1205 22:01:47.744046  364554 main.go:141] libmachine: (newest-cni-185514) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485/.minikube (perms=drwxr-xr-x)
	I1205 22:01:47.744077  364554 main.go:141] libmachine: (newest-cni-185514) Setting executable bit set on /home/jenkins/minikube-integration/20053-293485 (perms=drwxrwxr-x)
	I1205 22:01:47.744093  364554 main.go:141] libmachine: (newest-cni-185514) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 22:01:47.744105  364554 main.go:141] libmachine: (newest-cni-185514) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1205 22:01:47.744121  364554 main.go:141] libmachine: (newest-cni-185514) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1205 22:01:47.744136  364554 main.go:141] libmachine: (newest-cni-185514) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/20053-293485
	I1205 22:01:47.744148  364554 main.go:141] libmachine: (newest-cni-185514) Creating domain...
	I1205 22:01:47.744163  364554 main.go:141] libmachine: (newest-cni-185514) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I1205 22:01:47.744177  364554 main.go:141] libmachine: (newest-cni-185514) DBG | Checking permissions on dir: /home/jenkins
	I1205 22:01:47.744194  364554 main.go:141] libmachine: (newest-cni-185514) DBG | Checking permissions on dir: /home
	I1205 22:01:47.744206  364554 main.go:141] libmachine: (newest-cni-185514) DBG | Skipping /home - not owner
	I1205 22:01:47.745349  364554 main.go:141] libmachine: (newest-cni-185514) define libvirt domain using xml: 
	I1205 22:01:47.745381  364554 main.go:141] libmachine: (newest-cni-185514) <domain type='kvm'>
	I1205 22:01:47.745393  364554 main.go:141] libmachine: (newest-cni-185514)   <name>newest-cni-185514</name>
	I1205 22:01:47.745404  364554 main.go:141] libmachine: (newest-cni-185514)   <memory unit='MiB'>2200</memory>
	I1205 22:01:47.745413  364554 main.go:141] libmachine: (newest-cni-185514)   <vcpu>2</vcpu>
	I1205 22:01:47.745420  364554 main.go:141] libmachine: (newest-cni-185514)   <features>
	I1205 22:01:47.745425  364554 main.go:141] libmachine: (newest-cni-185514)     <acpi/>
	I1205 22:01:47.745439  364554 main.go:141] libmachine: (newest-cni-185514)     <apic/>
	I1205 22:01:47.745444  364554 main.go:141] libmachine: (newest-cni-185514)     <pae/>
	I1205 22:01:47.745448  364554 main.go:141] libmachine: (newest-cni-185514)     
	I1205 22:01:47.745453  364554 main.go:141] libmachine: (newest-cni-185514)   </features>
	I1205 22:01:47.745461  364554 main.go:141] libmachine: (newest-cni-185514)   <cpu mode='host-passthrough'>
	I1205 22:01:47.745466  364554 main.go:141] libmachine: (newest-cni-185514)   
	I1205 22:01:47.745473  364554 main.go:141] libmachine: (newest-cni-185514)   </cpu>
	I1205 22:01:47.745478  364554 main.go:141] libmachine: (newest-cni-185514)   <os>
	I1205 22:01:47.745486  364554 main.go:141] libmachine: (newest-cni-185514)     <type>hvm</type>
	I1205 22:01:47.745492  364554 main.go:141] libmachine: (newest-cni-185514)     <boot dev='cdrom'/>
	I1205 22:01:47.745496  364554 main.go:141] libmachine: (newest-cni-185514)     <boot dev='hd'/>
	I1205 22:01:47.745502  364554 main.go:141] libmachine: (newest-cni-185514)     <bootmenu enable='no'/>
	I1205 22:01:47.745510  364554 main.go:141] libmachine: (newest-cni-185514)   </os>
	I1205 22:01:47.745515  364554 main.go:141] libmachine: (newest-cni-185514)   <devices>
	I1205 22:01:47.745520  364554 main.go:141] libmachine: (newest-cni-185514)     <disk type='file' device='cdrom'>
	I1205 22:01:47.745531  364554 main.go:141] libmachine: (newest-cni-185514)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/newest-cni-185514/boot2docker.iso'/>
	I1205 22:01:47.745582  364554 main.go:141] libmachine: (newest-cni-185514)       <target dev='hdc' bus='scsi'/>
	I1205 22:01:47.745591  364554 main.go:141] libmachine: (newest-cni-185514)       <readonly/>
	I1205 22:01:47.745596  364554 main.go:141] libmachine: (newest-cni-185514)     </disk>
	I1205 22:01:47.745602  364554 main.go:141] libmachine: (newest-cni-185514)     <disk type='file' device='disk'>
	I1205 22:01:47.745612  364554 main.go:141] libmachine: (newest-cni-185514)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I1205 22:01:47.745620  364554 main.go:141] libmachine: (newest-cni-185514)       <source file='/home/jenkins/minikube-integration/20053-293485/.minikube/machines/newest-cni-185514/newest-cni-185514.rawdisk'/>
	I1205 22:01:47.745628  364554 main.go:141] libmachine: (newest-cni-185514)       <target dev='hda' bus='virtio'/>
	I1205 22:01:47.745632  364554 main.go:141] libmachine: (newest-cni-185514)     </disk>
	I1205 22:01:47.745637  364554 main.go:141] libmachine: (newest-cni-185514)     <interface type='network'>
	I1205 22:01:47.745642  364554 main.go:141] libmachine: (newest-cni-185514)       <source network='mk-newest-cni-185514'/>
	I1205 22:01:47.745653  364554 main.go:141] libmachine: (newest-cni-185514)       <model type='virtio'/>
	I1205 22:01:47.745699  364554 main.go:141] libmachine: (newest-cni-185514)     </interface>
	I1205 22:01:47.745728  364554 main.go:141] libmachine: (newest-cni-185514)     <interface type='network'>
	I1205 22:01:47.745751  364554 main.go:141] libmachine: (newest-cni-185514)       <source network='default'/>
	I1205 22:01:47.745766  364554 main.go:141] libmachine: (newest-cni-185514)       <model type='virtio'/>
	I1205 22:01:47.745797  364554 main.go:141] libmachine: (newest-cni-185514)     </interface>
	I1205 22:01:47.745820  364554 main.go:141] libmachine: (newest-cni-185514)     <serial type='pty'>
	I1205 22:01:47.745843  364554 main.go:141] libmachine: (newest-cni-185514)       <target port='0'/>
	I1205 22:01:47.745851  364554 main.go:141] libmachine: (newest-cni-185514)     </serial>
	I1205 22:01:47.745858  364554 main.go:141] libmachine: (newest-cni-185514)     <console type='pty'>
	I1205 22:01:47.745867  364554 main.go:141] libmachine: (newest-cni-185514)       <target type='serial' port='0'/>
	I1205 22:01:47.745873  364554 main.go:141] libmachine: (newest-cni-185514)     </console>
	I1205 22:01:47.745881  364554 main.go:141] libmachine: (newest-cni-185514)     <rng model='virtio'>
	I1205 22:01:47.745905  364554 main.go:141] libmachine: (newest-cni-185514)       <backend model='random'>/dev/random</backend>
	I1205 22:01:47.745918  364554 main.go:141] libmachine: (newest-cni-185514)     </rng>
	I1205 22:01:47.745928  364554 main.go:141] libmachine: (newest-cni-185514)     
	I1205 22:01:47.745935  364554 main.go:141] libmachine: (newest-cni-185514)     
	I1205 22:01:47.745943  364554 main.go:141] libmachine: (newest-cni-185514)   </devices>
	I1205 22:01:47.745953  364554 main.go:141] libmachine: (newest-cni-185514) </domain>
	I1205 22:01:47.745963  364554 main.go:141] libmachine: (newest-cni-185514) 
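
Editor's note: continuing the same illustration, the <domain> XML above is persisted and then powered on ("Creating domain..."). A minimal sketch under the same assumptions as the network example, with domainXML holding the XML just dumped and "fmt" additionally imported:

    // Sketch only: conn is an established *libvirt.Connect (see the network
    // example earlier); domainXML holds the <domain> definition logged above.
    func defineAndBoot(conn *libvirt.Connect, domainXML string) error {
        // Persist the domain definition with libvirt.
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            return fmt.Errorf("define domain: %w", err)
        }
        defer dom.Free()

        // Power the defined domain on; DHCP assignment on the private
        // network happens afterwards, which is what the next log lines poll for.
        if err := dom.Create(); err != nil {
            return fmt.Errorf("start domain: %w", err)
        }
        return nil
    }
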
	I1205 22:01:47.750747  364554 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:53:ef:75 in network default
	I1205 22:01:47.751388  364554 main.go:141] libmachine: (newest-cni-185514) Ensuring networks are active...
	I1205 22:01:47.751416  364554 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:01:47.752271  364554 main.go:141] libmachine: (newest-cni-185514) Ensuring network default is active
	I1205 22:01:47.752677  364554 main.go:141] libmachine: (newest-cni-185514) Ensuring network mk-newest-cni-185514 is active
	I1205 22:01:47.753244  364554 main.go:141] libmachine: (newest-cni-185514) Getting domain xml...
	I1205 22:01:47.754173  364554 main.go:141] libmachine: (newest-cni-185514) Creating domain...
	I1205 22:01:49.070611  364554 main.go:141] libmachine: (newest-cni-185514) Waiting to get IP...
	I1205 22:01:49.071413  364554 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:01:49.071951  364554 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:01:49.071980  364554 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:01:49.071932  364578 retry.go:31] will retry after 276.287205ms: waiting for machine to come up
	I1205 22:01:49.349651  364554 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:01:49.350310  364554 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:01:49.350342  364554 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:01:49.350244  364578 retry.go:31] will retry after 258.757149ms: waiting for machine to come up
	I1205 22:01:49.611146  364554 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:01:49.611701  364554 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:01:49.611745  364554 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:01:49.611638  364578 retry.go:31] will retry after 467.113026ms: waiting for machine to come up
	I1205 22:01:50.081146  364554 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:01:50.081694  364554 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:01:50.081723  364554 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:01:50.081650  364578 retry.go:31] will retry after 368.81106ms: waiting for machine to come up
	I1205 22:01:50.452446  364554 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:01:50.453163  364554 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:01:50.453193  364554 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:01:50.453100  364578 retry.go:31] will retry after 665.078931ms: waiting for machine to come up
	I1205 22:01:51.119967  364554 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:01:51.120461  364554 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:01:51.120489  364554 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:01:51.120400  364578 retry.go:31] will retry after 798.98124ms: waiting for machine to come up
	I1205 22:01:51.921718  364554 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:01:51.922285  364554 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:01:51.922332  364554 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:01:51.922245  364578 retry.go:31] will retry after 858.631282ms: waiting for machine to come up
	I1205 22:01:52.783079  364554 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:01:52.783547  364554 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:01:52.783583  364554 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:01:52.783483  364578 retry.go:31] will retry after 1.376392703s: waiting for machine to come up
	I1205 22:01:54.162130  364554 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:01:54.162639  364554 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:01:54.162671  364554 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:01:54.162579  364578 retry.go:31] will retry after 1.497453226s: waiting for machine to come up
	I1205 22:01:55.662194  364554 main.go:141] libmachine: (newest-cni-185514) DBG | domain newest-cni-185514 has defined MAC address 52:54:00:01:ae:fb in network mk-newest-cni-185514
	I1205 22:01:55.662741  364554 main.go:141] libmachine: (newest-cni-185514) DBG | unable to find current IP address of domain newest-cni-185514 in network mk-newest-cni-185514
	I1205 22:01:55.662770  364554 main.go:141] libmachine: (newest-cni-185514) DBG | I1205 22:01:55.662694  364578 retry.go:31] will retry after 1.719536125s: waiting for machine to come up
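
Editor's note: the repeated "will retry after ...: waiting for machine to come up" lines come from a poll-with-growing-backoff loop waiting for the VM to obtain a DHCP lease. A self-contained sketch of that pattern (a hypothetical helper, not minikube's retry.go) is shown below; the growth factor and jitter are illustrative guesses based on the varying intervals in the log.

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookup until it returns an address, sleeping a randomized,
    // growing interval between attempts, up to a deadline.
    func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
        start := time.Now()
        backoff := 250 * time.Millisecond
        for time.Since(start) < deadline {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            // Add jitter, as the varying retry intervals in the log suggest.
            sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            backoff += backoff / 2 // grow roughly 1.5x per attempt
        }
        return "", errors.New("timed out waiting for an IP address")
    }

    func main() {
        // Dummy lookup that "finds" an IP after a few attempts, for illustration.
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 4 {
                return "", errors.New("no lease yet")
            }
            return "192.168.61.10", nil
        }, 30*time.Second)
        fmt.Println(ip, err)
    }
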
	
	
	==> CRI-O <==
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.239164501Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436119239118704,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5c59e87e-7dfc-48e2-8101-33d10eccb656 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.240050292Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7109b6e8-fcbf-4255-bac9-b9cce0da6653 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.240113595Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7109b6e8-fcbf-4255-bac9-b9cce0da6653 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.240336911Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb7935ff19768951bf15ec6f7ae569dddffdb88edb10c621ccb419c29779746f,PodSandboxId:d5e243e965acf7a4ec20ab5886c047333bcc92bec711ca3b53058975b60b584a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733435231080016079,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76565dbe-57b0-4d39-abb0-ca6787cd3740,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f3be37216209b507caf8f02909aac2eb33c0cd7051f9798c9e7d76a2a3e10c,PodSandboxId:fafdcfea82d69c87f7b4059293b70580e2f48abc77da003eb4b39ccddb3e9abf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435231044996749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qfwx8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6411440-5d63-4ea4-b1ba-58337dd6bb10,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78e33cb0841af14ba2bbaf16ba26cb9d3ecf6825955a4816313ab7daa623c61e,PodSandboxId:0be1e5092b64f3ae878c939dd0ff2f5a4bf79a881ed9a1087933d16f29dc4fbf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733435230468925238,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2zgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5
c4695-0631-486d-9f2b-3529f6e808e9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4922644ed14bab79471b547e8a1e0ba26c1c9beacab332b7e96cfde4145c1d0e,PodSandboxId:7ce18d6bd83c99d48133355c667e370c17e8cf84fbc239a37dbdff9d242a1a05,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435230865879468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7sjzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9688302a-e62f-46e6-8182-4639deb5ac
5a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25bc947c71adb96d313f34316890511c690419a686ced38b9be6cd33028e5b1f,PodSandboxId:aa4d76d81862e60315551edd830fac5517fcb517eb1766fb0e1532e5880ab882,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733435219199398936,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e653d90c677de6c4d7ba5653b9ccf764,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90f378c3660c9d99fe421beb7a403fcb8b709aa8b93115e17a977a31d0423705,PodSandboxId:4a099a777b0a13e1950c39f9e2ae6f2ef4fe07e112a310911f12c8951d0d4ab3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733435219164879792,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f5ae899a6b1660ab9bafc72059c48b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0af9adef57f334125b35cd694b87a1fe5e76704564e10bef0b6dc9d19525d4,PodSandboxId:18868d3172717f134a4a286b9312fb40438d1582fe8db408417e15aaa0de99c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733435219133900320,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a43b696cbdc0d06226089f47a7f1de,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0105f87b7ed061b072823f148117f5b3ca9b284b1a9815c2d5cf833cc959fffd,PodSandboxId:e9e8a7a7ebd9a970317da541cf4dca93da305a480ee96c825ac00eb4c5626323,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733435219137883676,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b7340b20b45b0ab01b5a6dc7d16505,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72477542a2b887669bbc8e9749525c5bf5e67ff8bd20e9cabfdc38818d16722c,PodSandboxId:57665c1ccb34c9f37cfcfdae1f0fae47cd7ca1b2214b1b66450d69dbad8a9f89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733434934000246035,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a43b696cbdc0d06226089f47a7f1de,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7109b6e8-fcbf-4255-bac9-b9cce0da6653 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.290417597Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3e7988b2-882f-4f9a-84bb-4f03fd996b02 name=/runtime.v1.RuntimeService/Version
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.290526196Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3e7988b2-882f-4f9a-84bb-4f03fd996b02 name=/runtime.v1.RuntimeService/Version
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.292333887Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e5c98bd1-c335-479a-9729-d62fdec40588 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.292781402Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436119292757609,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e5c98bd1-c335-479a-9729-d62fdec40588 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.293473868Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b1d76e7-2876-47c3-9fa4-2eb33a0c785e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.293625026Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b1d76e7-2876-47c3-9fa4-2eb33a0c785e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.293956821Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb7935ff19768951bf15ec6f7ae569dddffdb88edb10c621ccb419c29779746f,PodSandboxId:d5e243e965acf7a4ec20ab5886c047333bcc92bec711ca3b53058975b60b584a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733435231080016079,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76565dbe-57b0-4d39-abb0-ca6787cd3740,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f3be37216209b507caf8f02909aac2eb33c0cd7051f9798c9e7d76a2a3e10c,PodSandboxId:fafdcfea82d69c87f7b4059293b70580e2f48abc77da003eb4b39ccddb3e9abf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435231044996749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qfwx8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6411440-5d63-4ea4-b1ba-58337dd6bb10,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78e33cb0841af14ba2bbaf16ba26cb9d3ecf6825955a4816313ab7daa623c61e,PodSandboxId:0be1e5092b64f3ae878c939dd0ff2f5a4bf79a881ed9a1087933d16f29dc4fbf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733435230468925238,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2zgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5
c4695-0631-486d-9f2b-3529f6e808e9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4922644ed14bab79471b547e8a1e0ba26c1c9beacab332b7e96cfde4145c1d0e,PodSandboxId:7ce18d6bd83c99d48133355c667e370c17e8cf84fbc239a37dbdff9d242a1a05,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435230865879468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7sjzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9688302a-e62f-46e6-8182-4639deb5ac
5a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25bc947c71adb96d313f34316890511c690419a686ced38b9be6cd33028e5b1f,PodSandboxId:aa4d76d81862e60315551edd830fac5517fcb517eb1766fb0e1532e5880ab882,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733435219199398936,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e653d90c677de6c4d7ba5653b9ccf764,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90f378c3660c9d99fe421beb7a403fcb8b709aa8b93115e17a977a31d0423705,PodSandboxId:4a099a777b0a13e1950c39f9e2ae6f2ef4fe07e112a310911f12c8951d0d4ab3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733435219164879792,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f5ae899a6b1660ab9bafc72059c48b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0af9adef57f334125b35cd694b87a1fe5e76704564e10bef0b6dc9d19525d4,PodSandboxId:18868d3172717f134a4a286b9312fb40438d1582fe8db408417e15aaa0de99c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733435219133900320,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a43b696cbdc0d06226089f47a7f1de,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0105f87b7ed061b072823f148117f5b3ca9b284b1a9815c2d5cf833cc959fffd,PodSandboxId:e9e8a7a7ebd9a970317da541cf4dca93da305a480ee96c825ac00eb4c5626323,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733435219137883676,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b7340b20b45b0ab01b5a6dc7d16505,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72477542a2b887669bbc8e9749525c5bf5e67ff8bd20e9cabfdc38818d16722c,PodSandboxId:57665c1ccb34c9f37cfcfdae1f0fae47cd7ca1b2214b1b66450d69dbad8a9f89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733434934000246035,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a43b696cbdc0d06226089f47a7f1de,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b1d76e7-2876-47c3-9fa4-2eb33a0c785e name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.312171039Z" level=debug msg="Request: &ImageStatusRequest{Image:&ImageSpec{Image:fake.domain/registry.k8s.io/echoserver:1.4,Annotations:map[string]string{kubernetes.io/config.seen: 2024-12-05T21:47:10.593084981Z,kubernetes.io/config.source: api,},UserSpecifiedImage:,RuntimeHandler:,},Verbose:false,}" file="otel-collector/interceptors.go:62" id=e98d98fa-4a14-446f-920a-df034ce8b02a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.312264304Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" file="server/image_status.go:27" id=e98d98fa-4a14-446f-920a-df034ce8b02a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.312404330Z" level=debug msg="reference \"[overlay@/var/lib/containers/storage+/var/run/containers/storage:overlay.mountopt=nodev,metacopy=on]fake.domain/registry.k8s.io/echoserver:1.4\" does not resolve to an image ID" file="storage/storage_reference.go:149"
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.312471689Z" level=debug msg="Can't find fake.domain/registry.k8s.io/echoserver:1.4" file="server/image_status.go:97" id=e98d98fa-4a14-446f-920a-df034ce8b02a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.312507599Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" file="server/image_status.go:111" id=e98d98fa-4a14-446f-920a-df034ce8b02a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.312582472Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" file="server/image_status.go:33" id=e98d98fa-4a14-446f-920a-df034ce8b02a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.312926949Z" level=debug msg="Response: &ImageStatusResponse{Image:nil,Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=e98d98fa-4a14-446f-920a-df034ce8b02a name=/runtime.v1.ImageService/ImageStatus
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.345593296Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=a81f3d74-f9de-4747-a60a-f2b7abf00459 name=/runtime.v1.RuntimeService/Version
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.345682923Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=a81f3d74-f9de-4747-a60a-f2b7abf00459 name=/runtime.v1.RuntimeService/Version
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.347059935Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=10ae68fa-657e-4cd8-bbba-6dd7f6332e08 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.347717017Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436119347688827,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=10ae68fa-657e-4cd8-bbba-6dd7f6332e08 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.348212740Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fa164107-a194-475f-9b56-168b7804c398 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.348267637Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fa164107-a194-475f-9b56-168b7804c398 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:01:59 embed-certs-425614 crio[708]: time="2024-12-05 22:01:59.348459043Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb7935ff19768951bf15ec6f7ae569dddffdb88edb10c621ccb419c29779746f,PodSandboxId:d5e243e965acf7a4ec20ab5886c047333bcc92bec711ca3b53058975b60b584a,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1733435231080016079,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 76565dbe-57b0-4d39-abb0-ca6787cd3740,},Annotations:map[string]string{io.kubernetes.container.hash: 6c6bf961,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:71f3be37216209b507caf8f02909aac2eb33c0cd7051f9798c9e7d76a2a3e10c,PodSandboxId:fafdcfea82d69c87f7b4059293b70580e2f48abc77da003eb4b39ccddb3e9abf,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435231044996749,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-qfwx8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d6411440-5d63-4ea4-b1ba-58337dd6bb10,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:78e33cb0841af14ba2bbaf16ba26cb9d3ecf6825955a4816313ab7daa623c61e,PodSandboxId:0be1e5092b64f3ae878c939dd0ff2f5a4bf79a881ed9a1087933d16f29dc4fbf,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38,State:CONTAINER_RUNNING,CreatedAt:1733435230468925238,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-k2zgx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8e5
c4695-0631-486d-9f2b-3529f6e808e9,},Annotations:map[string]string{io.kubernetes.container.hash: adb187fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4922644ed14bab79471b547e8a1e0ba26c1c9beacab332b7e96cfde4145c1d0e,PodSandboxId:7ce18d6bd83c99d48133355c667e370c17e8cf84fbc239a37dbdff9d242a1a05,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,State:CONTAINER_RUNNING,CreatedAt:1733435230865879468,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-7c65d6cfc9-7sjzc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9688302a-e62f-46e6-8182-4639deb5ac
5a,},Annotations:map[string]string{io.kubernetes.container.hash: 2a3a204d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25bc947c71adb96d313f34316890511c690419a686ced38b9be6cd33028e5b1f,PodSandboxId:aa4d76d81862e60315551edd830fac5517fcb517eb1766fb0e1532e5880ab882,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,State:CONTAINER_RUNNING,CreatedAt:1733435219199398936,Labels:map[string]
string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e653d90c677de6c4d7ba5653b9ccf764,},Annotations:map[string]string{io.kubernetes.container.hash: cdf7d3fa,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:90f378c3660c9d99fe421beb7a403fcb8b709aa8b93115e17a977a31d0423705,PodSandboxId:4a099a777b0a13e1950c39f9e2ae6f2ef4fe07e112a310911f12c8951d0d4ab3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856,State:CONTAINER_RUNNING,CreatedAt:1733435219164879792,Labels:map[string]string{io.kubernetes.container.n
ame: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 50f5ae899a6b1660ab9bafc72059c48b,},Annotations:map[string]string{io.kubernetes.container.hash: 16c835f9,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2a0af9adef57f334125b35cd694b87a1fe5e76704564e10bef0b6dc9d19525d4,PodSandboxId:18868d3172717f134a4a286b9312fb40438d1582fe8db408417e15aaa0de99c5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_RUNNING,CreatedAt:1733435219133900320,Labels:map[string]string{io.kubernetes.container.name: kube-ap
iserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a43b696cbdc0d06226089f47a7f1de,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0105f87b7ed061b072823f148117f5b3ca9b284b1a9815c2d5cf833cc959fffd,PodSandboxId:e9e8a7a7ebd9a970317da541cf4dca93da305a480ee96c825ac00eb4c5626323,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503,State:CONTAINER_RUNNING,CreatedAt:1733435219137883676,Labels:map[string]string{io.kubernetes.container.name: kube-contr
oller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 91b7340b20b45b0ab01b5a6dc7d16505,},Annotations:map[string]string{io.kubernetes.container.hash: 3111262b,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:72477542a2b887669bbc8e9749525c5bf5e67ff8bd20e9cabfdc38818d16722c,PodSandboxId:57665c1ccb34c9f37cfcfdae1f0fae47cd7ca1b2214b1b66450d69dbad8a9f89,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173,State:CONTAINER_EXITED,CreatedAt:1733434934000246035,Labels:map[string]string{io.kubernetes.container.name: kube-
apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-425614,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 42a43b696cbdc0d06226089f47a7f1de,},Annotations:map[string]string{io.kubernetes.container.hash: c6927529,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=fa164107-a194-475f-9b56-168b7804c398 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	bb7935ff19768       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   14 minutes ago      Running             storage-provisioner       0                   d5e243e965acf       storage-provisioner
	71f3be3721620       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   fafdcfea82d69       coredns-7c65d6cfc9-qfwx8
	4922644ed14ba       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6   14 minutes ago      Running             coredns                   0                   7ce18d6bd83c9       coredns-7c65d6cfc9-7sjzc
	78e33cb0841af       505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38   14 minutes ago      Running             kube-proxy                0                   0be1e5092b64f       kube-proxy-k2zgx
	25bc947c71adb       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4   15 minutes ago      Running             etcd                      2                   aa4d76d81862e       etcd-embed-certs-425614
	90f378c3660c9       847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856   15 minutes ago      Running             kube-scheduler            2                   4a099a777b0a1       kube-scheduler-embed-certs-425614
	0105f87b7ed06       0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503   15 minutes ago      Running             kube-controller-manager   2                   e9e8a7a7ebd9a       kube-controller-manager-embed-certs-425614
	2a0af9adef57f       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   15 minutes ago      Running             kube-apiserver            2                   18868d3172717       kube-apiserver-embed-certs-425614
	72477542a2b88       9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173   19 minutes ago      Exited              kube-apiserver            1                   57665c1ccb34c       kube-apiserver-embed-certs-425614
	
	
	==> coredns [4922644ed14bab79471b547e8a1e0ba26c1c9beacab332b7e96cfde4145c1d0e] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [71f3be37216209b507caf8f02909aac2eb33c0cd7051f9798c9e7d76a2a3e10c] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> describe nodes <==
	Name:               embed-certs-425614
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-425614
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843
	                    minikube.k8s.io/name=embed-certs-425614
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_05T21_47_05_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 05 Dec 2024 21:47:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-425614
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 05 Dec 2024 22:01:51 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 05 Dec 2024 21:57:25 +0000   Thu, 05 Dec 2024 21:47:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 05 Dec 2024 21:57:25 +0000   Thu, 05 Dec 2024 21:47:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 05 Dec 2024 21:57:25 +0000   Thu, 05 Dec 2024 21:47:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 05 Dec 2024 21:57:25 +0000   Thu, 05 Dec 2024 21:47:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.8
	  Hostname:    embed-certs-425614
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164184Ki
	  pods:               110
	System Info:
	  Machine ID:                 e751443b3fc8433d85c1d5953930bbb4
	  System UUID:                e751443b-3fc8-433d-85c1-d5953930bbb4
	  Boot ID:                    647179da-dc18-4dc7-95ed-bd4273f33f8e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7c65d6cfc9-7sjzc                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7c65d6cfc9-qfwx8                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-embed-certs-425614                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kube-apiserver-embed-certs-425614             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-embed-certs-425614    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-k2zgx                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-embed-certs-425614             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 metrics-server-6867b74b74-hghhs               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         14m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node embed-certs-425614 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node embed-certs-425614 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node embed-certs-425614 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node embed-certs-425614 event: Registered Node embed-certs-425614 in Controller
	
	
	==> dmesg <==
	[  +0.041669] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.118267] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.097774] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +2.452311] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Dec 5 21:42] systemd-fstab-generator[631]: Ignoring "noauto" option for root device
	[  +0.060768] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066578] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.237585] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[  +0.142392] systemd-fstab-generator[669]: Ignoring "noauto" option for root device
	[  +0.319410] systemd-fstab-generator[698]: Ignoring "noauto" option for root device
	[  +4.285225] systemd-fstab-generator[790]: Ignoring "noauto" option for root device
	[  +0.081250] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.035937] systemd-fstab-generator[911]: Ignoring "noauto" option for root device
	[  +4.730681] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.304809] kauditd_printk_skb: 59 callbacks suppressed
	[Dec 5 21:46] kauditd_printk_skb: 31 callbacks suppressed
	[ +26.137600] kauditd_printk_skb: 7 callbacks suppressed
	[  +1.638714] systemd-fstab-generator[2597]: Ignoring "noauto" option for root device
	[Dec 5 21:47] kauditd_printk_skb: 54 callbacks suppressed
	[  +1.507985] systemd-fstab-generator[2916]: Ignoring "noauto" option for root device
	[  +5.462260] systemd-fstab-generator[3031]: Ignoring "noauto" option for root device
	[  +0.093422] kauditd_printk_skb: 14 callbacks suppressed
	[  +5.154043] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [25bc947c71adb96d313f34316890511c690419a686ced38b9be6cd33028e5b1f] <==
	{"level":"info","ts":"2024-12-05T21:46:59.696307Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.72.8:2380"}
	{"level":"info","ts":"2024-12-05T21:46:59.696354Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.72.8:2380"}
	{"level":"info","ts":"2024-12-05T21:47:00.139629Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83b24b436960d93 is starting a new election at term 1"}
	{"level":"info","ts":"2024-12-05T21:47:00.139680Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83b24b436960d93 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-12-05T21:47:00.139703Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83b24b436960d93 received MsgPreVoteResp from 83b24b436960d93 at term 1"}
	{"level":"info","ts":"2024-12-05T21:47:00.139715Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83b24b436960d93 became candidate at term 2"}
	{"level":"info","ts":"2024-12-05T21:47:00.139720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83b24b436960d93 received MsgVoteResp from 83b24b436960d93 at term 2"}
	{"level":"info","ts":"2024-12-05T21:47:00.139728Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"83b24b436960d93 became leader at term 2"}
	{"level":"info","ts":"2024-12-05T21:47:00.139735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 83b24b436960d93 elected leader 83b24b436960d93 at term 2"}
	{"level":"info","ts":"2024-12-05T21:47:00.143348Z","caller":"etcdserver/server.go:2629","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T21:47:00.145853Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"83b24b436960d93","local-member-attributes":"{Name:embed-certs-425614 ClientURLs:[https://192.168.72.8:2379]}","request-path":"/0/members/83b24b436960d93/attributes","cluster-id":"f6e6242805c6c4ee","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-05T21:47:00.146661Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"f6e6242805c6c4ee","local-member-id":"83b24b436960d93","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T21:47:00.146747Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T21:47:00.146779Z","caller":"etcdserver/server.go:2653","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-05T21:47:00.146791Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T21:47:00.147045Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-05T21:47:00.147764Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T21:47:00.148457Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.8:2379"}
	{"level":"info","ts":"2024-12-05T21:47:00.149124Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-05T21:47:00.149813Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-05T21:47:00.151007Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-05T21:47:00.151041Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-05T21:57:00.181864Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":690}
	{"level":"info","ts":"2024-12-05T21:57:00.189327Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":690,"took":"7.176691ms","hash":1005205012,"current-db-size-bytes":2138112,"current-db-size":"2.1 MB","current-db-size-in-use-bytes":2138112,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2024-12-05T21:57:00.189396Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1005205012,"revision":690,"compact-revision":-1}
	
	
	==> kernel <==
	 22:01:59 up 20 min,  0 users,  load average: 0.04, 0.07, 0.10
	Linux embed-certs-425614 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [2a0af9adef57f334125b35cd694b87a1fe5e76704564e10bef0b6dc9d19525d4] <==
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1205 21:57:02.674898       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:57:02.674989       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1205 21:57:02.675944       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:57:02.677088       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 21:58:02.676980       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:58:02.677075       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1205 21:58:02.677197       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 21:58:02.677234       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1205 21:58:02.678407       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 21:58:02.678539       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1205 22:00:02.679665       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 22:00:02.679850       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1205 22:00:02.679730       1 handler_proxy.go:99] no RequestInfo found in the context
	E1205 22:00:02.679970       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1205 22:00:02.681104       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1205 22:00:02.681170       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [72477542a2b887669bbc8e9749525c5bf5e67ff8bd20e9cabfdc38818d16722c] <==
	W1205 21:46:53.886503       1 logging.go:55] [core] [Channel #166 SubChannel #167]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:53.902538       1 logging.go:55] [core] [Channel #2 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:53.913401       1 logging.go:55] [core] [Channel #157 SubChannel #158]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:53.928316       1 logging.go:55] [core] [Channel #58 SubChannel #59]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:53.961743       1 logging.go:55] [core] [Channel #37 SubChannel #38]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.025967       1 logging.go:55] [core] [Channel #79 SubChannel #80]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.030621       1 logging.go:55] [core] [Channel #109 SubChannel #110]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.077222       1 logging.go:55] [core] [Channel #25 SubChannel #26]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.098367       1 logging.go:55] [core] [Channel #154 SubChannel #155]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.104166       1 logging.go:55] [core] [Channel #31 SubChannel #32]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.173181       1 logging.go:55] [core] [Channel #181 SubChannel #182]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.186877       1 logging.go:55] [core] [Channel #130 SubChannel #131]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.433184       1 logging.go:55] [core] [Channel #124 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.467719       1 logging.go:55] [core] [Channel #17 SubChannel #18]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.479538       1 logging.go:55] [core] [Channel #112 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.491303       1 logging.go:55] [core] [Channel #175 SubChannel #176]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.512749       1 logging.go:55] [core] [Channel #178 SubChannel #179]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.549357       1 logging.go:55] [core] [Channel #148 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.621658       1 logging.go:55] [core] [Channel #133 SubChannel #134]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.667482       1 logging.go:55] [core] [Channel #115 SubChannel #116]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.686908       1 logging.go:55] [core] [Channel #10 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.726321       1 logging.go:55] [core] [Channel #103 SubChannel #104]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.752648       1 logging.go:55] [core] [Channel #1 SubChannel #3]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:54.851865       1 logging.go:55] [core] [Channel #169 SubChannel #170]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W1205 21:46:55.036630       1 logging.go:55] [core] [Channel #28 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [0105f87b7ed061b072823f148117f5b3ca9b284b1a9815c2d5cf833cc959fffd] <==
	E1205 21:56:38.726322       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:56:39.206268       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:57:08.732722       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:57:09.214657       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 21:57:25.788387       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="embed-certs-425614"
	E1205 21:57:38.738234       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:57:39.222714       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 21:58:02.328348       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="62.942µs"
	E1205 21:58:08.744854       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:58:09.231233       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1205 21:58:16.332812       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-6867b74b74" duration="72.25µs"
	E1205 21:58:38.750627       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:58:39.239710       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:59:08.756087       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:59:09.247369       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 21:59:38.761875       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 21:59:39.253993       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 22:00:08.768898       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 22:00:09.264418       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 22:00:38.776683       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 22:00:39.272192       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 22:01:08.782859       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 22:01:09.280943       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1205 22:01:38.788424       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1205 22:01:39.288276       1 garbagecollector.go:826] "failed to discover some groups" logger="garbage-collector-controller" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [78e33cb0841af14ba2bbaf16ba26cb9d3ecf6825955a4816313ab7daa623c61e] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E1205 21:47:11.556250       1 proxier.go:734] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I1205 21:47:11.565326       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.72.8"]
	E1205 21:47:11.565467       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1205 21:47:11.600466       1 server_linux.go:146] "No iptables support for family" ipFamily="IPv6"
	I1205 21:47:11.600646       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1205 21:47:11.600693       1 server_linux.go:169] "Using iptables Proxier"
	I1205 21:47:11.610275       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1205 21:47:11.610666       1 server.go:483] "Version info" version="v1.31.2"
	I1205 21:47:11.611027       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1205 21:47:11.612589       1 config.go:199] "Starting service config controller"
	I1205 21:47:11.612664       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1205 21:47:11.612724       1 config.go:105] "Starting endpoint slice config controller"
	I1205 21:47:11.612750       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1205 21:47:11.613317       1 config.go:328] "Starting node config controller"
	I1205 21:47:11.614331       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1205 21:47:11.713168       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1205 21:47:11.713279       1 shared_informer.go:320] Caches are synced for service config
	I1205 21:47:11.714885       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [90f378c3660c9d99fe421beb7a403fcb8b709aa8b93115e17a977a31d0423705] <==
	W1205 21:47:01.752817       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 21:47:01.752843       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 21:47:02.686867       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1205 21:47:02.686904       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1205 21:47:02.797936       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1205 21:47:02.798003       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 21:47:02.838885       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1205 21:47:02.838985       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 21:47:02.855940       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1205 21:47:02.856027       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 21:47:02.883517       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1205 21:47:02.883627       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 21:47:02.896109       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1205 21:47:02.896230       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 21:47:02.921764       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1205 21:47:02.921849       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1205 21:47:02.950748       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1205 21:47:02.950803       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1205 21:47:02.951821       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1205 21:47:02.951865       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1205 21:47:02.984709       1 reflector.go:561] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1205 21:47:02.984875       1 reflector.go:158] "Unhandled Error" err="runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W1205 21:47:03.023796       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1205 21:47:03.024836       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	I1205 21:47:04.735019       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 05 22:00:54 embed-certs-425614 kubelet[2923]: E1205 22:00:54.547390    2923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436054546640410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:00:54 embed-certs-425614 kubelet[2923]: E1205 22:00:54.547456    2923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436054546640410,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:00:57 embed-certs-425614 kubelet[2923]: E1205 22:00:57.312760    2923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hghhs" podUID="bc00b855-1cc8-45a1-92cb-b459ef0b40eb"
	Dec 05 22:01:04 embed-certs-425614 kubelet[2923]: E1205 22:01:04.336082    2923 iptables.go:577] "Could not set up iptables canary" err=<
	Dec 05 22:01:04 embed-certs-425614 kubelet[2923]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 05 22:01:04 embed-certs-425614 kubelet[2923]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 05 22:01:04 embed-certs-425614 kubelet[2923]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 05 22:01:04 embed-certs-425614 kubelet[2923]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 05 22:01:04 embed-certs-425614 kubelet[2923]: E1205 22:01:04.549701    2923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436064549194964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:01:04 embed-certs-425614 kubelet[2923]: E1205 22:01:04.549739    2923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436064549194964,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:01:11 embed-certs-425614 kubelet[2923]: E1205 22:01:11.313291    2923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hghhs" podUID="bc00b855-1cc8-45a1-92cb-b459ef0b40eb"
	Dec 05 22:01:14 embed-certs-425614 kubelet[2923]: E1205 22:01:14.551579    2923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436074551085670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:01:14 embed-certs-425614 kubelet[2923]: E1205 22:01:14.551953    2923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436074551085670,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:01:24 embed-certs-425614 kubelet[2923]: E1205 22:01:24.314824    2923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hghhs" podUID="bc00b855-1cc8-45a1-92cb-b459ef0b40eb"
	Dec 05 22:01:24 embed-certs-425614 kubelet[2923]: E1205 22:01:24.554383    2923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436084553887072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:01:24 embed-certs-425614 kubelet[2923]: E1205 22:01:24.554458    2923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436084553887072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:01:34 embed-certs-425614 kubelet[2923]: E1205 22:01:34.557182    2923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436094556601686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:01:34 embed-certs-425614 kubelet[2923]: E1205 22:01:34.557226    2923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436094556601686,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:01:36 embed-certs-425614 kubelet[2923]: E1205 22:01:36.314231    2923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hghhs" podUID="bc00b855-1cc8-45a1-92cb-b459ef0b40eb"
	Dec 05 22:01:44 embed-certs-425614 kubelet[2923]: E1205 22:01:44.559459    2923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436104559127102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:01:44 embed-certs-425614 kubelet[2923]: E1205 22:01:44.559919    2923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436104559127102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:01:48 embed-certs-425614 kubelet[2923]: E1205 22:01:48.313475    2923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hghhs" podUID="bc00b855-1cc8-45a1-92cb-b459ef0b40eb"
	Dec 05 22:01:54 embed-certs-425614 kubelet[2923]: E1205 22:01:54.561388    2923 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436114560957358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:01:54 embed-certs-425614 kubelet[2923]: E1205 22:01:54.562025    2923 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436114560957358,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134621,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 05 22:01:59 embed-certs-425614 kubelet[2923]: E1205 22:01:59.313301    2923 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-6867b74b74-hghhs" podUID="bc00b855-1cc8-45a1-92cb-b459ef0b40eb"
	
	
	==> storage-provisioner [bb7935ff19768951bf15ec6f7ae569dddffdb88edb10c621ccb419c29779746f] <==
	I1205 21:47:11.295822       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1205 21:47:11.334199       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1205 21:47:11.334440       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1205 21:47:11.371864       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1205 21:47:11.373668       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-425614_1cd02672-3aed-4fac-a4cc-aba9ed42fb94!
	I1205 21:47:11.376537       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"427b20b9-9f21-41d8-9d42-0a1360548170", APIVersion:"v1", ResourceVersion:"398", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-425614_1cd02672-3aed-4fac-a4cc-aba9ed42fb94 became leader
	I1205 21:47:11.479882       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-425614_1cd02672-3aed-4fac-a4cc-aba9ed42fb94!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-425614 -n embed-certs-425614
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-425614 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-6867b74b74-hghhs
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-425614 describe pod metrics-server-6867b74b74-hghhs
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-425614 describe pod metrics-server-6867b74b74-hghhs: exit status 1 (66.030532ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-6867b74b74-hghhs" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-425614 describe pod metrics-server-6867b74b74-hghhs: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (336.81s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (162.28s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 21:59:15.166934  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 22:00:47.609394  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 22:01:10.573468  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 22:01:19.398801  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
E1205 22:01:29.761352  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.61.123:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.61.123:8443: connect: connection refused
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-601806 -n old-k8s-version-601806
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-601806 -n old-k8s-version-601806: exit status 2 (253.660819ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-601806" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-601806 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-601806 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.424µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-601806 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
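A minimal sketch of how that image check could be repeated by hand once the apiserver is reachable again; the namespace and deployment name come from the failure above, while the jsonpath query is illustrative rather than the test's own code:

	kubectl --context old-k8s-version-601806 -n kubernetes-dashboard \
	  get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
	# the output is expected to contain registry.k8s.io/echoserver:1.4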
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-601806 -n old-k8s-version-601806
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-601806 -n old-k8s-version-601806: exit status 2 (252.514965ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-601806 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-601806 logs -n 25: (1.677652763s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p bridge-279893 sudo cat                              | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:32 UTC | 05 Dec 24 21:33 UTC |
	|         | /lib/systemd/system/containerd.service                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo cat                              | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | /etc/containerd/config.toml                            |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo                                  | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | containerd config dump                                 |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo                                  | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | systemctl status crio --all                            |                              |         |         |                     |                     |
	|         | --full --no-pager                                      |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo                                  | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | systemctl cat crio --no-pager                          |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo find                             | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | /etc/crio -type f -exec sh -c                          |                              |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                   |                              |         |         |                     |                     |
	| ssh     | -p bridge-279893 sudo crio                             | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | config                                                 |                              |         |         |                     |                     |
	| delete  | -p bridge-279893                                       | bridge-279893                | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	| start   | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:34 UTC |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-425614            | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC | 05 Dec 24 21:33 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-425614                                  | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:33 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-500648             | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC | 05 Dec 24 21:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-500648                                   | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-751353  | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC | 05 Dec 24 21:34 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:34 UTC |                     |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-425614                 | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:35 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-425614                                  | embed-certs-425614           | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC | 05 Dec 24 21:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-601806        | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-500648                  | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-751353       | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-500648                                   | no-preload-500648            | jenkins | v1.34.0 | 05 Dec 24 21:36 UTC | 05 Dec 24 21:47 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-751353 | jenkins | v1.34.0 | 05 Dec 24 21:37 UTC | 05 Dec 24 21:46 UTC |
	|         | default-k8s-diff-port-751353                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-601806                              | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC | 05 Dec 24 21:38 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-601806             | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC | 05 Dec 24 21:38 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-601806                              | old-k8s-version-601806       | jenkins | v1.34.0 | 05 Dec 24 21:38 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 21:38:15
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 21:38:15.563725  358357 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:38:15.563882  358357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:38:15.563898  358357 out.go:358] Setting ErrFile to fd 2...
	I1205 21:38:15.563903  358357 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:38:15.564128  358357 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 21:38:15.564728  358357 out.go:352] Setting JSON to false
	I1205 21:38:15.565806  358357 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":15644,"bootTime":1733419052,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 21:38:15.565873  358357 start.go:139] virtualization: kvm guest
	I1205 21:38:15.568026  358357 out.go:177] * [old-k8s-version-601806] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 21:38:15.569552  358357 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 21:38:15.569581  358357 notify.go:220] Checking for updates...
	I1205 21:38:15.572033  358357 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 21:38:15.573317  358357 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:38:15.574664  358357 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:38:15.576173  358357 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 21:38:15.577543  358357 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 21:38:15.579554  358357 config.go:182] Loaded profile config "old-k8s-version-601806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 21:38:15.580169  358357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:38:15.580230  358357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:38:15.596741  358357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40465
	I1205 21:38:15.597295  358357 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:38:15.598015  358357 main.go:141] libmachine: Using API Version  1
	I1205 21:38:15.598046  358357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:38:15.598475  358357 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:38:15.598711  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:38:15.600576  358357 out.go:177] * Kubernetes 1.31.2 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.31.2
	I1205 21:38:15.602043  358357 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 21:38:15.602381  358357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:38:15.602484  358357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:38:15.618162  358357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36049
	I1205 21:38:15.618929  358357 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:38:15.620894  358357 main.go:141] libmachine: Using API Version  1
	I1205 21:38:15.620922  358357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:38:15.621462  358357 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:38:15.621705  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:38:15.660038  358357 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 21:38:15.661273  358357 start.go:297] selected driver: kvm2
	I1205 21:38:15.661287  358357 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:38:15.661413  358357 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 21:38:15.662304  358357 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:38:15.662396  358357 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 21:38:15.678948  358357 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 21:38:15.679372  358357 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:38:15.679406  358357 cni.go:84] Creating CNI manager for ""
	I1205 21:38:15.679443  358357 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:38:15.679479  358357 start.go:340] cluster config:
	{Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:38:15.679592  358357 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 21:38:15.681409  358357 out.go:177] * Starting "old-k8s-version-601806" primary control-plane node in "old-k8s-version-601806" cluster
	I1205 21:38:12.362239  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:15.434192  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:15.682585  358357 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 21:38:15.682646  358357 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1205 21:38:15.682657  358357 cache.go:56] Caching tarball of preloaded images
	I1205 21:38:15.682742  358357 preload.go:172] Found /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1205 21:38:15.682752  358357 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1205 21:38:15.682873  358357 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/config.json ...
	I1205 21:38:15.683066  358357 start.go:360] acquireMachinesLock for old-k8s-version-601806: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:38:21.514200  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:24.586255  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:30.666205  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:33.738246  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:39.818259  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:42.890268  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:48.970246  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:52.042258  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:38:58.122192  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:01.194261  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:07.274293  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:10.346237  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:16.426260  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:19.498251  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:25.578215  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:28.650182  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:34.730233  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:37.802242  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:43.882204  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:46.954259  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:53.034221  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:39:56.106303  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:02.186236  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:05.258270  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:11.338291  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:14.410261  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:20.490214  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:23.562239  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:29.642246  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:32.714183  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:38.794265  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:41.866189  357296 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.8:22: connect: no route to host
	I1205 21:40:44.870871  357831 start.go:364] duration metric: took 3m51.861097835s to acquireMachinesLock for "no-preload-500648"
	I1205 21:40:44.870962  357831 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:40:44.870974  357831 fix.go:54] fixHost starting: 
	I1205 21:40:44.871374  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:40:44.871425  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:40:44.889484  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34557
	I1205 21:40:44.890105  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:40:44.890780  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:40:44.890815  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:40:44.891254  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:40:44.891517  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:40:44.891744  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:40:44.893857  357831 fix.go:112] recreateIfNeeded on no-preload-500648: state=Stopped err=<nil>
	I1205 21:40:44.893927  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	W1205 21:40:44.894116  357831 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:40:44.897039  357831 out.go:177] * Restarting existing kvm2 VM for "no-preload-500648" ...
	I1205 21:40:44.868152  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:40:44.868210  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:40:44.868588  357296 buildroot.go:166] provisioning hostname "embed-certs-425614"
	I1205 21:40:44.868618  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:40:44.868823  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:40:44.870659  357296 machine.go:96] duration metric: took 4m37.397267419s to provisionDockerMachine
	I1205 21:40:44.870718  357296 fix.go:56] duration metric: took 4m37.422503321s for fixHost
	I1205 21:40:44.870724  357296 start.go:83] releasing machines lock for "embed-certs-425614", held for 4m37.422523792s
	W1205 21:40:44.870750  357296 start.go:714] error starting host: provision: host is not running
	W1205 21:40:44.870880  357296 out.go:270] ! StartHost failed, but will try again: provision: host is not running
	I1205 21:40:44.870891  357296 start.go:729] Will try again in 5 seconds ...
	I1205 21:40:44.898504  357831 main.go:141] libmachine: (no-preload-500648) Calling .Start
	I1205 21:40:44.898749  357831 main.go:141] libmachine: (no-preload-500648) Ensuring networks are active...
	I1205 21:40:44.899604  357831 main.go:141] libmachine: (no-preload-500648) Ensuring network default is active
	I1205 21:40:44.899998  357831 main.go:141] libmachine: (no-preload-500648) Ensuring network mk-no-preload-500648 is active
	I1205 21:40:44.900472  357831 main.go:141] libmachine: (no-preload-500648) Getting domain xml...
	I1205 21:40:44.901210  357831 main.go:141] libmachine: (no-preload-500648) Creating domain...
	I1205 21:40:46.138820  357831 main.go:141] libmachine: (no-preload-500648) Waiting to get IP...
	I1205 21:40:46.139714  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:46.140107  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:46.140214  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:46.140113  358875 retry.go:31] will retry after 297.599003ms: waiting for machine to come up
	I1205 21:40:46.439848  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:46.440360  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:46.440421  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:46.440242  358875 retry.go:31] will retry after 243.531701ms: waiting for machine to come up
	I1205 21:40:46.685793  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:46.686251  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:46.686282  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:46.686199  358875 retry.go:31] will retry after 395.19149ms: waiting for machine to come up
	I1205 21:40:47.082735  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:47.083192  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:47.083216  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:47.083150  358875 retry.go:31] will retry after 591.156988ms: waiting for machine to come up
	I1205 21:40:47.675935  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:47.676381  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:47.676414  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:47.676308  358875 retry.go:31] will retry after 706.616299ms: waiting for machine to come up
	I1205 21:40:49.872843  357296 start.go:360] acquireMachinesLock for embed-certs-425614: {Name:mka8f82518aff901a26e4e81a48783d4e01b4161 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1205 21:40:48.384278  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:48.384666  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:48.384696  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:48.384611  358875 retry.go:31] will retry after 859.724415ms: waiting for machine to come up
	I1205 21:40:49.245895  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:49.246294  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:49.246323  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:49.246239  358875 retry.go:31] will retry after 915.790977ms: waiting for machine to come up
	I1205 21:40:50.164042  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:50.164570  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:50.164600  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:50.164514  358875 retry.go:31] will retry after 1.283530276s: waiting for machine to come up
	I1205 21:40:51.450256  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:51.450664  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:51.450692  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:51.450595  358875 retry.go:31] will retry after 1.347371269s: waiting for machine to come up
	I1205 21:40:52.800263  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:52.800702  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:52.800732  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:52.800637  358875 retry.go:31] will retry after 1.982593955s: waiting for machine to come up
	I1205 21:40:54.785977  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:54.786644  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:54.786705  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:54.786525  358875 retry.go:31] will retry after 2.41669899s: waiting for machine to come up
	I1205 21:40:57.205989  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:40:57.206403  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:40:57.206428  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:40:57.206335  358875 retry.go:31] will retry after 2.992148692s: waiting for machine to come up
	I1205 21:41:00.200589  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:00.201093  357831 main.go:141] libmachine: (no-preload-500648) DBG | unable to find current IP address of domain no-preload-500648 in network mk-no-preload-500648
	I1205 21:41:00.201139  357831 main.go:141] libmachine: (no-preload-500648) DBG | I1205 21:41:00.201028  358875 retry.go:31] will retry after 3.716252757s: waiting for machine to come up
	I1205 21:41:05.171227  357912 start.go:364] duration metric: took 4m4.735770407s to acquireMachinesLock for "default-k8s-diff-port-751353"
	I1205 21:41:05.171353  357912 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:41:05.171382  357912 fix.go:54] fixHost starting: 
	I1205 21:41:05.172206  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:05.172294  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:05.190413  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35791
	I1205 21:41:05.190911  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:05.191473  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:05.191497  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:05.191841  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:05.192052  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:05.192199  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:05.193839  357912 fix.go:112] recreateIfNeeded on default-k8s-diff-port-751353: state=Stopped err=<nil>
	I1205 21:41:05.193867  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	W1205 21:41:05.194042  357912 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:41:05.196358  357912 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-751353" ...
	I1205 21:41:05.197683  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Start
	I1205 21:41:05.197958  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Ensuring networks are active...
	I1205 21:41:05.198819  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Ensuring network default is active
	I1205 21:41:05.199225  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Ensuring network mk-default-k8s-diff-port-751353 is active
	I1205 21:41:05.199740  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Getting domain xml...
	I1205 21:41:05.200544  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Creating domain...
	I1205 21:41:03.922338  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.922889  357831 main.go:141] libmachine: (no-preload-500648) Found IP for machine: 192.168.50.141
	I1205 21:41:03.922911  357831 main.go:141] libmachine: (no-preload-500648) Reserving static IP address...
	I1205 21:41:03.922924  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has current primary IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.923476  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "no-preload-500648", mac: "52:54:00:98:f0:c5", ip: "192.168.50.141"} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:03.923500  357831 main.go:141] libmachine: (no-preload-500648) DBG | skip adding static IP to network mk-no-preload-500648 - found existing host DHCP lease matching {name: "no-preload-500648", mac: "52:54:00:98:f0:c5", ip: "192.168.50.141"}
	I1205 21:41:03.923514  357831 main.go:141] libmachine: (no-preload-500648) DBG | Getting to WaitForSSH function...
	I1205 21:41:03.923583  357831 main.go:141] libmachine: (no-preload-500648) Reserved static IP address: 192.168.50.141
	I1205 21:41:03.923617  357831 main.go:141] libmachine: (no-preload-500648) Waiting for SSH to be available...
	I1205 21:41:03.926008  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.926299  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:03.926327  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:03.926443  357831 main.go:141] libmachine: (no-preload-500648) DBG | Using SSH client type: external
	I1205 21:41:03.926467  357831 main.go:141] libmachine: (no-preload-500648) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa (-rw-------)
	I1205 21:41:03.926542  357831 main.go:141] libmachine: (no-preload-500648) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.141 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:41:03.926559  357831 main.go:141] libmachine: (no-preload-500648) DBG | About to run SSH command:
	I1205 21:41:03.926582  357831 main.go:141] libmachine: (no-preload-500648) DBG | exit 0
	I1205 21:41:04.054310  357831 main.go:141] libmachine: (no-preload-500648) DBG | SSH cmd err, output: <nil>: 
	I1205 21:41:04.054735  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetConfigRaw
	I1205 21:41:04.055421  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:04.058393  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.058823  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.058857  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.059115  357831 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/config.json ...
	I1205 21:41:04.059357  357831 machine.go:93] provisionDockerMachine start ...
	I1205 21:41:04.059381  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:04.059624  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.061812  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.062139  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.062169  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.062325  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.062530  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.062698  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.062811  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.062947  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.063206  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.063219  357831 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:41:04.174592  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:41:04.174635  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetMachineName
	I1205 21:41:04.174947  357831 buildroot.go:166] provisioning hostname "no-preload-500648"
	I1205 21:41:04.174982  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetMachineName
	I1205 21:41:04.175220  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.178267  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.178732  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.178766  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.178975  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.179191  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.179356  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.179518  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.179683  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.179864  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.179878  357831 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-500648 && echo "no-preload-500648" | sudo tee /etc/hostname
	I1205 21:41:04.304650  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-500648
	
	I1205 21:41:04.304688  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.307897  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.308212  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.308255  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.308441  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.308703  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.308864  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.308994  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.309273  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.309538  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.309570  357831 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-500648' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-500648/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-500648' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:41:04.432111  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:41:04.432158  357831 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:41:04.432186  357831 buildroot.go:174] setting up certificates
	I1205 21:41:04.432198  357831 provision.go:84] configureAuth start
	I1205 21:41:04.432214  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetMachineName
	I1205 21:41:04.432569  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:04.435826  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.436298  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.436348  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.436535  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.439004  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.439384  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.439412  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.439632  357831 provision.go:143] copyHostCerts
	I1205 21:41:04.439708  357831 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:41:04.439736  357831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:41:04.439826  357831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:41:04.439951  357831 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:41:04.439963  357831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:41:04.440006  357831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:41:04.440090  357831 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:41:04.440100  357831 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:41:04.440133  357831 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:41:04.440206  357831 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.no-preload-500648 san=[127.0.0.1 192.168.50.141 localhost minikube no-preload-500648]
	I1205 21:41:04.514253  357831 provision.go:177] copyRemoteCerts
	I1205 21:41:04.514330  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:41:04.514372  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.517413  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.517811  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.517845  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.518067  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.518361  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.518597  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.518773  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:04.611530  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:41:04.637201  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1205 21:41:04.661934  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:41:04.686618  357831 provision.go:87] duration metric: took 254.404192ms to configureAuth
	I1205 21:41:04.686654  357831 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:41:04.686834  357831 config.go:182] Loaded profile config "no-preload-500648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:41:04.686921  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.690232  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.690677  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.690709  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.690907  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.691145  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.691456  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.691605  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.691811  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:04.692003  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:04.692020  357831 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:41:04.922195  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:41:04.922228  357831 machine.go:96] duration metric: took 862.853823ms to provisionDockerMachine
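
For context on the step just completed: the drop-in written to /etc/sysconfig/crio.minikube carries extra CRI-O flags (here the insecure-registry entry for the service CIDR), and CRI-O is restarted so the flag takes effect. A rough sketch of how that command string could be assembled, assuming a hypothetical helper:

```go
package main

import "fmt"

// crioSysconfigCmd is a hypothetical helper that reproduces the command
// shown in the log: write CRIO_MINIKUBE_OPTIONS into
// /etc/sysconfig/crio.minikube and restart cri-o so the extra flags apply.
func crioSysconfigCmd(serviceCIDR string) string {
	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '
" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, serviceCIDR)
}

func main() {
	fmt.Println(crioSysconfigCmd("10.96.0.0/12"))
}
```
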
	I1205 21:41:04.922245  357831 start.go:293] postStartSetup for "no-preload-500648" (driver="kvm2")
	I1205 21:41:04.922275  357831 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:41:04.922296  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:04.922662  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:41:04.922698  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:04.925928  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.926441  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:04.926474  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:04.926628  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:04.926810  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:04.926928  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:04.927024  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:05.013131  357831 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:41:05.017518  357831 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:41:05.017552  357831 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:41:05.017635  357831 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:41:05.017713  357831 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:41:05.017814  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:41:05.027935  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:05.052403  357831 start.go:296] duration metric: took 130.117347ms for postStartSetup
	I1205 21:41:05.052469  357831 fix.go:56] duration metric: took 20.181495969s for fixHost
	I1205 21:41:05.052493  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:05.055902  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.056329  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.056381  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.056574  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:05.056832  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.056993  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.057144  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:05.057327  357831 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:05.057534  357831 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.50.141 22 <nil> <nil>}
	I1205 21:41:05.057548  357831 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:41:05.171012  357831 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434865.146406477
	
	I1205 21:41:05.171041  357831 fix.go:216] guest clock: 1733434865.146406477
	I1205 21:41:05.171051  357831 fix.go:229] Guest: 2024-12-05 21:41:05.146406477 +0000 UTC Remote: 2024-12-05 21:41:05.052473548 +0000 UTC m=+252.199777630 (delta=93.932929ms)
	I1205 21:41:05.171075  357831 fix.go:200] guest clock delta is within tolerance: 93.932929ms
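
The clock check above subtracts the host-side timestamp from the guest's date +%s.%N reading; the roughly 94ms difference is well inside tolerance, so no clock adjustment is made. A small sketch of the arithmetic follows; the 2s tolerance is an assumption for illustration, since the log does not print the actual threshold.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps taken from the log lines above.
	guest := time.Date(2024, 12, 5, 21, 41, 5, 146406477, time.UTC)
	remote := time.Date(2024, 12, 5, 21, 41, 5, 52473548, time.UTC)
	delta := guest.Sub(remote)    // 93.932929ms, matching the reported delta
	tolerance := 2 * time.Second  // assumed value, not from the log
	ok := delta < tolerance && delta > -tolerance
	fmt.Printf("delta=%v, within tolerance=%v\n", delta, ok)
}
```
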
	I1205 21:41:05.171087  357831 start.go:83] releasing machines lock for "no-preload-500648", held for 20.300173371s
	I1205 21:41:05.171115  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.171462  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:05.174267  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.174716  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.174747  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.174893  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.175500  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.175738  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:41:05.175856  357831 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:41:05.175910  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:05.176016  357831 ssh_runner.go:195] Run: cat /version.json
	I1205 21:41:05.176053  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:41:05.179122  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179281  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179567  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.179595  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179620  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:05.179637  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:05.179785  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:05.179924  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:41:05.180016  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.180163  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:41:05.180167  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:05.180365  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:05.180376  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:41:05.180564  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:41:05.286502  357831 ssh_runner.go:195] Run: systemctl --version
	I1205 21:41:05.292793  357831 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:41:05.436742  357831 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:41:05.442389  357831 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:41:05.442473  357831 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:41:05.460161  357831 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:41:05.460198  357831 start.go:495] detecting cgroup driver to use...
	I1205 21:41:05.460287  357831 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:41:05.476989  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:41:05.490676  357831 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:41:05.490747  357831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:41:05.504437  357831 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:41:05.518314  357831 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:41:05.649582  357831 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:41:05.831575  357831 docker.go:233] disabling docker service ...
	I1205 21:41:05.831650  357831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:41:05.851482  357831 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:41:05.865266  357831 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:41:05.981194  357831 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:41:06.107386  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:41:06.125290  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:41:06.143817  357831 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:41:06.143919  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.154167  357831 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:41:06.154259  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.165640  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.177412  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.190668  357831 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:41:06.201712  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.213455  357831 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:06.232565  357831 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
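
Taken together, the sed edits above configure CRI-O's minikube drop-in: the pause image, the cgroupfs cgroup manager, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. The reconstruction below is inferred from those commands rather than captured from the VM, and the TOML section names are assumptions about how the drop-in is laid out.

```go
package main

import "fmt"

// Reconstruction (inferred from the sed commands above, not captured from
// the VM) of the settings /etc/crio/crio.conf.d/02-crio.conf should end up
// with. The TOML section names are assumptions about the drop-in's layout.
const crioDropIn = `[crio.image]
pause_image = "registry.k8s.io/pause:3.10"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]
`

func main() { fmt.Print(crioDropIn) }
```
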
	I1205 21:41:06.243746  357831 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:41:06.253809  357831 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:41:06.253878  357831 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:41:06.267573  357831 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
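
The sequence just above is the bridge-netfilter fallback: reading net.bridge.bridge-nf-call-iptables fails because the module is not yet loaded, so br_netfilter is loaded and IPv4 forwarding is enabled before CRI-O restarts. A simplified sketch of the same sequence, with error handling reduced to logging and the command list mirroring the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

// Simplified sketch of the netfilter fallback seen above: the sysctl probe
// fails while br_netfilter is not loaded, so the module is loaded and IPv4
// forwarding enabled. Errors are only logged here; minikube's real flow
// branches on the sysctl result.
func main() {
	steps := [][]string{
		{"sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"},
		{"sudo", "modprobe", "br_netfilter"},
		{"sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			fmt.Printf("%v failed (continuing): %v\n%s", s, err, out)
		}
	}
}
```
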
	I1205 21:41:06.278706  357831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:06.408370  357831 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:41:06.511878  357831 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:41:06.511959  357831 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:41:06.519295  357831 start.go:563] Will wait 60s for crictl version
	I1205 21:41:06.519366  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.523477  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:41:06.562056  357831 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:41:06.562151  357831 ssh_runner.go:195] Run: crio --version
	I1205 21:41:06.595493  357831 ssh_runner.go:195] Run: crio --version
	I1205 21:41:06.630320  357831 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 21:41:06.631796  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetIP
	I1205 21:41:06.634988  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:06.635416  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:41:06.635453  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:41:06.635693  357831 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I1205 21:41:06.639948  357831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:06.653650  357831 kubeadm.go:883] updating cluster {Name:no-preload-500648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-500648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:41:06.653798  357831 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:41:06.653869  357831 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:06.695865  357831 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 21:41:06.695900  357831 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.31.2 registry.k8s.io/kube-controller-manager:v1.31.2 registry.k8s.io/kube-scheduler:v1.31.2 registry.k8s.io/kube-proxy:v1.31.2 registry.k8s.io/pause:3.10 registry.k8s.io/etcd:3.5.15-0 registry.k8s.io/coredns/coredns:v1.11.3 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 21:41:06.695957  357831 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:06.695970  357831 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:06.696005  357831 image.go:135] retrieving image: registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.696049  357831 image.go:135] retrieving image: registry.k8s.io/pause:3.10
	I1205 21:41:06.696060  357831 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:06.696087  357831 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.696061  357831 image.go:135] retrieving image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.696462  357831 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.697982  357831 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.698019  357831 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:06.698016  357831 image.go:178] daemon lookup for registry.k8s.io/etcd:3.5.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.697992  357831 image.go:178] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.3: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.698111  357831 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.698133  357831 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.31.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:06.698286  357831 image.go:178] daemon lookup for registry.k8s.io/pause:3.10: Error response from daemon: No such image: registry.k8s.io/pause:3.10
	I1205 21:41:06.698501  357831 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:06.856605  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.856650  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.869847  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.872242  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.874561  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:06.907303  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:06.920063  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10
	I1205 21:41:06.925542  357831 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.31.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.31.2" does not exist at hash "0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503" in container runtime
	I1205 21:41:06.925592  357831 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:06.925656  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.959677  357831 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.31.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.31.2" does not exist at hash "9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173" in container runtime
	I1205 21:41:06.959738  357831 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:06.959799  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.984175  357831 cache_images.go:116] "registry.k8s.io/etcd:3.5.15-0" needs transfer: "registry.k8s.io/etcd:3.5.15-0" does not exist at hash "2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" in container runtime
	I1205 21:41:06.984219  357831 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:06.984267  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:06.995251  357831 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.3" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.3" does not exist at hash "c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" in container runtime
	I1205 21:41:06.995393  357831 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:06.995547  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:07.017878  357831 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.31.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.31.2" does not exist at hash "847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856" in container runtime
	I1205 21:41:07.017952  357831 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.018014  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:07.027087  357831 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.31.2" needs transfer: "registry.k8s.io/kube-proxy:v1.31.2" does not exist at hash "505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38" in container runtime
	I1205 21:41:07.027151  357831 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.027206  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:07.138510  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:07.138629  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.138509  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:07.138696  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.138577  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:07.138579  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:07.260832  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:07.269638  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:07.269766  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.269837  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:07.276535  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.276611  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:07.344944  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.15-0
	I1205 21:41:07.369612  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.31.2
	I1205 21:41:07.410660  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.31.2
	I1205 21:41:07.410709  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.31.2
	I1205 21:41:07.410815  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.3
	I1205 21:41:07.410817  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.31.2
	I1205 21:41:07.463332  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0
	I1205 21:41:07.463470  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.15-0
	I1205 21:41:07.491657  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2
	I1205 21:41:07.491795  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 21:41:07.531121  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2
	I1205 21:41:07.531150  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2
	I1205 21:41:07.531256  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3
	I1205 21:41:07.531270  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 21:41:07.531292  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2
	I1205 21:41:07.531341  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.3
	I1205 21:41:07.531342  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 21:41:07.531258  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 21:41:07.531400  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.15-0 (exists)
	I1205 21:41:07.531416  357831 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.5.15-0
	I1205 21:41:07.531452  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0
	I1205 21:41:07.531419  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.31.2 (exists)
	I1205 21:41:07.543194  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.31.2 (exists)
	I1205 21:41:07.543221  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.31.2 (exists)
	I1205 21:41:07.543329  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.31.2 (exists)
	I1205 21:41:07.545197  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.3 (exists)
	I1205 21:41:07.599581  357831 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:06.512338  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting to get IP...
	I1205 21:41:06.513323  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.513795  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.513870  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:06.513764  359021 retry.go:31] will retry after 193.323182ms: waiting for machine to come up
	I1205 21:41:06.709218  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.709633  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:06.709667  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:06.709597  359021 retry.go:31] will retry after 359.664637ms: waiting for machine to come up
	I1205 21:41:07.071234  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.071649  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.071677  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:07.071621  359021 retry.go:31] will retry after 315.296814ms: waiting for machine to come up
	I1205 21:41:07.388219  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.388755  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.388788  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:07.388697  359021 retry.go:31] will retry after 607.823337ms: waiting for machine to come up
	I1205 21:41:07.998529  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.998987  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:07.999021  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:07.998924  359021 retry.go:31] will retry after 603.533135ms: waiting for machine to come up
	I1205 21:41:08.603895  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:08.604547  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:08.604592  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:08.604458  359021 retry.go:31] will retry after 584.642321ms: waiting for machine to come up
	I1205 21:41:09.190331  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:09.190835  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:09.190866  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:09.190778  359021 retry.go:31] will retry after 848.646132ms: waiting for machine to come up
	I1205 21:41:10.041037  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:10.041702  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:10.041734  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:10.041632  359021 retry.go:31] will retry after 1.229215485s: waiting for machine to come up
	I1205 21:41:11.124436  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.15-0: (3.592950613s)
	I1205 21:41:11.124474  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.15-0 from cache
	I1205 21:41:11.124504  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 21:41:11.124501  357831 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5: (3.524878217s)
	I1205 21:41:11.124562  357831 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1205 21:41:11.124586  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2
	I1205 21:41:11.124617  357831 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:11.124667  357831 ssh_runner.go:195] Run: which crictl
	I1205 21:41:11.272549  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:11.273204  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:11.273239  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:11.273141  359021 retry.go:31] will retry after 1.721028781s: waiting for machine to come up
	I1205 21:41:12.996546  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:12.996988  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:12.997015  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:12.996932  359021 retry.go:31] will retry after 1.620428313s: waiting for machine to come up
	I1205 21:41:14.619426  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:14.619986  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:14.620021  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:14.619928  359021 retry.go:31] will retry after 1.936504566s: waiting for machine to come up
	I1205 21:41:13.485236  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.31.2: (2.36061811s)
	I1205 21:41:13.485285  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.31.2 from cache
	I1205 21:41:13.485298  357831 ssh_runner.go:235] Completed: which crictl: (2.360608199s)
	I1205 21:41:13.485314  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 21:41:13.485383  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:13.485450  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2
	I1205 21:41:15.556836  357831 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (2.071414459s)
	I1205 21:41:15.556906  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.31.2: (2.071416348s)
	I1205 21:41:15.556935  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:15.556939  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.31.2 from cache
	I1205 21:41:15.557031  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 21:41:15.557069  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2
	I1205 21:41:15.595094  357831 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:17.533984  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.31.2: (1.97688139s)
	I1205 21:41:17.534026  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.31.2 from cache
	I1205 21:41:17.534061  357831 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 21:41:17.534168  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2
	I1205 21:41:17.534059  357831 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (1.938925021s)
	I1205 21:41:17.534239  357831 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1205 21:41:17.534355  357831 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1205 21:41:16.559037  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:16.559676  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:16.559711  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:16.559616  359021 retry.go:31] will retry after 2.748634113s: waiting for machine to come up
	I1205 21:41:19.309762  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:19.310292  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | unable to find current IP address of domain default-k8s-diff-port-751353 in network mk-default-k8s-diff-port-751353
	I1205 21:41:19.310325  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | I1205 21:41:19.310235  359021 retry.go:31] will retry after 4.490589015s: waiting for machine to come up
	I1205 21:41:18.991714  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.31.2: (1.45750646s)
	I1205 21:41:18.991760  357831 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (1.457382547s)
	I1205 21:41:18.991769  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.31.2 from cache
	I1205 21:41:18.991788  357831 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I1205 21:41:18.991796  357831 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1205 21:41:18.991871  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1205 21:41:19.652114  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1205 21:41:19.652153  357831 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.11.3
	I1205 21:41:19.652207  357831 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3
	I1205 21:41:21.430659  357831 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.3: (1.778424474s)
	I1205 21:41:21.430699  357831 cache_images.go:321] Transferred and loaded /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.3 from cache
	I1205 21:41:21.430728  357831 cache_images.go:123] Successfully loaded all cached images
	I1205 21:41:21.430737  357831 cache_images.go:92] duration metric: took 14.734820486s to LoadCachedImages
	I1205 21:41:21.430748  357831 kubeadm.go:934] updating node { 192.168.50.141 8443 v1.31.2 crio true true} ...
	I1205 21:41:21.430896  357831 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-500648 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.141
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:no-preload-500648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
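
The kubelet drop-in above uses the standard systemd override pattern: an empty ExecStart= clears the base unit's command before the second ExecStart= supplies the node-specific flags. Below is a sketch that renders the same drop-in from the values shown in the log; the function name is hypothetical.

```go
package main

import "fmt"

// Illustrative rendering of the kubelet drop-in shown above. The empty
// ExecStart= clears the base unit's command before the second ExecStart=
// supplies the node-specific flags; values are taken from the log and the
// helper name is hypothetical.
func kubeletDropIn(version, node, ip string) string {
	return fmt.Sprintf(`[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/%s/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=%s --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=%s

[Install]
`, version, node, ip)
}

func main() {
	fmt.Print(kubeletDropIn("v1.31.2", "no-preload-500648", "192.168.50.141"))
}
```
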
	I1205 21:41:21.430974  357831 ssh_runner.go:195] Run: crio config
	I1205 21:41:21.485189  357831 cni.go:84] Creating CNI manager for ""
	I1205 21:41:21.485211  357831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:21.485222  357831 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:41:21.485252  357831 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.141 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-500648 NodeName:no-preload-500648 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.141"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.141 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:41:21.485440  357831 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.141
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-500648"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.50.141"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.141"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:41:21.485525  357831 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:41:21.497109  357831 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:41:21.497191  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:41:21.506887  357831 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I1205 21:41:21.524456  357831 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:41:21.541166  357831 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2297 bytes)
	I1205 21:41:21.560513  357831 ssh_runner.go:195] Run: grep 192.168.50.141	control-plane.minikube.internal$ /etc/hosts
	I1205 21:41:21.564597  357831 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.141	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:21.576227  357831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:21.695424  357831 ssh_runner.go:195] Run: sudo systemctl start kubelet
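
The steps just above stage the rendered kubeadm/kubelet configuration on the node, add the control-plane host entry, and restart the kubelet. Condensed into plain shell it amounts to roughly the following (paths and the host entry are copied from this trace; the scp steps are summarized in a comment and the /etc/hosts handling is simplified, so this is an illustrative sketch rather than minikube's actual Go implementation):

  sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
  # 10-kubeadm.conf, kubelet.service and kubeadm.yaml.new are copied over SSH at this point
  grep -q 'control-plane.minikube.internal$' /etc/hosts || \
    printf '192.168.50.141\tcontrol-plane.minikube.internal\n' | sudo tee -a /etc/hosts
  sudo systemctl daemon-reload
  sudo systemctl start kubelet

(The trace itself rewrites any stale control-plane entry before appending the new one, rather than only appending when missing.)
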
	I1205 21:41:21.712683  357831 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648 for IP: 192.168.50.141
	I1205 21:41:21.712711  357831 certs.go:194] generating shared ca certs ...
	I1205 21:41:21.712735  357831 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:21.712951  357831 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:41:21.713005  357831 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:41:21.713019  357831 certs.go:256] generating profile certs ...
	I1205 21:41:21.713143  357831 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/client.key
	I1205 21:41:21.713264  357831 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/apiserver.key.832a65b0
	I1205 21:41:21.713335  357831 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/proxy-client.key
	I1205 21:41:21.713643  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:41:21.713708  357831 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:41:21.713729  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:41:21.713774  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:41:21.713820  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:41:21.713856  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:41:21.713961  357831 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:21.714852  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:41:21.770708  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:41:21.813676  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:41:21.869550  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:41:21.898056  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1205 21:41:21.924076  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 21:41:21.950399  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:41:21.976765  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/no-preload-500648/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 21:41:22.003346  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:41:22.032363  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:41:22.071805  357831 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:41:22.096470  357831 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:41:22.113380  357831 ssh_runner.go:195] Run: openssl version
	I1205 21:41:22.119084  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:41:22.129657  357831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:41:22.134070  357831 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:41:22.134139  357831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:41:22.139838  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:41:22.150575  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:41:22.161366  357831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:41:22.165685  357831 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:41:22.165753  357831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:41:22.171788  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:41:22.182582  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:41:22.193460  357831 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:22.197852  357831 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:22.197934  357831 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:22.203616  357831 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
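
Each of the three certificate blocks above installs a CA into the node trust store the same way: link the PEM into /etc/ssl/certs, compute its OpenSSL subject hash, and create the <hash>.0 symlink that OpenSSL's lookup expects. A minimal sketch of that pattern for minikubeCA.pem, using the same paths as the trace:

  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
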
	I1205 21:41:22.215612  357831 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:41:22.220715  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:41:22.226952  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:41:22.233017  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:41:22.239118  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:41:22.245106  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:41:22.251085  357831 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
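
The expiry checks above rely on openssl's -checkend flag: the command exits non-zero if the certificate expires within the given number of seconds (86400 = 24 hours), which is what decides whether a cert gets regenerated. For example:

  if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
    echo "cert still valid for at least another 24h"
  else
    echo "cert expires within 24h - would trigger regeneration"
  fi
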
	I1205 21:41:22.257047  357831 kubeadm.go:392] StartCluster: {Name:no-preload-500648 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:no-preload-500648 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:41:22.257152  357831 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:41:22.257201  357831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:22.294003  357831 cri.go:89] found id: ""
	I1205 21:41:22.294119  357831 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:41:22.304604  357831 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:41:22.304627  357831 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:41:22.304690  357831 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:41:22.314398  357831 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:41:22.315469  357831 kubeconfig.go:125] found "no-preload-500648" server: "https://192.168.50.141:8443"
	I1205 21:41:22.317845  357831 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:41:22.327468  357831 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.50.141
	I1205 21:41:22.327516  357831 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:41:22.327546  357831 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:41:22.327623  357831 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:22.360852  357831 cri.go:89] found id: ""
	I1205 21:41:22.360955  357831 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:41:22.378555  357831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:41:22.388502  357831 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:41:22.388526  357831 kubeadm.go:157] found existing configuration files:
	
	I1205 21:41:22.388614  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:41:22.397598  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:41:22.397664  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:41:22.407664  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:41:22.417114  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:41:22.417192  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:41:22.427221  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:41:22.436656  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:41:22.436731  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:41:22.446571  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:41:22.456048  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:41:22.456120  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:41:22.466146  357831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:41:22.476563  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:22.582506  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:25.151918  358357 start.go:364] duration metric: took 3m9.46879842s to acquireMachinesLock for "old-k8s-version-601806"
	I1205 21:41:25.151996  358357 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:41:25.152009  358357 fix.go:54] fixHost starting: 
	I1205 21:41:25.152489  358357 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:25.152557  358357 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:25.172080  358357 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36071
	I1205 21:41:25.172722  358357 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:25.173396  358357 main.go:141] libmachine: Using API Version  1
	I1205 21:41:25.173426  358357 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:25.173791  358357 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:25.174049  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:25.174226  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetState
	I1205 21:41:25.176109  358357 fix.go:112] recreateIfNeeded on old-k8s-version-601806: state=Stopped err=<nil>
	I1205 21:41:25.176156  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	W1205 21:41:25.176374  358357 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:41:25.178317  358357 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-601806" ...
	I1205 21:41:23.803088  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.803582  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has current primary IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.803605  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Found IP for machine: 192.168.39.106
	I1205 21:41:23.803619  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Reserving static IP address...
	I1205 21:41:23.804049  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-751353", mac: "52:54:00:9a:bc:70", ip: "192.168.39.106"} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.804083  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Reserved static IP address: 192.168.39.106
	I1205 21:41:23.804103  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | skip adding static IP to network mk-default-k8s-diff-port-751353 - found existing host DHCP lease matching {name: "default-k8s-diff-port-751353", mac: "52:54:00:9a:bc:70", ip: "192.168.39.106"}
	I1205 21:41:23.804129  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Getting to WaitForSSH function...
	I1205 21:41:23.804158  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Waiting for SSH to be available...
	I1205 21:41:23.806941  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.807341  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.807372  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.807500  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Using SSH client type: external
	I1205 21:41:23.807527  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa (-rw-------)
	I1205 21:41:23.807597  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.106 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:41:23.807626  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | About to run SSH command:
	I1205 21:41:23.807645  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | exit 0
	I1205 21:41:23.938988  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | SSH cmd err, output: <nil>: 
	I1205 21:41:23.939382  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetConfigRaw
	I1205 21:41:23.940370  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:23.943944  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.944399  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.944433  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.944788  357912 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/config.json ...
	I1205 21:41:23.945040  357912 machine.go:93] provisionDockerMachine start ...
	I1205 21:41:23.945065  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:23.945331  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:23.948166  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.948598  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:23.948633  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:23.948777  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:23.948980  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:23.949138  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:23.949265  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:23.949425  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:23.949655  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:23.949669  357912 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:41:24.062400  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:41:24.062440  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetMachineName
	I1205 21:41:24.062712  357912 buildroot.go:166] provisioning hostname "default-k8s-diff-port-751353"
	I1205 21:41:24.062742  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetMachineName
	I1205 21:41:24.062947  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.065557  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.066077  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.066109  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.066235  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.066415  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.066571  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.066751  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.066932  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:24.067122  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:24.067134  357912 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-751353 && echo "default-k8s-diff-port-751353" | sudo tee /etc/hostname
	I1205 21:41:24.190609  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-751353
	
	I1205 21:41:24.190662  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.193538  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.193946  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.193985  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.194231  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.194443  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.194660  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.194909  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.195186  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:24.195396  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:24.195417  357912 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-751353' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-751353/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-751353' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:41:24.310725  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:41:24.310770  357912 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:41:24.310812  357912 buildroot.go:174] setting up certificates
	I1205 21:41:24.310829  357912 provision.go:84] configureAuth start
	I1205 21:41:24.310839  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetMachineName
	I1205 21:41:24.311138  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:24.314161  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.314528  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.314552  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.314722  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.316953  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.317283  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.317324  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.317483  357912 provision.go:143] copyHostCerts
	I1205 21:41:24.317548  357912 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:41:24.317571  357912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:41:24.317629  357912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:41:24.317723  357912 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:41:24.317732  357912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:41:24.317753  357912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:41:24.317872  357912 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:41:24.317883  357912 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:41:24.317933  357912 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:41:24.318001  357912 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-751353 san=[127.0.0.1 192.168.39.106 default-k8s-diff-port-751353 localhost minikube]
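
The server certificate above is generated in Go, signed by the shared minikube CA, and carries the listed SANs. A roughly equivalent, hypothetical openssl sequence (the ca.pem/ca-key.pem and server*.pem file names here are placeholders; the organization and SAN list are copied from this trace) would be:

  openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr \
    -subj "/O=jenkins.default-k8s-diff-port-751353"
  openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem \
    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.39.106,DNS:default-k8s-diff-port-751353,DNS:localhost,DNS:minikube')
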
	I1205 21:41:24.483065  357912 provision.go:177] copyRemoteCerts
	I1205 21:41:24.483137  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:41:24.483175  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.486663  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.487074  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.487112  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.487277  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.487508  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.487726  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.487899  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:24.572469  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:41:24.597375  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1205 21:41:24.622122  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:41:24.649143  357912 provision.go:87] duration metric: took 338.295707ms to configureAuth
	I1205 21:41:24.649188  357912 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:41:24.649464  357912 config.go:182] Loaded profile config "default-k8s-diff-port-751353": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:41:24.649609  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.652646  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.653051  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.653101  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.653259  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.653492  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.653689  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.653841  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.654054  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:24.654213  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:24.654235  357912 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:41:24.893672  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:41:24.893703  357912 machine.go:96] duration metric: took 948.646561ms to provisionDockerMachine
	I1205 21:41:24.893719  357912 start.go:293] postStartSetup for "default-k8s-diff-port-751353" (driver="kvm2")
	I1205 21:41:24.893733  357912 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:41:24.893755  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:24.894145  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:41:24.894185  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:24.897565  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.897988  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:24.898022  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:24.898262  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:24.898579  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:24.898840  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:24.899066  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:24.986299  357912 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:41:24.991211  357912 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:41:24.991251  357912 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:41:24.991341  357912 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:41:24.991456  357912 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:41:24.991601  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:41:25.002264  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:25.031129  357912 start.go:296] duration metric: took 137.388294ms for postStartSetup
	I1205 21:41:25.031184  357912 fix.go:56] duration metric: took 19.859807882s for fixHost
	I1205 21:41:25.031214  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:25.034339  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.034678  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.034715  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.035027  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:25.035309  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.035501  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.035655  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:25.035858  357912 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:25.036066  357912 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.39.106 22 <nil> <nil>}
	I1205 21:41:25.036081  357912 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:41:25.151697  357912 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434885.125327326
	
	I1205 21:41:25.151729  357912 fix.go:216] guest clock: 1733434885.125327326
	I1205 21:41:25.151741  357912 fix.go:229] Guest: 2024-12-05 21:41:25.125327326 +0000 UTC Remote: 2024-12-05 21:41:25.03119011 +0000 UTC m=+264.754619927 (delta=94.137216ms)
	I1205 21:41:25.151796  357912 fix.go:200] guest clock delta is within tolerance: 94.137216ms
	I1205 21:41:25.151807  357912 start.go:83] releasing machines lock for "default-k8s-diff-port-751353", held for 19.980496597s
	I1205 21:41:25.151845  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.152105  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:25.155285  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.155698  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.155735  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.155871  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.156424  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.156613  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:25.156747  357912 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:41:25.156796  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:25.156844  357912 ssh_runner.go:195] Run: cat /version.json
	I1205 21:41:25.156876  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:25.159945  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160382  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160439  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.160464  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160692  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:25.160722  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:25.160728  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:25.160943  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:25.160957  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.161100  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:25.161218  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:25.161341  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:25.161370  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:25.161473  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:25.244449  357912 ssh_runner.go:195] Run: systemctl --version
	I1205 21:41:25.271151  357912 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:41:25.179884  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .Start
	I1205 21:41:25.180144  358357 main.go:141] libmachine: (old-k8s-version-601806) Ensuring networks are active...
	I1205 21:41:25.181095  358357 main.go:141] libmachine: (old-k8s-version-601806) Ensuring network default is active
	I1205 21:41:25.181522  358357 main.go:141] libmachine: (old-k8s-version-601806) Ensuring network mk-old-k8s-version-601806 is active
	I1205 21:41:25.181972  358357 main.go:141] libmachine: (old-k8s-version-601806) Getting domain xml...
	I1205 21:41:25.182848  358357 main.go:141] libmachine: (old-k8s-version-601806) Creating domain...
	I1205 21:41:25.428417  357912 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:41:25.436849  357912 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:41:25.436929  357912 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:41:25.457952  357912 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:41:25.457989  357912 start.go:495] detecting cgroup driver to use...
	I1205 21:41:25.458073  357912 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:41:25.478406  357912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:41:25.497547  357912 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:41:25.497636  357912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:41:25.516564  357912 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:41:25.535753  357912 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:41:25.692182  357912 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:41:25.880739  357912 docker.go:233] disabling docker service ...
	I1205 21:41:25.880812  357912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:41:25.896490  357912 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:41:25.911107  357912 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:41:26.048384  357912 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:41:26.186026  357912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:41:26.200922  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:41:26.221768  357912 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:41:26.221848  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.232550  357912 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:41:26.232665  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.243173  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.254233  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.264888  357912 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:41:26.275876  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.286642  357912 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.311188  357912 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:26.322696  357912 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:41:26.332006  357912 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:41:26.332075  357912 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:41:26.345881  357912 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:41:26.362014  357912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:26.487972  357912 ssh_runner.go:195] Run: sudo systemctl restart crio
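
The sequence from 21:41:26.200 through 26.487 above points crictl at the CRI-O socket, rewrites the CRI-O drop-in (pause image and cgroup driver), loads the bridge netfilter module, enables IP forwarding, and restarts the runtime. As a consolidated shell sketch, with the values copied from this trace and the sysctl/CNI housekeeping commands shown above omitted:

  printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf
  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
  sudo modprobe br_netfilter
  sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
  sudo systemctl daemon-reload && sudo systemctl restart crio
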
	I1205 21:41:26.584162  357912 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:41:26.584275  357912 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:41:26.589290  357912 start.go:563] Will wait 60s for crictl version
	I1205 21:41:26.589379  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:41:26.593337  357912 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:41:26.629326  357912 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:41:26.629455  357912 ssh_runner.go:195] Run: crio --version
	I1205 21:41:26.656684  357912 ssh_runner.go:195] Run: crio --version
	I1205 21:41:26.685571  357912 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 21:41:23.536422  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:23.749946  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:23.804210  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
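
Because existing configuration files were found (the restartPrimaryControlPlane path above), the tooling does not run a full kubeadm init; it promotes the staged config and replays individual init phases, as seen between 21:41:22.47 and 21:41:23.80. The same sequence, expressed as a small shell sketch with the paths and version from this trace:

  sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
  for phase in 'certs all' 'kubeconfig all' 'kubelet-start' 'control-plane all' 'etcd local'; do
    # word splitting of $phase is intentional: each entry is "<phase> [<subphase>]"
    sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" \
      kubeadm init phase $phase --config /var/tmp/minikube/kubeadm.yaml
  done
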
	I1205 21:41:23.887538  357831 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:41:23.887671  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:24.387809  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:24.887821  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:24.905947  357831 api_server.go:72] duration metric: took 1.018402152s to wait for apiserver process to appear ...
	I1205 21:41:24.905979  357831 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:41:24.906008  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:24.906658  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": dial tcp 192.168.50.141:8443: connect: connection refused
	I1205 21:41:25.406416  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:26.687438  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetIP
	I1205 21:41:26.690614  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:26.691032  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:26.691070  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:26.691314  357912 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I1205 21:41:26.695524  357912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:26.708289  357912 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-751353 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-751353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:41:26.708409  357912 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:41:26.708474  357912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:26.757258  357912 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 21:41:26.757363  357912 ssh_runner.go:195] Run: which lz4
	I1205 21:41:26.762809  357912 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:41:26.767369  357912 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:41:26.767411  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 21:41:28.161289  357912 crio.go:462] duration metric: took 1.398584393s to copy over tarball
	I1205 21:41:28.161397  357912 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 21:41:26.542343  358357 main.go:141] libmachine: (old-k8s-version-601806) Waiting to get IP...
	I1205 21:41:26.543246  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:26.543692  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:26.543765  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:26.543663  359172 retry.go:31] will retry after 193.087452ms: waiting for machine to come up
	I1205 21:41:26.738243  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:26.738682  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:26.738713  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:26.738634  359172 retry.go:31] will retry after 347.304831ms: waiting for machine to come up
	I1205 21:41:27.088372  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:27.088982  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:27.089018  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:27.088880  359172 retry.go:31] will retry after 416.785806ms: waiting for machine to come up
	I1205 21:41:27.507765  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:27.508291  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:27.508320  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:27.508250  359172 retry.go:31] will retry after 407.585006ms: waiting for machine to come up
	I1205 21:41:27.918225  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:27.918900  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:27.918930  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:27.918844  359172 retry.go:31] will retry after 612.014901ms: waiting for machine to come up
	I1205 21:41:28.532179  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:28.532625  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:28.532658  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:28.532561  359172 retry.go:31] will retry after 784.813224ms: waiting for machine to come up
	I1205 21:41:29.318697  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:29.319199  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:29.319234  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:29.319136  359172 retry.go:31] will retry after 827.384433ms: waiting for machine to come up
	I1205 21:41:30.148284  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:30.148684  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:30.148711  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:30.148642  359172 retry.go:31] will retry after 1.314535235s: waiting for machine to come up
	I1205 21:41:30.406823  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:30.406896  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:30.321824  357912 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.16037347s)
	I1205 21:41:30.321868  357912 crio.go:469] duration metric: took 2.160535841s to extract the tarball
	I1205 21:41:30.321879  357912 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 21:41:30.358990  357912 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:30.401957  357912 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 21:41:30.401988  357912 cache_images.go:84] Images are preloaded, skipping loading
	I1205 21:41:30.402000  357912 kubeadm.go:934] updating node { 192.168.39.106 8444 v1.31.2 crio true true} ...
	I1205 21:41:30.402143  357912 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-751353 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.106
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-751353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:41:30.402242  357912 ssh_runner.go:195] Run: crio config
	I1205 21:41:30.452788  357912 cni.go:84] Creating CNI manager for ""
	I1205 21:41:30.452819  357912 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:30.452832  357912 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:41:30.452864  357912 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.106 APIServerPort:8444 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-751353 NodeName:default-k8s-diff-port-751353 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.106"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.106 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:41:30.453016  357912 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.106
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-751353"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.39.106"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.106"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
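	The block above is the rendered multi-document kubeadm config that ends up at /var/tmp/minikube/kubeadm.yaml: an InitConfiguration and a ClusterConfiguration (kubeadm.k8s.io/v1beta4), a KubeletConfiguration, and a KubeProxyConfiguration, separated by "---" lines. Below is a minimal sketch of listing the documents in such a file, standard library only; the path comes from the log, and the program is illustrative rather than part of minikube.

// kubeadmdocs.go: list the apiVersion/kind of each document in a
// multi-document kubeadm config such as /var/tmp/minikube/kubeadm.yaml.
// Illustrative sketch only.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	path := "/var/tmp/minikube/kubeadm.yaml" // path used in the log above
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// kubeadm separates the embedded API objects with "---" lines.
	for i, doc := range strings.Split(string(data), "\n---") {
		var apiVersion, kind string
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "apiVersion:") {
				apiVersion = strings.TrimSpace(strings.TrimPrefix(trimmed, "apiVersion:"))
			}
			if strings.HasPrefix(trimmed, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:"))
			}
		}
		fmt.Printf("document %d: %s %s\n", i+1, apiVersion, kind)
	}
}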
	I1205 21:41:30.453081  357912 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:41:30.463027  357912 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:41:30.463098  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:41:30.472345  357912 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1205 21:41:30.489050  357912 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:41:30.505872  357912 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2308 bytes)
	I1205 21:41:30.523157  357912 ssh_runner.go:195] Run: grep 192.168.39.106	control-plane.minikube.internal$ /etc/hosts
	I1205 21:41:30.527012  357912 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.106	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:30.538965  357912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:30.668866  357912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:30.686150  357912 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353 for IP: 192.168.39.106
	I1205 21:41:30.686187  357912 certs.go:194] generating shared ca certs ...
	I1205 21:41:30.686218  357912 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:30.686416  357912 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:41:30.686483  357912 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:41:30.686499  357912 certs.go:256] generating profile certs ...
	I1205 21:41:30.686629  357912 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/client.key
	I1205 21:41:30.686701  357912 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/apiserver.key.ec661d8c
	I1205 21:41:30.686738  357912 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/proxy-client.key
	I1205 21:41:30.686861  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:41:30.686890  357912 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:41:30.686898  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:41:30.686921  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:41:30.686942  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:41:30.686979  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:41:30.687017  357912 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:30.687858  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:41:30.732722  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:41:30.762557  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:41:30.797976  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:41:30.825854  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1205 21:41:30.863220  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 21:41:30.887018  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:41:30.913503  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/default-k8s-diff-port-751353/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1205 21:41:30.940557  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:41:30.965468  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:41:30.991147  357912 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:41:31.016782  357912 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:41:31.036286  357912 ssh_runner.go:195] Run: openssl version
	I1205 21:41:31.042388  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:41:31.053011  357912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:31.057796  357912 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:31.057880  357912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:31.064075  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:41:31.076633  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:41:31.089138  357912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:41:31.093653  357912 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:41:31.093733  357912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:41:31.099403  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:41:31.111902  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:41:31.122743  357912 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:41:31.127551  357912 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:41:31.127666  357912 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:41:31.133373  357912 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:41:31.143934  357912 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:41:31.148739  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:41:31.154995  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:41:31.161288  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:41:31.167555  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:41:31.173476  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:41:31.179371  357912 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 21:41:31.185238  357912 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-751353 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:default-k8s-diff-port-751353 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.106 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:41:31.185381  357912 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:41:31.185440  357912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:31.221359  357912 cri.go:89] found id: ""
	I1205 21:41:31.221448  357912 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:41:31.231975  357912 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:41:31.231997  357912 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:41:31.232043  357912 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:41:31.241662  357912 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:41:31.242685  357912 kubeconfig.go:125] found "default-k8s-diff-port-751353" server: "https://192.168.39.106:8444"
	I1205 21:41:31.244889  357912 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:41:31.254747  357912 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.39.106
	I1205 21:41:31.254798  357912 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:41:31.254815  357912 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:41:31.254884  357912 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:31.291980  357912 cri.go:89] found id: ""
	I1205 21:41:31.292075  357912 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:41:31.312332  357912 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:41:31.322240  357912 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:41:31.322267  357912 kubeadm.go:157] found existing configuration files:
	
	I1205 21:41:31.322323  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1205 21:41:31.331374  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:41:31.331462  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:41:31.340916  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1205 21:41:31.350121  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:41:31.350209  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:41:31.361302  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1205 21:41:31.372251  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:41:31.372316  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:41:31.383250  357912 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1205 21:41:31.393771  357912 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:41:31.393830  357912 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:41:31.404949  357912 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:41:31.416349  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:31.518522  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:32.687862  357912 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.169290848s)
	I1205 21:41:32.687902  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:32.918041  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:33.001916  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:33.088916  357912 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:41:33.089029  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:33.589452  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:34.089830  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:34.589399  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:34.606029  357912 api_server.go:72] duration metric: took 1.517086306s to wait for apiserver process to appear ...
	I1205 21:41:34.606071  357912 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:41:34.606100  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:31.465575  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:31.466129  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:31.466149  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:31.466051  359172 retry.go:31] will retry after 1.375463745s: waiting for machine to come up
	I1205 21:41:32.843149  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:32.843640  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:32.843672  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:32.843577  359172 retry.go:31] will retry after 1.414652744s: waiting for machine to come up
	I1205 21:41:34.259549  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:34.260076  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:34.260106  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:34.260026  359172 retry.go:31] will retry after 2.845213342s: waiting for machine to come up
	I1205 21:41:35.408016  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:35.408069  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:37.262251  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:41:37.262290  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:41:37.262311  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:37.319344  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:41:37.319389  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[-]poststarthook/start-apiextensions-controllers failed: reason withheld
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:41:37.606930  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:37.611927  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:41:37.611962  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:41:38.106614  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:38.111641  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:41:38.111677  357912 api_server.go:103] status: https://192.168.39.106:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:41:38.606218  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:41:38.613131  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 200:
	ok
	I1205 21:41:38.628002  357912 api_server.go:141] control plane version: v1.31.2
	I1205 21:41:38.628040  357912 api_server.go:131] duration metric: took 4.021961685s to wait for apiserver health ...
	I1205 21:41:38.628050  357912 cni.go:84] Creating CNI manager for ""
	I1205 21:41:38.628057  357912 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:38.630126  357912 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:41:38.631655  357912 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:41:38.645320  357912 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 21:41:38.668869  357912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:41:38.680453  357912 system_pods.go:59] 8 kube-system pods found
	I1205 21:41:38.680493  357912 system_pods.go:61] "coredns-7c65d6cfc9-mll8z" [fcea0826-1093-43ce-87d0-26fb19447609] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 21:41:38.680501  357912 system_pods.go:61] "etcd-default-k8s-diff-port-751353" [dbade41d-2f53-45d5-aeb6-9b7df2565ef6] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 21:41:38.680509  357912 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751353" [b94221d1-ab02-4406-9f34-156813ddfd4b] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 21:41:38.680516  357912 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751353" [70669ec2-cddf-4ec9-a3d6-ee7cab8cd75c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 21:41:38.680521  357912 system_pods.go:61] "kube-proxy-b4ws4" [d2620959-e3e4-4575-af26-243207a83495] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 21:41:38.680526  357912 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751353" [4a519b64-0ddd-425c-a8d6-de52e3508f80] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 21:41:38.680537  357912 system_pods.go:61] "metrics-server-6867b74b74-xb867" [6ac4cc31-ed56-44b9-9a83-76296436bc34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:41:38.680541  357912 system_pods.go:61] "storage-provisioner" [aabf9cc9-c416-4db2-97b0-23533dd76c28] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 21:41:38.680549  357912 system_pods.go:74] duration metric: took 11.655012ms to wait for pod list to return data ...
	I1205 21:41:38.680557  357912 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:41:38.685260  357912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:41:38.685290  357912 node_conditions.go:123] node cpu capacity is 2
	I1205 21:41:38.685302  357912 node_conditions.go:105] duration metric: took 4.740612ms to run NodePressure ...
	I1205 21:41:38.685335  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:38.997715  357912 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 21:41:39.003388  357912 kubeadm.go:739] kubelet initialised
	I1205 21:41:39.003422  357912 kubeadm.go:740] duration metric: took 5.675839ms waiting for restarted kubelet to initialise ...
	I1205 21:41:39.003435  357912 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:41:39.008779  357912 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.015438  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.015469  357912 pod_ready.go:82] duration metric: took 6.659336ms for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.015480  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.015487  357912 pod_ready.go:79] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.022944  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.022979  357912 pod_ready.go:82] duration metric: took 7.480121ms for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.022992  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.023000  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.030021  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.030060  357912 pod_ready.go:82] duration metric: took 7.051363ms for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.030077  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.030087  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.074051  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.074103  357912 pod_ready.go:82] duration metric: took 44.006019ms for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.074130  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.074142  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.472623  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-proxy-b4ws4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.472654  357912 pod_ready.go:82] duration metric: took 398.499259ms for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.472665  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-proxy-b4ws4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.472673  357912 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:39.873821  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.873863  357912 pod_ready.go:82] duration metric: took 401.179066ms for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:39.873887  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:39.873914  357912 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:40.272289  357912 pod_ready.go:98] node "default-k8s-diff-port-751353" hosting pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:40.272322  357912 pod_ready.go:82] duration metric: took 398.392874ms for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	E1205 21:41:40.272338  357912 pod_ready.go:67] WaitExtra: waitPodCondition: node "default-k8s-diff-port-751353" hosting pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:40.272349  357912 pod_ready.go:39] duration metric: took 1.268896186s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:41:40.272381  357912 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:41:40.284524  357912 ops.go:34] apiserver oom_adj: -16
	I1205 21:41:40.284549  357912 kubeadm.go:597] duration metric: took 9.052545962s to restartPrimaryControlPlane
	I1205 21:41:40.284576  357912 kubeadm.go:394] duration metric: took 9.09933298s to StartCluster
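The pod_ready checks above all bail out with "skipping!" because the node itself still reports Ready:"False" right after the control-plane restart; readiness is re-checked further down once the kubelet settles. Roughly the same check could be reproduced by hand against this profile (assuming the kubectl context name matches the minikube profile and KUBECONFIG points at the jenkins kubeconfig shown above):

	kubectl --context default-k8s-diff-port-751353 get nodes
	kubectl --context default-k8s-diff-port-751353 -n kube-system get pods -o wide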
	I1205 21:41:40.284597  357912 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:40.284680  357912 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:41:40.286372  357912 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:40.286676  357912 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.39.106 Port:8444 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:41:40.286766  357912 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:41:40.286905  357912 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-751353"
	I1205 21:41:40.286928  357912 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-751353"
	I1205 21:41:40.286933  357912 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-751353"
	I1205 21:41:40.286985  357912 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-751353"
	I1205 21:41:40.286986  357912 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-751353"
	I1205 21:41:40.287022  357912 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-751353"
	W1205 21:41:40.286939  357912 addons.go:243] addon storage-provisioner should already be in state true
	W1205 21:41:40.287039  357912 addons.go:243] addon metrics-server should already be in state true
	I1205 21:41:40.287110  357912 host.go:66] Checking if "default-k8s-diff-port-751353" exists ...
	I1205 21:41:40.286937  357912 config.go:182] Loaded profile config "default-k8s-diff-port-751353": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:41:40.287215  357912 host.go:66] Checking if "default-k8s-diff-port-751353" exists ...
	I1205 21:41:40.287507  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.287571  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.287640  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.287577  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.287688  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.287824  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.288418  357912 out.go:177] * Verifying Kubernetes components...
	I1205 21:41:40.289707  357912 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:40.304423  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45233
	I1205 21:41:40.304453  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34035
	I1205 21:41:40.304433  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38023
	I1205 21:41:40.304933  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.305518  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.305712  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.305741  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.306151  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.306169  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.306548  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.306829  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.307143  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.307153  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.307800  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.307824  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.308518  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.308565  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.308987  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.309564  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.309596  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.311352  357912 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-751353"
	W1205 21:41:40.311374  357912 addons.go:243] addon default-storageclass should already be in state true
	I1205 21:41:40.311408  357912 host.go:66] Checking if "default-k8s-diff-port-751353" exists ...
	I1205 21:41:40.311880  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.311929  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.325059  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36109
	I1205 21:41:40.325663  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.326356  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.326400  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.326752  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.326942  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.327767  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I1205 21:41:40.328173  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.328657  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.328678  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.328768  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:40.328984  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.329370  357912 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:40.329409  357912 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:40.329811  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40585
	I1205 21:41:40.330230  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.330631  357912 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 21:41:40.330708  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.330726  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.331052  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.331216  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.332202  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 21:41:40.332226  357912 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 21:41:40.332260  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:40.333642  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:40.335436  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.335614  357912 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:37.107579  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:37.108121  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:37.108153  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:37.108064  359172 retry.go:31] will retry after 2.969209087s: waiting for machine to come up
	I1205 21:41:40.079008  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:40.079546  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | unable to find current IP address of domain old-k8s-version-601806 in network mk-old-k8s-version-601806
	I1205 21:41:40.079631  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | I1205 21:41:40.079495  359172 retry.go:31] will retry after 4.062877726s: waiting for machine to come up
	I1205 21:41:40.335902  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:40.335936  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.336055  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:40.336244  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:40.336387  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:40.336516  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:40.337155  357912 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:41:40.337173  357912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 21:41:40.337195  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:40.339861  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.340258  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:40.340291  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.340556  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:40.340737  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:40.340888  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:40.341009  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:40.353260  357912 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42177
	I1205 21:41:40.353780  357912 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:40.354465  357912 main.go:141] libmachine: Using API Version  1
	I1205 21:41:40.354495  357912 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:40.354914  357912 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:40.355181  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetState
	I1205 21:41:40.357128  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .DriverName
	I1205 21:41:40.357445  357912 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 21:41:40.357466  357912 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 21:41:40.357487  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHHostname
	I1205 21:41:40.360926  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.361410  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9a:bc:70", ip: ""} in network mk-default-k8s-diff-port-751353: {Iface:virbr1 ExpiryTime:2024-12-05 22:41:16 +0000 UTC Type:0 Mac:52:54:00:9a:bc:70 Iaid: IPaddr:192.168.39.106 Prefix:24 Hostname:default-k8s-diff-port-751353 Clientid:01:52:54:00:9a:bc:70}
	I1205 21:41:40.361436  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | domain default-k8s-diff-port-751353 has defined IP address 192.168.39.106 and MAC address 52:54:00:9a:bc:70 in network mk-default-k8s-diff-port-751353
	I1205 21:41:40.361753  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHPort
	I1205 21:41:40.361968  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHKeyPath
	I1205 21:41:40.362143  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .GetSSHUsername
	I1205 21:41:40.362304  357912 sshutil.go:53] new ssh client: &{IP:192.168.39.106 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/default-k8s-diff-port-751353/id_rsa Username:docker}
	I1205 21:41:40.489718  357912 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:40.506486  357912 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-751353" to be "Ready" ...
	I1205 21:41:40.575280  357912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:41:40.594938  357912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 21:41:40.709917  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 21:41:40.709953  357912 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 21:41:40.766042  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 21:41:40.766076  357912 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 21:41:40.841338  357912 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:41:40.841371  357912 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 21:41:40.890122  357912 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:41:41.864084  357912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.269106426s)
	I1205 21:41:41.864153  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864168  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864080  357912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.288748728s)
	I1205 21:41:41.864273  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864294  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864544  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.864563  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.864592  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.864614  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.864614  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Closing plugin on server side
	I1205 21:41:41.864623  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864641  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864682  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.864714  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.864909  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.864929  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.865021  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) DBG | Closing plugin on server side
	I1205 21:41:41.865050  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.865073  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.873134  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.873158  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.873488  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.873517  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.896304  357912 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.006129117s)
	I1205 21:41:41.896383  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.896401  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.896726  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.896749  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.896760  357912 main.go:141] libmachine: Making call to close driver server
	I1205 21:41:41.896770  357912 main.go:141] libmachine: (default-k8s-diff-port-751353) Calling .Close
	I1205 21:41:41.897064  357912 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:41:41.897084  357912 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:41:41.897097  357912 addons.go:475] Verifying addon metrics-server=true in "default-k8s-diff-port-751353"
	I1205 21:41:41.899809  357912 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1205 21:41:40.409151  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:40.409197  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:41.901166  357912 addons.go:510] duration metric: took 1.61441521s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
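The addon step above copies the metrics-server, storage-provisioner and storageclass manifests into /etc/kubernetes/addons on the guest and applies them with the bundled kubectl over SSH. The same end state could be reached from the host with the minikube CLI (a sketch, not what the test actually runs):

	minikube -p default-k8s-diff-port-751353 addons enable metrics-server
	minikube -p default-k8s-diff-port-751353 addons enable storage-provisioner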
	I1205 21:41:42.512064  357912 node_ready.go:53] node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:45.011050  357912 node_ready.go:53] node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:44.147162  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.147843  358357 main.go:141] libmachine: (old-k8s-version-601806) Found IP for machine: 192.168.61.123
	I1205 21:41:44.147874  358357 main.go:141] libmachine: (old-k8s-version-601806) Reserving static IP address...
	I1205 21:41:44.147892  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has current primary IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.148399  358357 main.go:141] libmachine: (old-k8s-version-601806) Reserved static IP address: 192.168.61.123
	I1205 21:41:44.148443  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "old-k8s-version-601806", mac: "52:54:00:11:1e:c8", ip: "192.168.61.123"} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.148458  358357 main.go:141] libmachine: (old-k8s-version-601806) Waiting for SSH to be available...
	I1205 21:41:44.148487  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | skip adding static IP to network mk-old-k8s-version-601806 - found existing host DHCP lease matching {name: "old-k8s-version-601806", mac: "52:54:00:11:1e:c8", ip: "192.168.61.123"}
	I1205 21:41:44.148519  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | Getting to WaitForSSH function...
	I1205 21:41:44.151017  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.151372  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.151406  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.151544  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | Using SSH client type: external
	I1205 21:41:44.151575  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa (-rw-------)
	I1205 21:41:44.151611  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.123 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:41:44.151629  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | About to run SSH command:
	I1205 21:41:44.151656  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | exit 0
	I1205 21:41:44.282019  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | SSH cmd err, output: <nil>: 
	I1205 21:41:44.282419  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetConfigRaw
	I1205 21:41:44.283146  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:44.285924  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.286335  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.286365  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.286633  358357 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/config.json ...
	I1205 21:41:44.286844  358357 machine.go:93] provisionDockerMachine start ...
	I1205 21:41:44.286865  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:44.287119  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.289692  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.290060  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.290090  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.290192  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.290392  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.290567  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.290726  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.290904  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:44.291168  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:44.291183  358357 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:41:44.410444  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:41:44.410483  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:41:44.410769  358357 buildroot.go:166] provisioning hostname "old-k8s-version-601806"
	I1205 21:41:44.410800  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:41:44.410975  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.414019  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.414402  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.414437  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.414618  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.414822  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.415001  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.415139  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.415384  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:44.415620  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:44.415639  358357 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-601806 && echo "old-k8s-version-601806" | sudo tee /etc/hostname
	I1205 21:41:44.544783  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-601806
	
	I1205 21:41:44.544820  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.547980  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.548253  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.548284  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.548548  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.548806  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.549015  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.549199  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.549363  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:44.549596  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:44.549625  358357 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-601806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-601806/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-601806' | sudo tee -a /etc/hosts; 
				fi
			fi
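The hostname script above is idempotent: it only rewrites the 127.0.1.1 entry when the new hostname is not already present in /etc/hosts. A quick manual verification on the guest (hypothetical, not part of the test run) would be:

	minikube -p old-k8s-version-601806 ssh -- 'hostname && grep 127.0.1.1 /etc/hosts'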
	I1205 21:41:44.675051  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1205 21:41:44.675089  358357 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:41:44.675133  358357 buildroot.go:174] setting up certificates
	I1205 21:41:44.675147  358357 provision.go:84] configureAuth start
	I1205 21:41:44.675161  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetMachineName
	I1205 21:41:44.675484  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:44.678325  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.678651  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.678670  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.678845  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.681024  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.681380  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.681419  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.681555  358357 provision.go:143] copyHostCerts
	I1205 21:41:44.681614  358357 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:41:44.681635  358357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:41:44.681692  358357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:41:44.681807  358357 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:41:44.681818  358357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:41:44.681840  358357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:41:44.681895  358357 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:41:44.681923  358357 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:41:44.681950  358357 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:41:44.682008  358357 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-601806 san=[127.0.0.1 192.168.61.123 localhost minikube old-k8s-version-601806]
	I1205 21:41:44.920345  358357 provision.go:177] copyRemoteCerts
	I1205 21:41:44.920412  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:41:44.920445  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:44.923237  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.923573  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:44.923617  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:44.923858  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:44.924082  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:44.924266  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:44.924408  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.013123  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:41:45.037220  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1205 21:41:45.061460  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1205 21:41:45.086412  358357 provision.go:87] duration metric: took 411.247612ms to configureAuth
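configureAuth refreshes the host-side ca.pem/cert.pem/key.pem copies, generates a per-machine server certificate with the SANs listed above, and pushes the CA and server cert/key to /etc/docker on the guest. Verifying the pushed server certificate against the CA (standard openssl usage; paths are the ones shown in the scp lines above) would look like:

	sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	sudo openssl x509 -in /etc/docker/server.pem -noout -text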
	I1205 21:41:45.086449  358357 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:41:45.086670  358357 config.go:182] Loaded profile config "old-k8s-version-601806": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I1205 21:41:45.086772  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.089593  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.090011  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.090044  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.090279  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.090515  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.090695  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.090845  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.091119  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:45.091338  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:45.091355  358357 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:41:45.320779  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:41:45.320809  358357 machine.go:96] duration metric: took 1.033951427s to provisionDockerMachine
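The tee/restart command above drops a sysconfig fragment that cri-o picks up, marking the in-cluster service CIDR as an insecure registry. Based on the printf shown in the log, the resulting file should contain:

	# /etc/sysconfig/crio.minikube
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '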
	I1205 21:41:45.320822  358357 start.go:293] postStartSetup for "old-k8s-version-601806" (driver="kvm2")
	I1205 21:41:45.320833  358357 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:41:45.320864  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.321259  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:41:45.321295  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.324521  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.324898  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.324926  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.325061  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.325278  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.325449  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.325608  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.413576  358357 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:41:45.418099  358357 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:41:45.418129  358357 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:41:45.418192  358357 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:41:45.418313  358357 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:41:45.418436  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:41:45.428537  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:45.453505  358357 start.go:296] duration metric: took 132.665138ms for postStartSetup
	I1205 21:41:45.453578  358357 fix.go:56] duration metric: took 20.301569608s for fixHost
	I1205 21:41:45.453610  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.456671  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.457095  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.457119  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.457317  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.457534  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.457723  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.457851  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.458100  358357 main.go:141] libmachine: Using SSH client type: native
	I1205 21:41:45.458291  358357 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.61.123 22 <nil> <nil>}
	I1205 21:41:45.458303  358357 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:41:45.574874  357296 start.go:364] duration metric: took 55.701965725s to acquireMachinesLock for "embed-certs-425614"
	I1205 21:41:45.574934  357296 start.go:96] Skipping create...Using existing machine configuration
	I1205 21:41:45.574944  357296 fix.go:54] fixHost starting: 
	I1205 21:41:45.575470  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:41:45.575532  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:41:45.593184  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39281
	I1205 21:41:45.593628  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:41:45.594222  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:41:45.594249  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:41:45.594599  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:41:45.594797  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:41:45.594945  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:41:45.596532  357296 fix.go:112] recreateIfNeeded on embed-certs-425614: state=Stopped err=<nil>
	I1205 21:41:45.596560  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	W1205 21:41:45.596698  357296 fix.go:138] unexpected machine state, will restart: <nil>
	I1205 21:41:45.598630  357296 out.go:177] * Restarting existing kvm2 VM for "embed-certs-425614" ...
	I1205 21:41:45.574677  358357 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434905.556875765
	
	I1205 21:41:45.574707  358357 fix.go:216] guest clock: 1733434905.556875765
	I1205 21:41:45.574720  358357 fix.go:229] Guest: 2024-12-05 21:41:45.556875765 +0000 UTC Remote: 2024-12-05 21:41:45.453584649 +0000 UTC m=+209.931227837 (delta=103.291116ms)
	I1205 21:41:45.574744  358357 fix.go:200] guest clock delta is within tolerance: 103.291116ms
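The guest-clock check compares the host wall clock against date +%s.%N run over SSH on the guest and only forces a resync when the delta exceeds the tolerance; here the ~103ms delta passes. A rough manual equivalent (hypothetical helper, not part of minikube) would be:

	host_ts=$(date +%s.%N)
	guest_ts=$(minikube -p old-k8s-version-601806 ssh -- date +%s.%N | tr -d '\r')
	echo "clock delta: $(echo "$host_ts - $guest_ts" | bc) s"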
	I1205 21:41:45.574749  358357 start.go:83] releasing machines lock for "old-k8s-version-601806", held for 20.422787607s
	I1205 21:41:45.574777  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.575102  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:45.578097  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.578534  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.578565  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.578786  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.579457  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.579662  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .DriverName
	I1205 21:41:45.579786  358357 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:41:45.579845  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.579919  358357 ssh_runner.go:195] Run: cat /version.json
	I1205 21:41:45.579944  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHHostname
	I1205 21:41:45.582811  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.582951  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.583117  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.583153  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.583388  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:45.583409  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:45.583436  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.583601  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHPort
	I1205 21:41:45.583609  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.583801  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.583868  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHKeyPath
	I1205 21:41:45.583990  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.584026  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetSSHUsername
	I1205 21:41:45.584185  358357 sshutil.go:53] new ssh client: &{IP:192.168.61.123 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/old-k8s-version-601806/id_rsa Username:docker}
	I1205 21:41:45.667101  358357 ssh_runner.go:195] Run: systemctl --version
	I1205 21:41:45.694059  358357 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:41:45.843409  358357 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:41:45.849628  358357 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:41:45.849714  358357 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:41:45.867490  358357 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:41:45.867526  358357 start.go:495] detecting cgroup driver to use...
	I1205 21:41:45.867613  358357 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:41:45.887817  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:41:45.902760  358357 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:41:45.902837  358357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:41:45.921492  358357 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:41:45.938236  358357 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:41:46.094034  358357 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:41:46.313078  358357 docker.go:233] disabling docker service ...
	I1205 21:41:46.313159  358357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:41:46.330094  358357 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:41:46.348887  358357 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:41:46.539033  358357 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:41:46.664752  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:41:46.681892  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:41:46.703802  358357 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I1205 21:41:46.703907  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.716808  358357 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:41:46.716869  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.728088  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.739606  358357 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:41:46.750998  358357 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:41:46.763097  358357 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:41:46.773657  358357 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:41:46.773720  358357 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:41:46.787789  358357 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:41:46.799018  358357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:46.920247  358357 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1205 21:41:47.024151  358357 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:41:47.024236  358357 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:41:47.029240  358357 start.go:563] Will wait 60s for crictl version
	I1205 21:41:47.029326  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:47.033665  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:41:47.072480  358357 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:41:47.072588  358357 ssh_runner.go:195] Run: crio --version
	I1205 21:41:47.110829  358357 ssh_runner.go:195] Run: crio --version
	I1205 21:41:47.141698  358357 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I1205 21:41:45.600135  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Start
	I1205 21:41:45.600390  357296 main.go:141] libmachine: (embed-certs-425614) Ensuring networks are active...
	I1205 21:41:45.601186  357296 main.go:141] libmachine: (embed-certs-425614) Ensuring network default is active
	I1205 21:41:45.601636  357296 main.go:141] libmachine: (embed-certs-425614) Ensuring network mk-embed-certs-425614 is active
	I1205 21:41:45.602188  357296 main.go:141] libmachine: (embed-certs-425614) Getting domain xml...
	I1205 21:41:45.603057  357296 main.go:141] libmachine: (embed-certs-425614) Creating domain...
	I1205 21:41:47.045240  357296 main.go:141] libmachine: (embed-certs-425614) Waiting to get IP...
	I1205 21:41:47.046477  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.047047  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.047150  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.047040  359359 retry.go:31] will retry after 219.743522ms: waiting for machine to come up
	I1205 21:41:47.268762  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.269407  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.269442  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.269336  359359 retry.go:31] will retry after 242.318322ms: waiting for machine to come up
	I1205 21:41:45.410351  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:45.410420  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:45.616395  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": read tcp 192.168.50.1:48034->192.168.50.141:8443: read: connection reset by peer
	I1205 21:41:45.906800  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:45.907594  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": dial tcp 192.168.50.141:8443: connect: connection refused
	I1205 21:41:46.407096  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:47.011671  357912 node_ready.go:53] node "default-k8s-diff-port-751353" has status "Ready":"False"
	I1205 21:41:48.011005  357912 node_ready.go:49] node "default-k8s-diff-port-751353" has status "Ready":"True"
	I1205 21:41:48.011040  357912 node_ready.go:38] duration metric: took 7.504506203s for node "default-k8s-diff-port-751353" to be "Ready" ...
	I1205 21:41:48.011060  357912 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:41:48.021950  357912 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:48.038141  357912 pod_ready.go:93] pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:48.038176  357912 pod_ready.go:82] duration metric: took 16.187757ms for pod "coredns-7c65d6cfc9-mll8z" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:48.038191  357912 pod_ready.go:79] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:50.046001  357912 pod_ready.go:103] pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"False"
	I1205 21:41:47.143015  358357 main.go:141] libmachine: (old-k8s-version-601806) Calling .GetIP
	I1205 21:41:47.146059  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:47.146503  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:11:1e:c8", ip: ""} in network mk-old-k8s-version-601806: {Iface:virbr3 ExpiryTime:2024-12-05 22:41:36 +0000 UTC Type:0 Mac:52:54:00:11:1e:c8 Iaid: IPaddr:192.168.61.123 Prefix:24 Hostname:old-k8s-version-601806 Clientid:01:52:54:00:11:1e:c8}
	I1205 21:41:47.146536  358357 main.go:141] libmachine: (old-k8s-version-601806) DBG | domain old-k8s-version-601806 has defined IP address 192.168.61.123 and MAC address 52:54:00:11:1e:c8 in network mk-old-k8s-version-601806
	I1205 21:41:47.146811  358357 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1205 21:41:47.151654  358357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:47.164839  358357 kubeadm.go:883] updating cluster {Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:41:47.165019  358357 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 21:41:47.165090  358357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:47.213546  358357 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 21:41:47.213640  358357 ssh_runner.go:195] Run: which lz4
	I1205 21:41:47.219695  358357 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:41:47.224752  358357 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:41:47.224801  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I1205 21:41:48.787144  358357 crio.go:462] duration metric: took 1.567500675s to copy over tarball
	I1205 21:41:48.787253  358357 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 21:41:47.514192  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.514819  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.514860  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.514767  359359 retry.go:31] will retry after 467.274164ms: waiting for machine to come up
	I1205 21:41:47.983367  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:47.983985  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:47.984015  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:47.983919  359359 retry.go:31] will retry after 577.298405ms: waiting for machine to come up
	I1205 21:41:48.562668  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:48.563230  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:48.563278  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:48.563175  359359 retry.go:31] will retry after 707.838313ms: waiting for machine to come up
	I1205 21:41:49.273409  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:49.273943  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:49.273977  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:49.273863  359359 retry.go:31] will retry after 908.711328ms: waiting for machine to come up
	I1205 21:41:50.183875  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:50.184278  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:50.184310  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:50.184225  359359 retry.go:31] will retry after 941.803441ms: waiting for machine to come up
	I1205 21:41:51.127915  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:51.128486  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:51.128549  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:51.128467  359359 retry.go:31] will retry after 1.289932898s: waiting for machine to come up
	I1205 21:41:51.407970  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:51.408037  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:52.046717  357912 pod_ready.go:103] pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"False"
	I1205 21:41:54.367409  357912 pod_ready.go:93] pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.367441  357912 pod_ready.go:82] duration metric: took 6.32924141s for pod "etcd-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.367457  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.373495  357912 pod_ready.go:93] pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.373546  357912 pod_ready.go:82] duration metric: took 6.066723ms for pod "kube-apiserver-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.373565  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.380982  357912 pod_ready.go:93] pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.381010  357912 pod_ready.go:82] duration metric: took 7.434049ms for pod "kube-controller-manager-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.381024  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.387297  357912 pod_ready.go:93] pod "kube-proxy-b4ws4" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.387321  357912 pod_ready.go:82] duration metric: took 6.290388ms for pod "kube-proxy-b4ws4" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.387331  357912 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.392902  357912 pod_ready.go:93] pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace has status "Ready":"True"
	I1205 21:41:54.392931  357912 pod_ready.go:82] duration metric: took 5.593155ms for pod "kube-scheduler-default-k8s-diff-port-751353" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:54.392942  357912 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	I1205 21:41:51.832182  358357 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.044870872s)
	I1205 21:41:51.832229  358357 crio.go:469] duration metric: took 3.045045829s to extract the tarball
	I1205 21:41:51.832241  358357 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 21:41:51.876863  358357 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:41:51.916280  358357 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I1205 21:41:51.916312  358357 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1205 21:41:51.916448  358357 image.go:135] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:51.916490  358357 image.go:135] retrieving image: registry.k8s.io/coredns:1.7.0
	I1205 21:41:51.916520  358357 image.go:135] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:51.916416  358357 image.go:135] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:51.916539  358357 image.go:135] retrieving image: registry.k8s.io/pause:3.2
	I1205 21:41:51.916422  358357 image.go:135] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:51.916534  358357 image.go:135] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:51.916415  358357 image.go:135] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:51.918641  358357 image.go:178] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:51.918657  358357 image.go:178] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:51.918673  358357 image.go:178] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:51.918675  358357 image.go:178] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:51.918648  358357 image.go:178] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:51.918699  358357 image.go:178] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I1205 21:41:51.918648  358357 image.go:178] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I1205 21:41:51.918649  358357 image.go:178] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.084598  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.085487  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.085575  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.089387  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.097316  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.097466  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.143119  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I1205 21:41:52.188847  358357 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I1205 21:41:52.188903  358357 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.188964  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.249950  358357 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I1205 21:41:52.249988  358357 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I1205 21:41:52.250006  358357 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.250026  358357 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.250065  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.250070  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.250110  358357 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I1205 21:41:52.250142  358357 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.250181  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.264329  358357 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I1205 21:41:52.264458  358357 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.264384  358357 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I1205 21:41:52.264539  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.264575  358357 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.264634  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.276286  358357 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I1205 21:41:52.276339  358357 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I1205 21:41:52.276369  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.276378  358357 ssh_runner.go:195] Run: which crictl
	I1205 21:41:52.276383  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.276499  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.276544  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.277043  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.277127  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.383827  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.385512  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.385513  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:41:52.404747  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.413164  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.413203  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.413257  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.502227  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I1205 21:41:52.551456  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:41:52.551634  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I1205 21:41:52.551659  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I1205 21:41:52.596670  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I1205 21:41:52.596746  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I1205 21:41:52.596677  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I1205 21:41:52.649281  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I1205 21:41:52.726027  358357 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I1205 21:41:52.726093  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I1205 21:41:52.726149  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I1205 21:41:52.726173  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I1205 21:41:52.726266  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I1205 21:41:52.726300  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I1205 21:41:52.759125  358357 cache_images.go:289] Loading image from: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I1205 21:41:52.856925  358357 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:41:53.004246  358357 cache_images.go:92] duration metric: took 1.087915709s to LoadCachedImages
	W1205 21:41:53.004349  358357 out.go:270] X Unable to load cached images: LoadCachedImages: stat /home/jenkins/minikube-integration/20053-293485/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I1205 21:41:53.004364  358357 kubeadm.go:934] updating node { 192.168.61.123 8443 v1.20.0 crio true true} ...
	I1205 21:41:53.004516  358357 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-601806 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.123
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:41:53.004596  358357 ssh_runner.go:195] Run: crio config
	I1205 21:41:53.053135  358357 cni.go:84] Creating CNI manager for ""
	I1205 21:41:53.053159  358357 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:41:53.053174  358357 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:41:53.053208  358357 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.123 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-601806 NodeName:old-k8s-version-601806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.123"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.123 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I1205 21:41:53.053385  358357 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.123
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-601806"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.123
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.123"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:41:53.053465  358357 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I1205 21:41:53.064225  358357 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:41:53.064320  358357 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:41:53.074565  358357 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I1205 21:41:53.091812  358357 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:41:53.111455  358357 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I1205 21:41:53.131057  358357 ssh_runner.go:195] Run: grep 192.168.61.123	control-plane.minikube.internal$ /etc/hosts
	I1205 21:41:53.135026  358357 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.123	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:41:53.148476  358357 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:41:53.289114  358357 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:41:53.309855  358357 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806 for IP: 192.168.61.123
	I1205 21:41:53.309886  358357 certs.go:194] generating shared ca certs ...
	I1205 21:41:53.309923  358357 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:53.310122  358357 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:41:53.310176  358357 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:41:53.310202  358357 certs.go:256] generating profile certs ...
	I1205 21:41:53.310390  358357 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/client.key
	I1205 21:41:53.310485  358357 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.key.a6e43dea
	I1205 21:41:53.310568  358357 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.key
	I1205 21:41:53.310814  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:41:53.310866  358357 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:41:53.310880  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:41:53.310912  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:41:53.310960  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:41:53.311000  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:41:53.311072  358357 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:41:53.312161  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:41:53.353059  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:41:53.386512  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:41:53.423583  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:41:53.463250  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1205 21:41:53.494884  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1205 21:41:53.529876  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:41:53.579695  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/old-k8s-version-601806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 21:41:53.606144  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:41:53.631256  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:41:53.656184  358357 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:41:53.680842  358357 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:41:53.700705  358357 ssh_runner.go:195] Run: openssl version
	I1205 21:41:53.707800  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:41:53.719776  358357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:41:53.724558  358357 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:41:53.724630  358357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:41:53.731088  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:41:53.742620  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:41:53.754961  358357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:53.759594  358357 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:53.759669  358357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:41:53.765536  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:41:53.776756  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:41:53.789117  358357 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:41:53.793629  358357 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:41:53.793707  358357 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:41:53.799394  358357 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
	I1205 21:41:53.810660  358357 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:41:53.815344  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:41:53.821418  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:41:53.827800  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:41:53.834376  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:41:53.840645  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:41:53.847470  358357 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1205 21:41:53.854401  358357 kubeadm.go:392] StartCluster: {Name:old-k8s-version-601806 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-601806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.123 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:41:53.854504  358357 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:41:53.854569  358357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:53.893993  358357 cri.go:89] found id: ""
	I1205 21:41:53.894081  358357 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:41:53.904808  358357 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:41:53.904829  358357 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:41:53.904876  358357 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:41:53.915573  358357 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:41:53.916624  358357 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-601806" does not appear in /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:41:53.917310  358357 kubeconfig.go:62] /home/jenkins/minikube-integration/20053-293485/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-601806" cluster setting kubeconfig missing "old-k8s-version-601806" context setting]
	I1205 21:41:53.918211  358357 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:41:53.978448  358357 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:41:53.989629  358357 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.61.123
	I1205 21:41:53.989674  358357 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:41:53.989707  358357 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:41:53.989791  358357 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:41:54.027722  358357 cri.go:89] found id: ""
	I1205 21:41:54.027816  358357 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:41:54.045095  358357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:41:54.058119  358357 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:41:54.058145  358357 kubeadm.go:157] found existing configuration files:
	
	I1205 21:41:54.058211  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:41:54.070466  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:41:54.070563  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:41:54.081555  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:41:54.093332  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:41:54.093415  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:41:54.103877  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:41:54.114047  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:41:54.114117  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:41:54.126566  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:41:54.138673  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:41:54.138767  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:41:54.149449  358357 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:41:54.162818  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:54.294483  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:54.983905  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:55.218496  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:55.340478  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:41:55.440382  358357 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:41:55.440495  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:52.419705  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:52.420193  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:52.420230  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:52.420115  359359 retry.go:31] will retry after 1.684643705s: waiting for machine to come up
	I1205 21:41:54.106187  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:54.106714  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:54.106754  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:54.106660  359359 retry.go:31] will retry after 1.531754159s: waiting for machine to come up
	I1205 21:41:55.639991  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:55.640467  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:55.640503  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:55.640401  359359 retry.go:31] will retry after 2.722460669s: waiting for machine to come up
	I1205 21:41:56.409347  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:41:56.409397  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:41:56.399969  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:41:58.900439  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:41:55.941513  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:56.440634  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:56.941451  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:57.440602  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:57.940778  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:58.441396  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:58.941148  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:59.441320  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:41:59.941573  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:00.441005  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
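The half-second `pgrep` polling above is minikube waiting for the kube-apiserver static pod (written by the control-plane phase) to actually come up. The same check can be run by hand; a sketch assuming a single default minikube profile (add `-p <profile>` for a named one):

    # Prints the PID and full command line of the newest matching process;
    # exits non-zero while no kube-apiserver is running yet.
    minikube ssh -- sudo pgrep -xnf 'kube-apiserver.*minikube.*'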
	I1205 21:41:58.366356  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:41:58.366849  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:41:58.366874  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:41:58.366805  359359 retry.go:31] will retry after 2.312099452s: waiting for machine to come up
	I1205 21:42:00.680417  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:00.680953  357296 main.go:141] libmachine: (embed-certs-425614) DBG | unable to find current IP address of domain embed-certs-425614 in network mk-embed-certs-425614
	I1205 21:42:00.680977  357296 main.go:141] libmachine: (embed-certs-425614) DBG | I1205 21:42:00.680904  359359 retry.go:31] will retry after 3.145457312s: waiting for machine to come up
	I1205 21:42:01.410313  357831 api_server.go:269] stopped: https://192.168.50.141:8443/healthz: Get "https://192.168.50.141:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1205 21:42:01.410382  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.204308  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:03.204353  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:03.204374  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.246513  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:03.246569  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:03.406787  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.411529  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:03.411571  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:03.907108  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:03.911621  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:03.911669  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:04.407111  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:04.416185  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:04.416225  357831 api_server.go:103] status: https://192.168.50.141:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:04.906151  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:42:04.913432  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 200:
	ok
	I1205 21:42:04.923422  357831 api_server.go:141] control plane version: v1.31.2
	I1205 21:42:04.923466  357831 api_server.go:131] duration metric: took 40.017479306s to wait for apiserver health ...
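The 403 and 500 responses during the 40s wait above are expected while the apiserver settles: anonymous requests to /healthz are forbidden, and the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststart hooks report failed until their bootstrap objects exist. The same verbose breakdown can be fetched directly on the node; a sketch assuming the kubeadm-default admin kubeconfig at /etc/kubernetes/admin.conf:

    # One [+]/[-] line per check, matching the blocks in the log above.
    sudo kubectl --kubeconfig /etc/kubernetes/admin.conf get --raw '/healthz?verbose'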
	I1205 21:42:04.923479  357831 cni.go:84] Creating CNI manager for ""
	I1205 21:42:04.923488  357831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:42:04.925861  357831 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:42:01.399834  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:03.399888  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:00.941505  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:01.441014  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:01.940938  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:02.440702  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:02.940749  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:03.441519  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:03.941098  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:04.440754  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:04.941260  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:05.441179  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:03.830452  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.830997  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has current primary IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.831031  357296 main.go:141] libmachine: (embed-certs-425614) Found IP for machine: 192.168.72.8
	I1205 21:42:03.831046  357296 main.go:141] libmachine: (embed-certs-425614) Reserving static IP address...
	I1205 21:42:03.831505  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "embed-certs-425614", mac: "52:54:00:d8:bb:db", ip: "192.168.72.8"} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.831534  357296 main.go:141] libmachine: (embed-certs-425614) Reserved static IP address: 192.168.72.8
	I1205 21:42:03.831552  357296 main.go:141] libmachine: (embed-certs-425614) DBG | skip adding static IP to network mk-embed-certs-425614 - found existing host DHCP lease matching {name: "embed-certs-425614", mac: "52:54:00:d8:bb:db", ip: "192.168.72.8"}
	I1205 21:42:03.831566  357296 main.go:141] libmachine: (embed-certs-425614) Waiting for SSH to be available...
	I1205 21:42:03.831574  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Getting to WaitForSSH function...
	I1205 21:42:03.833969  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.834352  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.834388  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.834532  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Using SSH client type: external
	I1205 21:42:03.834550  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Using SSH private key: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa (-rw-------)
	I1205 21:42:03.834569  357296 main.go:141] libmachine: (embed-certs-425614) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.8 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa -p 22] /usr/bin/ssh <nil>}
	I1205 21:42:03.834587  357296 main.go:141] libmachine: (embed-certs-425614) DBG | About to run SSH command:
	I1205 21:42:03.834598  357296 main.go:141] libmachine: (embed-certs-425614) DBG | exit 0
	I1205 21:42:03.962943  357296 main.go:141] libmachine: (embed-certs-425614) DBG | SSH cmd err, output: <nil>: 
	I1205 21:42:03.963457  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetConfigRaw
	I1205 21:42:03.964327  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:03.967583  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.968035  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.968069  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.968471  357296 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/config.json ...
	I1205 21:42:03.968788  357296 machine.go:93] provisionDockerMachine start ...
	I1205 21:42:03.968820  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:03.969139  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:03.972165  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.972515  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:03.972545  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:03.972636  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:03.972845  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:03.973079  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:03.973321  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:03.973541  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:03.973743  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:03.973756  357296 main.go:141] libmachine: About to run SSH command:
	hostname
	I1205 21:42:04.086658  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1205 21:42:04.086701  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:42:04.087004  357296 buildroot.go:166] provisioning hostname "embed-certs-425614"
	I1205 21:42:04.087040  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:42:04.087297  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.090622  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.091119  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.091157  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.091374  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.091647  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.091854  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.092065  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.092302  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:04.092559  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:04.092590  357296 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-425614 && echo "embed-certs-425614" | sudo tee /etc/hostname
	I1205 21:42:04.222630  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-425614
	
	I1205 21:42:04.222668  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.225969  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.226469  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.226507  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.226742  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.226966  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.227230  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.227436  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.227672  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:04.227862  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:04.227878  357296 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-425614' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-425614/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-425614' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1205 21:42:04.351706  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
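The shell fragment above applies the common Debian-style convention of mapping the guest's hostname to 127.0.1.1 so local name lookups succeed without DNS. After it runs, /etc/hosts should contain an entry equivalent to the commented line below (shown for illustration, using the hostname from this log):

    # Expected entry added or rewritten by the script above:
    #   127.0.1.1 embed-certs-425614
    grep '127.0.1.1' /etc/hosts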
	I1205 21:42:04.351775  357296 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/20053-293485/.minikube CaCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20053-293485/.minikube}
	I1205 21:42:04.351853  357296 buildroot.go:174] setting up certificates
	I1205 21:42:04.351869  357296 provision.go:84] configureAuth start
	I1205 21:42:04.351894  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetMachineName
	I1205 21:42:04.352249  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:04.355753  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.356188  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.356232  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.356460  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.359365  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.359864  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.359911  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.360105  357296 provision.go:143] copyHostCerts
	I1205 21:42:04.360181  357296 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem, removing ...
	I1205 21:42:04.360209  357296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem
	I1205 21:42:04.360287  357296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/ca.pem (1082 bytes)
	I1205 21:42:04.360424  357296 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem, removing ...
	I1205 21:42:04.360437  357296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem
	I1205 21:42:04.360470  357296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/cert.pem (1123 bytes)
	I1205 21:42:04.360554  357296 exec_runner.go:144] found /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem, removing ...
	I1205 21:42:04.360564  357296 exec_runner.go:203] rm: /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem
	I1205 21:42:04.360592  357296 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20053-293485/.minikube/key.pem (1675 bytes)
	I1205 21:42:04.360668  357296 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem org=jenkins.embed-certs-425614 san=[127.0.0.1 192.168.72.8 embed-certs-425614 localhost minikube]
	I1205 21:42:04.632816  357296 provision.go:177] copyRemoteCerts
	I1205 21:42:04.632901  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1205 21:42:04.632942  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.636150  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.636618  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.636654  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.636828  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.637044  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.637271  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.637464  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:04.724883  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1205 21:42:04.754994  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1205 21:42:04.783996  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1205 21:42:04.810963  357296 provision.go:87] duration metric: took 459.073427ms to configureAuth
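configureAuth above refreshes the Docker-machine style TLS material: it re-copies ca.pem/cert.pem/key.pem into the minikube home, issues a server certificate whose SANs cover 127.0.0.1, the node IP, the machine name, localhost and minikube, and copies it to /etc/docker on the guest. One way to confirm what landed there, sketched with a plain openssl call (path taken from the scp lines above):

    # Show the SANs baked into the provisioned server certificate.
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'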
	I1205 21:42:04.811003  357296 buildroot.go:189] setting minikube options for container-runtime
	I1205 21:42:04.811279  357296 config.go:182] Loaded profile config "embed-certs-425614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:42:04.811384  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:04.814420  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.814863  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:04.814996  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:04.815102  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:04.815346  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.815586  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:04.815767  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:04.815972  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:04.816238  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:04.816287  357296 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1205 21:42:05.064456  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1205 21:42:05.064490  357296 machine.go:96] duration metric: took 1.095680989s to provisionDockerMachine
	I1205 21:42:05.064509  357296 start.go:293] postStartSetup for "embed-certs-425614" (driver="kvm2")
	I1205 21:42:05.064521  357296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1205 21:42:05.064560  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.064956  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1205 21:42:05.064997  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.068175  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.068618  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.068657  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.068994  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.069241  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.069449  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.069602  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:05.157732  357296 ssh_runner.go:195] Run: cat /etc/os-release
	I1205 21:42:05.162706  357296 info.go:137] Remote host: Buildroot 2023.02.9
	I1205 21:42:05.162752  357296 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/addons for local assets ...
	I1205 21:42:05.162845  357296 filesync.go:126] Scanning /home/jenkins/minikube-integration/20053-293485/.minikube/files for local assets ...
	I1205 21:42:05.162920  357296 filesync.go:149] local asset: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem -> 3007652.pem in /etc/ssl/certs
	I1205 21:42:05.163016  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1205 21:42:05.179784  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:42:05.207166  357296 start.go:296] duration metric: took 142.636794ms for postStartSetup
	I1205 21:42:05.207223  357296 fix.go:56] duration metric: took 19.632279138s for fixHost
	I1205 21:42:05.207253  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.210923  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.211426  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.211463  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.211657  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.211896  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.212114  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.212282  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.212467  357296 main.go:141] libmachine: Using SSH client type: native
	I1205 21:42:05.212723  357296 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x866ca0] 0x869980 <nil>  [] 0s} 192.168.72.8 22 <nil> <nil>}
	I1205 21:42:05.212739  357296 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I1205 21:42:05.327710  357296 main.go:141] libmachine: SSH cmd err, output: <nil>: 1733434925.280377877
	
	I1205 21:42:05.327737  357296 fix.go:216] guest clock: 1733434925.280377877
	I1205 21:42:05.327749  357296 fix.go:229] Guest: 2024-12-05 21:42:05.280377877 +0000 UTC Remote: 2024-12-05 21:42:05.207229035 +0000 UTC m=+357.921750384 (delta=73.148842ms)
	I1205 21:42:05.327795  357296 fix.go:200] guest clock delta is within tolerance: 73.148842ms
	I1205 21:42:05.327803  357296 start.go:83] releasing machines lock for "embed-certs-425614", held for 19.752893913s
	I1205 21:42:05.327826  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.328184  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:05.331359  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.331686  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.331722  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.331953  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.332650  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.332870  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:42:05.332999  357296 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1205 21:42:05.333104  357296 ssh_runner.go:195] Run: cat /version.json
	I1205 21:42:05.333112  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.333137  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:42:05.336283  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.336576  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.336749  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.336784  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.336987  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.337074  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:05.337123  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:05.337206  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.337228  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:42:05.337457  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:42:05.337475  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.337669  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:42:05.337668  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:05.337806  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:42:05.443865  357296 ssh_runner.go:195] Run: systemctl --version
	I1205 21:42:05.450866  357296 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1205 21:42:05.596799  357296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1205 21:42:05.603700  357296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1205 21:42:05.603781  357296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1205 21:42:05.619488  357296 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1205 21:42:05.619521  357296 start.go:495] detecting cgroup driver to use...
	I1205 21:42:05.619622  357296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1205 21:42:05.639018  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1205 21:42:05.655878  357296 docker.go:217] disabling cri-docker service (if available) ...
	I1205 21:42:05.655942  357296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1205 21:42:05.671883  357296 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1205 21:42:05.691645  357296 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1205 21:42:05.804200  357296 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1205 21:42:05.997573  357296 docker.go:233] disabling docker service ...
	I1205 21:42:05.997702  357296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1205 21:42:06.014153  357296 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1205 21:42:06.031828  357296 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1205 21:42:06.179266  357296 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1205 21:42:06.318806  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1205 21:42:06.332681  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1205 21:42:06.353528  357296 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1205 21:42:06.353615  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.365381  357296 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1205 21:42:06.365472  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.377020  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.389325  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.402399  357296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1205 21:42:06.414106  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.425792  357296 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.445787  357296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1205 21:42:06.457203  357296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1205 21:42:06.467275  357296 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1205 21:42:06.467356  357296 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1205 21:42:06.481056  357296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1205 21:42:06.492188  357296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:42:06.634433  357296 ssh_runner.go:195] Run: sudo systemctl restart crio
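The sed edits above leave the CRI-O drop-in /etc/crio/crio.conf.d/02-crio.conf with the pause image, cgroup driver, conmon cgroup and unprivileged-port sysctl that the rest of this run depends on. A quick way to verify them after the restart; the commented values are reconstructed from the sed commands above, not dumped from the real file:

    # Expected after the edits:
    #   pause_image = "registry.k8s.io/pause:3.10"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"
    #   default_sysctls = [ "net.ipv4.ip_unprivileged_port_start=0", ]
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf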
	I1205 21:42:06.727916  357296 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1205 21:42:06.728007  357296 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1205 21:42:06.732581  357296 start.go:563] Will wait 60s for crictl version
	I1205 21:42:06.732645  357296 ssh_runner.go:195] Run: which crictl
	I1205 21:42:06.736545  357296 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1205 21:42:06.775945  357296 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I1205 21:42:06.776069  357296 ssh_runner.go:195] Run: crio --version
	I1205 21:42:06.808556  357296 ssh_runner.go:195] Run: crio --version
	I1205 21:42:06.844968  357296 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.29.1 ...
	I1205 21:42:06.846380  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetIP
	I1205 21:42:06.849873  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:06.850366  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:42:06.850410  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:42:06.850664  357296 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1205 21:42:06.855593  357296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:42:06.869323  357296 kubeadm.go:883] updating cluster {Name:embed-certs-425614 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-425614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1205 21:42:06.869513  357296 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 21:42:06.869598  357296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:42:06.906593  357296 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.31.2". assuming images are not preloaded.
	I1205 21:42:06.906667  357296 ssh_runner.go:195] Run: which lz4
	I1205 21:42:06.910838  357296 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1205 21:42:06.915077  357296 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1205 21:42:06.915129  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (392059347 bytes)
	I1205 21:42:04.927426  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:42:04.941208  357831 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 21:42:04.968170  357831 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:42:04.998847  357831 system_pods.go:59] 8 kube-system pods found
	I1205 21:42:04.998907  357831 system_pods.go:61] "coredns-7c65d6cfc9-k89d7" [8a72b3cc-863a-4a51-8592-f090d7de58cb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 21:42:04.998920  357831 system_pods.go:61] "etcd-no-preload-500648" [cafdfe7b-d749-4f0b-9ce1-4045e0dba5e7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 21:42:04.998933  357831 system_pods.go:61] "kube-apiserver-no-preload-500648" [882b20c9-56f1-41e7-80a2-7781b05f021f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 21:42:04.998942  357831 system_pods.go:61] "kube-controller-manager-no-preload-500648" [d8746bd6-a884-4497-be4a-f88b4776cc19] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 21:42:04.998952  357831 system_pods.go:61] "kube-proxy-tbcmd" [ef507fa3-fe13-47b2-909e-15a4d0544716] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1205 21:42:04.998958  357831 system_pods.go:61] "kube-scheduler-no-preload-500648" [6713250e-00ac-48db-ad2f-39b1867c00f3] Running
	I1205 21:42:04.998968  357831 system_pods.go:61] "metrics-server-6867b74b74-7xm6l" [0d8a7353-2449-4143-962e-fc837e598f56] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:42:04.998979  357831 system_pods.go:61] "storage-provisioner" [a0d29dee-08f6-43f8-9d02-6bda96fe0c85] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1205 21:42:04.998988  357831 system_pods.go:74] duration metric: took 30.786075ms to wait for pod list to return data ...
	I1205 21:42:04.999002  357831 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:42:05.005560  357831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:42:05.005611  357831 node_conditions.go:123] node cpu capacity is 2
	I1205 21:42:05.005630  357831 node_conditions.go:105] duration metric: took 6.621222ms to run NodePressure ...
	I1205 21:42:05.005659  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:05.417060  357831 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 21:42:05.423873  357831 kubeadm.go:739] kubelet initialised
	I1205 21:42:05.423903  357831 kubeadm.go:740] duration metric: took 6.807257ms waiting for restarted kubelet to initialise ...
	I1205 21:42:05.423914  357831 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:42:05.429965  357831 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:07.440042  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:05.400253  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:07.401405  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:09.901336  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:05.941258  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:06.440780  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:06.940790  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:07.441097  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:07.941334  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:08.440670  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:08.941230  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:09.441317  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:09.941664  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:10.440620  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
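
The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` lines are a half-second poll for the apiserver process. A minimal Go sketch of such a loop (the timeout value is an assumption, and sudo is dropped):

// Sketch of a poll-until-present loop; not the api_server.go implementation.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 as soon as a matching process exists.
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
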
	I1205 21:42:08.325757  357296 crio.go:462] duration metric: took 1.41497545s to copy over tarball
	I1205 21:42:08.325937  357296 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1205 21:42:10.566636  357296 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.240649211s)
	I1205 21:42:10.566679  357296 crio.go:469] duration metric: took 2.240881092s to extract the tarball
	I1205 21:42:10.566690  357296 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1205 21:42:10.604048  357296 ssh_runner.go:195] Run: sudo crictl images --output json
	I1205 21:42:10.648218  357296 crio.go:514] all images are preloaded for cri-o runtime.
	I1205 21:42:10.648245  357296 cache_images.go:84] Images are preloaded, skipping loading
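
The extraction above is a single tar invocation with an lz4 filter, timed so the log can report a duration metric. A rough Go equivalent, assuming the tarball is already at /preloaded.tar.lz4 and running locally rather than through ssh_runner:

// Sketch: run the lz4-compressed tar extraction and report how long it took.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	cmd := exec.Command("sudo", "tar", "--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	fmt.Printf("extracted preload in %s\n", time.Since(start))
}
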
	I1205 21:42:10.648254  357296 kubeadm.go:934] updating node { 192.168.72.8 8443 v1.31.2 crio true true} ...
	I1205 21:42:10.648380  357296 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-425614 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.8
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:embed-certs-425614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1205 21:42:10.648472  357296 ssh_runner.go:195] Run: crio config
	I1205 21:42:10.694426  357296 cni.go:84] Creating CNI manager for ""
	I1205 21:42:10.694457  357296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:42:10.694470  357296 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1205 21:42:10.694494  357296 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.8 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-425614 NodeName:embed-certs-425614 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.8"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.8 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1205 21:42:10.694626  357296 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.8
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-425614"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.8"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.8"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1205 21:42:10.694700  357296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1205 21:42:10.707043  357296 binaries.go:44] Found k8s binaries, skipping transfer
	I1205 21:42:10.707116  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1205 21:42:10.717088  357296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I1205 21:42:10.735095  357296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1205 21:42:10.753994  357296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2292 bytes)
	I1205 21:42:10.771832  357296 ssh_runner.go:195] Run: grep 192.168.72.8	control-plane.minikube.internal$ /etc/hosts
	I1205 21:42:10.776949  357296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.8	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1205 21:42:10.789761  357296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:42:10.937235  357296 ssh_runner.go:195] Run: sudo systemctl start kubelet
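
The bash one-liner a few lines above rewrites /etc/hosts idempotently: strip any existing control-plane.minikube.internal entry, then append the current mapping. A small Go sketch of the same idea, run against a local copy of the file rather than over SSH:

// Sketch: remove the stale host mapping and re-append the current one,
// so repeated runs leave a single up-to-date entry.
package main

import (
	"fmt"
	"os"
	"strings"
)

func rewriteHosts(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop the old mapping, mirroring the `grep -v` half
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host) // mirroring the appended echo line
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	// "hosts.copy" is a hypothetical local copy used purely for illustration.
	if err := rewriteHosts("hosts.copy", "192.168.72.8", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
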
	I1205 21:42:10.959030  357296 certs.go:68] Setting up /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614 for IP: 192.168.72.8
	I1205 21:42:10.959073  357296 certs.go:194] generating shared ca certs ...
	I1205 21:42:10.959107  357296 certs.go:226] acquiring lock for ca certs: {Name:mk0a64c268277465530ca73f7813790aba1a67b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:42:10.959307  357296 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key
	I1205 21:42:10.959366  357296 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key
	I1205 21:42:10.959378  357296 certs.go:256] generating profile certs ...
	I1205 21:42:10.959508  357296 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/client.key
	I1205 21:42:10.959581  357296 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/apiserver.key.a8dcad40
	I1205 21:42:10.959631  357296 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/proxy-client.key
	I1205 21:42:10.959747  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem (1338 bytes)
	W1205 21:42:10.959807  357296 certs.go:480] ignoring /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765_empty.pem, impossibly tiny 0 bytes
	I1205 21:42:10.959822  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca-key.pem (1679 bytes)
	I1205 21:42:10.959855  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/ca.pem (1082 bytes)
	I1205 21:42:10.959889  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/cert.pem (1123 bytes)
	I1205 21:42:10.959924  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/certs/key.pem (1675 bytes)
	I1205 21:42:10.959977  357296 certs.go:484] found cert: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem (1708 bytes)
	I1205 21:42:10.960886  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1205 21:42:10.999249  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1205 21:42:11.035379  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1205 21:42:11.069796  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1205 21:42:11.103144  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1205 21:42:11.144531  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1205 21:42:11.183637  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1205 21:42:11.208780  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/embed-certs-425614/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1205 21:42:11.237378  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1205 21:42:11.262182  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/certs/300765.pem --> /usr/share/ca-certificates/300765.pem (1338 bytes)
	I1205 21:42:11.287003  357296 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/ssl/certs/3007652.pem --> /usr/share/ca-certificates/3007652.pem (1708 bytes)
	I1205 21:42:11.311375  357296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1205 21:42:11.330529  357296 ssh_runner.go:195] Run: openssl version
	I1205 21:42:11.336346  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3007652.pem && ln -fs /usr/share/ca-certificates/3007652.pem /etc/ssl/certs/3007652.pem"
	I1205 21:42:11.347306  357296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3007652.pem
	I1205 21:42:11.352107  357296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec  5 20:30 /usr/share/ca-certificates/3007652.pem
	I1205 21:42:11.352179  357296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3007652.pem
	I1205 21:42:11.357939  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3007652.pem /etc/ssl/certs/3ec20f2e.0"
	I1205 21:42:11.369013  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1205 21:42:11.380244  357296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:42:11.384671  357296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec  5 20:20 /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:42:11.384747  357296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1205 21:42:11.390330  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1205 21:42:11.402029  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/300765.pem && ln -fs /usr/share/ca-certificates/300765.pem /etc/ssl/certs/300765.pem"
	I1205 21:42:11.413047  357296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/300765.pem
	I1205 21:42:11.417617  357296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec  5 20:30 /usr/share/ca-certificates/300765.pem
	I1205 21:42:11.417707  357296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/300765.pem
	I1205 21:42:11.423562  357296 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/300765.pem /etc/ssl/certs/51391683.0"
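
The `openssl x509 -hash` / `ln -fs` pairs above install each CA under its OpenSSL subject-hash name (e.g. b5213941.0) in /etc/ssl/certs. A compact Go sketch of that step; the paths are taken from the log, the helper itself is illustrative:

// Sketch: compute the subject hash and symlink the cert under <hash>.0.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // -f behaviour: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}
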
	I1205 21:42:11.434978  357296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1205 21:42:11.439887  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1205 21:42:11.446653  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1205 21:42:11.453390  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1205 21:42:11.460104  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1205 21:42:11.466281  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1205 21:42:11.472205  357296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
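
The `-checkend 86400` probes above ask whether each control-plane certificate is still valid for the next 24 hours; exit status 0 means it is. A minimal Go version of one such probe, with an example path from the log:

// Sketch: true when the certificate does not expire within the next 86400s.
package main

import (
	"fmt"
	"os/exec"
)

func validFor24h(certPath string) bool {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400")
	return cmd.Run() == nil
}

func main() {
	fmt.Println(validFor24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}
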
	I1205 21:42:11.478395  357296 kubeadm.go:392] StartCluster: {Name:embed-certs-425614 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:embed-certs-425614 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 21:42:11.478534  357296 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1205 21:42:11.478604  357296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:42:11.519447  357296 cri.go:89] found id: ""
	I1205 21:42:11.519540  357296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1205 21:42:11.530882  357296 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I1205 21:42:11.530915  357296 kubeadm.go:593] restartPrimaryControlPlane start ...
	I1205 21:42:11.530967  357296 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1205 21:42:11.541349  357296 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:42:11.542457  357296 kubeconfig.go:125] found "embed-certs-425614" server: "https://192.168.72.8:8443"
	I1205 21:42:11.544588  357296 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1205 21:42:11.555107  357296 kubeadm.go:630] The running cluster does not require reconfiguration: 192.168.72.8
	I1205 21:42:11.555149  357296 kubeadm.go:1160] stopping kube-system containers ...
	I1205 21:42:11.555164  357296 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I1205 21:42:11.555214  357296 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1205 21:42:11.592787  357296 cri.go:89] found id: ""
	I1205 21:42:11.592880  357296 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1205 21:42:11.609965  357296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:42:11.623705  357296 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:42:11.623730  357296 kubeadm.go:157] found existing configuration files:
	
	I1205 21:42:11.623784  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:42:11.634267  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:42:11.634344  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:42:11.645579  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:42:11.655845  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:42:11.655932  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:42:11.667367  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:42:11.677450  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:42:11.677541  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:42:11.688484  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:42:11.698581  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:42:11.698665  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:42:11.709332  357296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:42:11.724079  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:11.850526  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:09.436733  357831 pod_ready.go:93] pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:09.436771  357831 pod_ready.go:82] duration metric: took 4.006772842s for pod "coredns-7c65d6cfc9-k89d7" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:09.436787  357831 pod_ready.go:79] waiting up to 4m0s for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:09.442948  357831 pod_ready.go:93] pod "etcd-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:09.442975  357831 pod_ready.go:82] duration metric: took 6.180027ms for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:09.442985  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:11.454117  357831 pod_ready.go:103] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:12.400229  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:14.401251  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:10.940676  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:11.441446  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:11.941429  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:12.441431  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:12.940947  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:13.441378  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:13.940664  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.441436  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.941528  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:15.441617  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:12.676884  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:13.049350  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:13.104083  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
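
On this restart path the tooling replays individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml instead of running a full `kubeadm init`. A sketch of that sequencing, not minikube's own code:

// Sketch: run the same init phases in order, stopping at the first failure.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, p := range phases {
		args := append(p, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("kubeadm", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Println("phase failed:", p, err)
			return
		}
	}
}
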
	I1205 21:42:13.151758  357296 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:42:13.151871  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:13.653003  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.152424  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:14.241811  357296 api_server.go:72] duration metric: took 1.09005484s to wait for apiserver process to appear ...
	I1205 21:42:14.241841  357296 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:42:14.241865  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:14.242492  357296 api_server.go:269] stopped: https://192.168.72.8:8443/healthz: Get "https://192.168.72.8:8443/healthz": dial tcp 192.168.72.8:8443: connect: connection refused
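
The healthz probes that follow poll https://192.168.72.8:8443/healthz until it returns 200: connection refused while the apiserver is still coming up, 403 while the anonymous probe is rejected before RBAC bootstrap completes, and 500 with the per-hook breakdown until every post-start hook passes. A standalone Go sketch of such a poller (TLS verification is skipped here purely for illustration):

// Sketch: retry /healthz on a short interval until it reports 200 OK.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 20; i++ {
		resp, err := client.Get("https://192.168.72.8:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connection refused during restart
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok")
				return
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
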
	I1205 21:42:14.742031  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:16.675226  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:16.675262  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:16.675277  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:16.689093  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1205 21:42:16.689130  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1205 21:42:16.742350  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:16.780046  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:16.780094  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:17.242752  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:17.248221  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:17.248293  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:13.807623  357831 pod_ready.go:103] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:13.955657  357831 pod_ready.go:93] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:13.955696  357831 pod_ready.go:82] duration metric: took 4.512701293s for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:13.955710  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:15.964035  357831 pod_ready.go:103] pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:17.464364  357831 pod_ready.go:93] pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:17.464400  357831 pod_ready.go:82] duration metric: took 3.508681036s for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.464416  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-tbcmd" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.471083  357831 pod_ready.go:93] pod "kube-proxy-tbcmd" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:17.471112  357831 pod_ready.go:82] duration metric: took 6.68764ms for pod "kube-proxy-tbcmd" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.471127  357831 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.477759  357831 pod_ready.go:93] pod "kube-scheduler-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:17.477792  357831 pod_ready.go:82] duration metric: took 6.655537ms for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:17.477805  357831 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace to be "Ready" ...
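
Each pod_ready wait above boils down to reading the pod's Ready condition. A small Go sketch that does the equivalent by shelling out to kubectl with a JSONPath query; the namespace and pod name come from the log, the helper itself is illustrative:

// Sketch: report whether a pod's Ready condition is currently "True".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func podReady(namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "-n", namespace, "get", "pod", name,
		"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) == "True", nil
}

func main() {
	ok, err := podReady("kube-system", "metrics-server-6867b74b74-7xm6l")
	fmt.Println(ok, err)
}
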
	I1205 21:42:17.742750  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:17.750907  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:17.750945  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:18.242675  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:18.247883  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:18.247913  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:18.742494  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:18.748060  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:18.748095  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:19.242753  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:19.247456  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:19.247493  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:19.742029  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:19.747799  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1205 21:42:19.747830  357296 api_server.go:103] status: https://192.168.72.8:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1205 21:42:20.242351  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:42:20.248627  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 200:
	ok
	I1205 21:42:20.257222  357296 api_server.go:141] control plane version: v1.31.2
	I1205 21:42:20.257260  357296 api_server.go:131] duration metric: took 6.015411765s to wait for apiserver health ...
	I1205 21:42:20.257273  357296 cni.go:84] Creating CNI manager for ""
	I1205 21:42:20.257281  357296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:42:20.259099  357296 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:42:16.899464  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:19.400536  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:15.940894  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:16.441373  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:16.940607  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:17.441640  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:17.941424  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:18.441485  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:18.941548  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:19.441297  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:19.940718  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:20.441175  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:20.260397  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:42:20.271889  357296 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 21:42:20.291125  357296 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:42:20.300276  357296 system_pods.go:59] 8 kube-system pods found
	I1205 21:42:20.300328  357296 system_pods.go:61] "coredns-7c65d6cfc9-kjcf8" [7a73d409-50b8-4e9c-a84d-bb497c6f068c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1205 21:42:20.300337  357296 system_pods.go:61] "etcd-embed-certs-425614" [39067a54-9f4e-4ce5-b48f-0d442a332902] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1205 21:42:20.300346  357296 system_pods.go:61] "kube-apiserver-embed-certs-425614" [cc3f918c-a257-4135-a5dd-af78e60bbf90] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1205 21:42:20.300352  357296 system_pods.go:61] "kube-controller-manager-embed-certs-425614" [bbcf99e6-54f9-44f5-a484-26997a4e5941] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1205 21:42:20.300359  357296 system_pods.go:61] "kube-proxy-jflgx" [77b6325b-0db8-41de-8c7e-6111d155704d] Running
	I1205 21:42:20.300366  357296 system_pods.go:61] "kube-scheduler-embed-certs-425614" [0615aea3-8e2c-4329-b89f-02c7fe9f6f7f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1205 21:42:20.300377  357296 system_pods.go:61] "metrics-server-6867b74b74-dggmv" [c53aecb9-98a5-481a-84f3-96fd18815e14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:42:20.300380  357296 system_pods.go:61] "storage-provisioner" [d43b05e9-7ab8-4326-93b4-177aeb5ba02e] Running
	I1205 21:42:20.300388  357296 system_pods.go:74] duration metric: took 9.233104ms to wait for pod list to return data ...
	I1205 21:42:20.300396  357296 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:42:20.304455  357296 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:42:20.304484  357296 node_conditions.go:123] node cpu capacity is 2
	I1205 21:42:20.304498  357296 node_conditions.go:105] duration metric: took 4.096074ms to run NodePressure ...
	I1205 21:42:20.304519  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1205 21:42:20.571968  357296 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I1205 21:42:20.577704  357296 kubeadm.go:739] kubelet initialised
	I1205 21:42:20.577730  357296 kubeadm.go:740] duration metric: took 5.727858ms waiting for restarted kubelet to initialise ...
	I1205 21:42:20.577741  357296 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:42:20.583872  357296 pod_ready.go:79] waiting up to 4m0s for pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.589835  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.589866  357296 pod_ready.go:82] duration metric: took 5.957984ms for pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.589878  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "coredns-7c65d6cfc9-kjcf8" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.589886  357296 pod_ready.go:79] waiting up to 4m0s for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.596004  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "etcd-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.596038  357296 pod_ready.go:82] duration metric: took 6.144722ms for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.596049  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "etcd-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.596056  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.601686  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.601720  357296 pod_ready.go:82] duration metric: took 5.653369ms for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.601734  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.601742  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:20.694482  357296 pod_ready.go:98] node "embed-certs-425614" hosting pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.694515  357296 pod_ready.go:82] duration metric: took 92.763219ms for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	E1205 21:42:20.694524  357296 pod_ready.go:67] WaitExtra: waitPodCondition: node "embed-certs-425614" hosting pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-425614" has status "Ready":"False"
	I1205 21:42:20.694531  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-jflgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:21.094672  357296 pod_ready.go:93] pod "kube-proxy-jflgx" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:21.094703  357296 pod_ready.go:82] duration metric: took 400.158324ms for pod "kube-proxy-jflgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:21.094714  357296 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:19.485441  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:21.984845  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:21.900464  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:24.399362  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:20.941042  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:21.440840  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:21.941291  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:22.441298  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:22.941140  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:23.441157  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:23.940711  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:24.441126  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:24.941194  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:25.441239  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:23.101967  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:25.103066  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:27.103106  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:23.985150  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:25.985406  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:26.399494  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:28.399742  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:25.940650  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:26.440892  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:26.940734  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:27.441439  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:27.941025  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:28.441662  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:28.941200  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:29.440850  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:29.941090  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:30.441496  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:29.106277  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.101137  357296 pod_ready.go:93] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:42:30.101170  357296 pod_ready.go:82] duration metric: took 9.00644797s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:30.101199  357296 pod_ready.go:79] waiting up to 4m0s for pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace to be "Ready" ...
	I1205 21:42:32.107886  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:27.985689  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.484153  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:32.484800  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.399854  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:32.400508  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:34.901319  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:30.941631  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:31.441522  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:31.940961  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:32.441547  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:32.940644  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:33.440711  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:33.941591  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:34.441457  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:34.941255  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:35.441478  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:34.108645  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:36.608124  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:34.984686  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:36.984823  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:37.400319  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:39.900110  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:35.941404  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:36.441453  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:36.941276  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:37.440624  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:37.941248  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:38.440773  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:38.940852  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:39.440975  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:39.940613  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:40.441409  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:38.608300  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:40.608878  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:39.483667  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:41.483884  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:41.900531  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:43.900867  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:40.941065  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:41.440940  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:41.941340  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:42.441333  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:42.941444  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:43.440657  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:43.941351  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:44.441039  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:44.941628  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:45.440942  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:43.107571  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:45.107803  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:47.108118  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:43.484581  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:45.485934  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:46.400053  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:48.902975  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:45.941474  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:46.441502  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:46.941071  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:47.441501  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:47.941353  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:48.441574  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:48.940650  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:49.441259  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:49.941249  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:50.441304  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:49.608563  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:52.108228  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:47.992612  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:50.484515  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:52.484930  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:51.399905  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:53.400794  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:50.941158  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:51.440651  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:51.941062  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:52.441434  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:52.940665  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:53.441387  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:53.940784  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:54.441549  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:54.941564  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:55.441202  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:42:55.441294  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:42:55.475973  358357 cri.go:89] found id: ""
	I1205 21:42:55.476011  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.476023  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:42:55.476032  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:42:55.476106  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:42:55.511119  358357 cri.go:89] found id: ""
	I1205 21:42:55.511149  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.511158  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:42:55.511164  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:42:55.511238  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:42:55.544659  358357 cri.go:89] found id: ""
	I1205 21:42:55.544700  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.544716  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:42:55.544726  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:42:55.544803  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:42:54.608219  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:57.107753  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:54.986439  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:57.484521  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:55.900101  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:58.399595  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:55.579789  358357 cri.go:89] found id: ""
	I1205 21:42:55.579826  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.579836  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:42:55.579843  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:42:55.579912  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:42:55.615309  358357 cri.go:89] found id: ""
	I1205 21:42:55.615348  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.615363  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:42:55.615371  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:42:55.615444  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:42:55.649520  358357 cri.go:89] found id: ""
	I1205 21:42:55.649551  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.649562  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:42:55.649569  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:42:55.649647  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:42:55.688086  358357 cri.go:89] found id: ""
	I1205 21:42:55.688120  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.688132  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:42:55.688139  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:42:55.688207  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:42:55.722901  358357 cri.go:89] found id: ""
	I1205 21:42:55.722932  358357 logs.go:282] 0 containers: []
	W1205 21:42:55.722943  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:42:55.722955  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:42:55.722968  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:42:55.775746  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:42:55.775792  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:42:55.790317  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:42:55.790370  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:42:55.916541  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:42:55.916593  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:42:55.916608  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:42:55.991284  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:42:55.991350  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:42:58.534040  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:42:58.551747  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:42:58.551856  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:42:58.602423  358357 cri.go:89] found id: ""
	I1205 21:42:58.602465  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.602478  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:42:58.602493  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:42:58.602570  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:42:58.658410  358357 cri.go:89] found id: ""
	I1205 21:42:58.658442  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.658454  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:42:58.658462  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:42:58.658544  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:42:58.696967  358357 cri.go:89] found id: ""
	I1205 21:42:58.697005  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.697024  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:42:58.697032  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:42:58.697092  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:42:58.740924  358357 cri.go:89] found id: ""
	I1205 21:42:58.740958  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.740969  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:42:58.740977  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:42:58.741049  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:42:58.775613  358357 cri.go:89] found id: ""
	I1205 21:42:58.775656  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.775669  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:42:58.775677  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:42:58.775753  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:42:58.810565  358357 cri.go:89] found id: ""
	I1205 21:42:58.810606  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.810621  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:42:58.810630  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:42:58.810704  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:42:58.844616  358357 cri.go:89] found id: ""
	I1205 21:42:58.844649  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.844658  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:42:58.844664  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:42:58.844720  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:42:58.889234  358357 cri.go:89] found id: ""
	I1205 21:42:58.889270  358357 logs.go:282] 0 containers: []
	W1205 21:42:58.889282  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:42:58.889297  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:42:58.889313  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:42:58.964712  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:42:58.964756  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:42:59.005004  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:42:59.005036  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:42:59.057585  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:42:59.057635  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:42:59.072115  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:42:59.072151  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:42:59.145425  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:42:59.108534  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:01.607610  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:42:59.485366  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:01.986049  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:00.400127  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:02.400257  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:04.899587  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:01.646046  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:01.659425  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:01.659517  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:01.695527  358357 cri.go:89] found id: ""
	I1205 21:43:01.695559  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.695568  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:01.695574  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:01.695636  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:01.731808  358357 cri.go:89] found id: ""
	I1205 21:43:01.731842  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.731854  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:01.731861  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:01.731937  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:01.765738  358357 cri.go:89] found id: ""
	I1205 21:43:01.765771  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.765789  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:01.765796  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:01.765859  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:01.801611  358357 cri.go:89] found id: ""
	I1205 21:43:01.801647  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.801657  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:01.801665  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:01.801732  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:01.839276  358357 cri.go:89] found id: ""
	I1205 21:43:01.839308  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.839317  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:01.839323  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:01.839385  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:01.875227  358357 cri.go:89] found id: ""
	I1205 21:43:01.875266  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.875279  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:01.875288  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:01.875350  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:01.913182  358357 cri.go:89] found id: ""
	I1205 21:43:01.913225  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.913238  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:01.913247  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:01.913312  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:01.952638  358357 cri.go:89] found id: ""
	I1205 21:43:01.952677  358357 logs.go:282] 0 containers: []
	W1205 21:43:01.952701  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:01.952716  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:01.952734  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:01.998360  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:01.998401  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:02.049534  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:02.049588  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:02.064358  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:02.064389  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:02.136029  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:02.136060  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:02.136077  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:04.719271  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:04.735387  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:04.735490  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:04.769540  358357 cri.go:89] found id: ""
	I1205 21:43:04.769578  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.769590  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:04.769598  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:04.769679  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:04.803402  358357 cri.go:89] found id: ""
	I1205 21:43:04.803444  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.803460  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:04.803470  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:04.803538  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:04.839694  358357 cri.go:89] found id: ""
	I1205 21:43:04.839725  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.839739  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:04.839748  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:04.839820  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:04.874952  358357 cri.go:89] found id: ""
	I1205 21:43:04.874982  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.875001  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:04.875022  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:04.875086  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:04.910338  358357 cri.go:89] found id: ""
	I1205 21:43:04.910378  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.910390  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:04.910399  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:04.910464  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:04.946196  358357 cri.go:89] found id: ""
	I1205 21:43:04.946233  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.946245  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:04.946252  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:04.946319  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:04.982119  358357 cri.go:89] found id: ""
	I1205 21:43:04.982150  358357 logs.go:282] 0 containers: []
	W1205 21:43:04.982164  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:04.982173  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:04.982245  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:05.018296  358357 cri.go:89] found id: ""
	I1205 21:43:05.018334  358357 logs.go:282] 0 containers: []
	W1205 21:43:05.018346  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:05.018359  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:05.018376  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:05.070674  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:05.070729  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:05.085822  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:05.085858  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:05.163359  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:05.163385  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:05.163400  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:05.243524  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:05.243581  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:03.608201  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:06.108243  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:03.992084  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:06.487041  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:06.900400  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:09.400212  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:07.785152  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:07.799248  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:07.799327  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:07.836150  358357 cri.go:89] found id: ""
	I1205 21:43:07.836204  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.836215  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:07.836222  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:07.836287  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:07.873025  358357 cri.go:89] found id: ""
	I1205 21:43:07.873059  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.873068  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:07.873074  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:07.873133  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:07.913228  358357 cri.go:89] found id: ""
	I1205 21:43:07.913257  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.913266  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:07.913272  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:07.913332  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:07.953284  358357 cri.go:89] found id: ""
	I1205 21:43:07.953316  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.953327  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:07.953337  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:07.953405  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:07.990261  358357 cri.go:89] found id: ""
	I1205 21:43:07.990295  358357 logs.go:282] 0 containers: []
	W1205 21:43:07.990308  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:07.990317  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:07.990414  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:08.032002  358357 cri.go:89] found id: ""
	I1205 21:43:08.032029  358357 logs.go:282] 0 containers: []
	W1205 21:43:08.032037  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:08.032043  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:08.032095  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:08.066422  358357 cri.go:89] found id: ""
	I1205 21:43:08.066456  358357 logs.go:282] 0 containers: []
	W1205 21:43:08.066464  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:08.066471  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:08.066526  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:08.103696  358357 cri.go:89] found id: ""
	I1205 21:43:08.103732  358357 logs.go:282] 0 containers: []
	W1205 21:43:08.103745  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:08.103757  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:08.103793  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:08.157218  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:08.157264  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:08.172145  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:08.172191  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:08.247452  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:08.247479  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:08.247493  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:08.326928  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:08.326972  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:08.111002  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:10.608479  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:08.985124  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:10.985701  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:11.400591  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:13.898978  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:10.866350  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:10.880013  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:10.880084  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:10.914657  358357 cri.go:89] found id: ""
	I1205 21:43:10.914698  358357 logs.go:282] 0 containers: []
	W1205 21:43:10.914712  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:10.914721  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:10.914780  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:10.950154  358357 cri.go:89] found id: ""
	I1205 21:43:10.950187  358357 logs.go:282] 0 containers: []
	W1205 21:43:10.950196  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:10.950203  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:10.950267  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:10.985474  358357 cri.go:89] found id: ""
	I1205 21:43:10.985508  358357 logs.go:282] 0 containers: []
	W1205 21:43:10.985520  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:10.985528  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:10.985602  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:11.021324  358357 cri.go:89] found id: ""
	I1205 21:43:11.021352  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.021361  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:11.021367  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:11.021429  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:11.056112  358357 cri.go:89] found id: ""
	I1205 21:43:11.056140  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.056149  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:11.056155  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:11.056210  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:11.090696  358357 cri.go:89] found id: ""
	I1205 21:43:11.090729  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.090739  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:11.090746  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:11.090809  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:11.126706  358357 cri.go:89] found id: ""
	I1205 21:43:11.126741  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.126754  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:11.126762  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:11.126832  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:11.162759  358357 cri.go:89] found id: ""
	I1205 21:43:11.162790  358357 logs.go:282] 0 containers: []
	W1205 21:43:11.162800  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:11.162812  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:11.162827  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:11.215941  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:11.215995  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:11.229338  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:11.229378  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:11.300339  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:11.300373  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:11.300389  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:11.378797  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:11.378852  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:13.919092  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:13.935332  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:13.935418  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:13.970759  358357 cri.go:89] found id: ""
	I1205 21:43:13.970790  358357 logs.go:282] 0 containers: []
	W1205 21:43:13.970802  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:13.970810  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:13.970879  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:14.017105  358357 cri.go:89] found id: ""
	I1205 21:43:14.017140  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.017152  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:14.017159  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:14.017228  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:14.056797  358357 cri.go:89] found id: ""
	I1205 21:43:14.056831  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.056843  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:14.056850  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:14.056922  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:14.090687  358357 cri.go:89] found id: ""
	I1205 21:43:14.090727  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.090740  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:14.090747  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:14.090808  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:14.128280  358357 cri.go:89] found id: ""
	I1205 21:43:14.128320  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.128333  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:14.128341  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:14.128410  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:14.167386  358357 cri.go:89] found id: ""
	I1205 21:43:14.167420  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.167428  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:14.167435  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:14.167498  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:14.203376  358357 cri.go:89] found id: ""
	I1205 21:43:14.203408  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.203419  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:14.203427  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:14.203495  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:14.238271  358357 cri.go:89] found id: ""
	I1205 21:43:14.238308  358357 logs.go:282] 0 containers: []
	W1205 21:43:14.238319  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:14.238333  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:14.238353  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:14.290565  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:14.290609  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:14.305062  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:14.305106  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:14.375343  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:14.375375  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:14.375392  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:14.456771  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:14.456826  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:13.107746  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:15.607571  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:13.484545  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:15.485414  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:15.899518  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:17.900034  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:16.997441  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:17.011258  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:17.011344  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:17.045557  358357 cri.go:89] found id: ""
	I1205 21:43:17.045599  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.045613  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:17.045623  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:17.045689  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:17.080094  358357 cri.go:89] found id: ""
	I1205 21:43:17.080131  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.080144  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:17.080152  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:17.080228  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:17.113336  358357 cri.go:89] found id: ""
	I1205 21:43:17.113375  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.113387  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:17.113396  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:17.113461  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:17.147392  358357 cri.go:89] found id: ""
	I1205 21:43:17.147431  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.147443  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:17.147452  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:17.147521  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:17.182308  358357 cri.go:89] found id: ""
	I1205 21:43:17.182359  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.182370  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:17.182376  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:17.182443  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:17.216848  358357 cri.go:89] found id: ""
	I1205 21:43:17.216886  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.216917  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:17.216926  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:17.216999  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:17.251515  358357 cri.go:89] found id: ""
	I1205 21:43:17.251553  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.251565  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:17.251573  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:17.251645  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:17.284664  358357 cri.go:89] found id: ""
	I1205 21:43:17.284691  358357 logs.go:282] 0 containers: []
	W1205 21:43:17.284700  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:17.284711  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:17.284723  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:17.335642  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:17.335685  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:17.349100  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:17.349133  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:17.427338  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:17.427362  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:17.427378  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:17.507314  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:17.507366  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:20.049650  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:20.063058  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:20.063152  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:20.096637  358357 cri.go:89] found id: ""
	I1205 21:43:20.096674  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.096687  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:20.096696  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:20.096761  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:20.134010  358357 cri.go:89] found id: ""
	I1205 21:43:20.134041  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.134054  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:20.134061  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:20.134128  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:20.173232  358357 cri.go:89] found id: ""
	I1205 21:43:20.173272  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.173292  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:20.173301  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:20.173374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:20.208411  358357 cri.go:89] found id: ""
	I1205 21:43:20.208441  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.208451  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:20.208457  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:20.208515  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:20.244682  358357 cri.go:89] found id: ""
	I1205 21:43:20.244715  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.244729  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:20.244737  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:20.244835  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:20.278659  358357 cri.go:89] found id: ""
	I1205 21:43:20.278692  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.278701  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:20.278708  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:20.278773  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:20.313894  358357 cri.go:89] found id: ""
	I1205 21:43:20.313963  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.313978  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:20.313986  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:20.314049  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:20.351924  358357 cri.go:89] found id: ""
	I1205 21:43:20.351957  358357 logs.go:282] 0 containers: []
	W1205 21:43:20.351966  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:20.351976  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:20.351992  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:20.365712  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:20.365752  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:20.448062  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:20.448096  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:20.448115  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:20.530550  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:20.530593  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:17.611740  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:20.107637  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:22.108801  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:17.985246  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:19.985378  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:22.484721  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:20.400560  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:22.400956  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:24.899642  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:20.573612  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:20.573644  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:23.128630  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:23.141915  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:23.141991  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:23.177986  358357 cri.go:89] found id: ""
	I1205 21:43:23.178024  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.178033  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:23.178040  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:23.178104  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:23.211957  358357 cri.go:89] found id: ""
	I1205 21:43:23.211995  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.212005  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:23.212016  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:23.212075  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:23.247747  358357 cri.go:89] found id: ""
	I1205 21:43:23.247775  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.247783  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:23.247789  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:23.247847  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:23.282556  358357 cri.go:89] found id: ""
	I1205 21:43:23.282602  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.282616  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:23.282624  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:23.282689  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:23.317629  358357 cri.go:89] found id: ""
	I1205 21:43:23.317661  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.317670  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:23.317676  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:23.317749  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:23.352085  358357 cri.go:89] found id: ""
	I1205 21:43:23.352114  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.352123  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:23.352130  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:23.352190  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:23.391452  358357 cri.go:89] found id: ""
	I1205 21:43:23.391483  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.391495  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:23.391503  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:23.391587  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:23.427325  358357 cri.go:89] found id: ""
	I1205 21:43:23.427361  358357 logs.go:282] 0 containers: []
	W1205 21:43:23.427370  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:23.427380  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:23.427395  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:23.502923  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:23.502954  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:23.502970  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:23.588869  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:23.588918  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:23.626986  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:23.627029  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:23.677290  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:23.677343  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:24.607867  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.609049  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:24.484755  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.486039  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.899834  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:29.400266  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:26.191893  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:26.206289  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:26.206376  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:26.244696  358357 cri.go:89] found id: ""
	I1205 21:43:26.244726  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.244739  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:26.244748  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:26.244818  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:26.277481  358357 cri.go:89] found id: ""
	I1205 21:43:26.277509  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.277519  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:26.277526  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:26.277602  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:26.312648  358357 cri.go:89] found id: ""
	I1205 21:43:26.312771  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.312807  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:26.312819  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:26.312897  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:26.348986  358357 cri.go:89] found id: ""
	I1205 21:43:26.349017  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.349026  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:26.349034  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:26.349111  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:26.382552  358357 cri.go:89] found id: ""
	I1205 21:43:26.382582  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.382591  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:26.382597  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:26.382667  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:26.419741  358357 cri.go:89] found id: ""
	I1205 21:43:26.419780  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.419791  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:26.419798  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:26.419860  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:26.458604  358357 cri.go:89] found id: ""
	I1205 21:43:26.458639  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.458649  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:26.458656  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:26.458716  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:26.492547  358357 cri.go:89] found id: ""
	I1205 21:43:26.492575  358357 logs.go:282] 0 containers: []
	W1205 21:43:26.492589  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:26.492600  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:26.492614  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:26.543734  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:26.543784  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:26.557495  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:26.557529  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:26.632104  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:26.632135  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:26.632155  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:26.711876  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:26.711929  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:29.251703  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:29.265023  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:29.265108  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:29.301837  358357 cri.go:89] found id: ""
	I1205 21:43:29.301875  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.301910  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:29.301922  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:29.301994  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:29.335968  358357 cri.go:89] found id: ""
	I1205 21:43:29.336001  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.336015  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:29.336024  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:29.336090  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:29.370471  358357 cri.go:89] found id: ""
	I1205 21:43:29.370500  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.370512  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:29.370521  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:29.370585  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:29.406408  358357 cri.go:89] found id: ""
	I1205 21:43:29.406443  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.406456  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:29.406464  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:29.406537  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:29.442657  358357 cri.go:89] found id: ""
	I1205 21:43:29.442689  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.442700  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:29.442708  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:29.442776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:29.485257  358357 cri.go:89] found id: ""
	I1205 21:43:29.485291  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.485302  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:29.485311  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:29.485374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:29.520186  358357 cri.go:89] found id: ""
	I1205 21:43:29.520218  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.520229  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:29.520238  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:29.520312  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:29.555875  358357 cri.go:89] found id: ""
	I1205 21:43:29.555908  358357 logs.go:282] 0 containers: []
	W1205 21:43:29.555920  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:29.555931  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:29.555949  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:29.569277  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:29.569312  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:29.643777  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:29.643810  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:29.643828  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:29.721856  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:29.721932  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:29.763402  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:29.763437  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:29.108987  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:31.608186  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:28.486609  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:30.985559  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:31.899471  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:34.399084  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:32.316122  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:32.329958  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:32.330122  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:32.362518  358357 cri.go:89] found id: ""
	I1205 21:43:32.362562  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.362575  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:32.362585  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:32.362655  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:32.396558  358357 cri.go:89] found id: ""
	I1205 21:43:32.396650  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.396668  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:32.396683  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:32.396759  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:32.430931  358357 cri.go:89] found id: ""
	I1205 21:43:32.430958  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.430966  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:32.430972  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:32.431025  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:32.468557  358357 cri.go:89] found id: ""
	I1205 21:43:32.468597  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.468607  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:32.468613  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:32.468698  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:32.503548  358357 cri.go:89] found id: ""
	I1205 21:43:32.503586  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.503599  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:32.503608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:32.503680  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:32.538516  358357 cri.go:89] found id: ""
	I1205 21:43:32.538559  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.538573  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:32.538582  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:32.538658  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:32.570768  358357 cri.go:89] found id: ""
	I1205 21:43:32.570804  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.570817  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:32.570886  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:32.570963  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:32.604812  358357 cri.go:89] found id: ""
	I1205 21:43:32.604851  358357 logs.go:282] 0 containers: []
	W1205 21:43:32.604864  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:32.604876  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:32.604899  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:32.667787  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:32.667831  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:32.681437  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:32.681472  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:32.761208  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:32.761235  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:32.761249  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:32.844838  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:32.844882  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:35.386488  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:35.401884  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:35.401987  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:35.437976  358357 cri.go:89] found id: ""
	I1205 21:43:35.438007  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.438017  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:35.438023  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:35.438089  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:35.478157  358357 cri.go:89] found id: ""
	I1205 21:43:35.478202  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.478214  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:35.478222  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:35.478292  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:35.516671  358357 cri.go:89] found id: ""
	I1205 21:43:35.516717  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.516731  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:35.516805  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:35.516897  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:35.551255  358357 cri.go:89] found id: ""
	I1205 21:43:35.551284  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.551295  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:35.551302  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:35.551357  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:34.108153  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:36.108668  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:32.986075  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:35.484135  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:37.485074  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:36.399714  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:38.900550  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:35.588294  358357 cri.go:89] found id: ""
	I1205 21:43:35.588325  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.588334  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:35.588341  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:35.588405  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:35.622659  358357 cri.go:89] found id: ""
	I1205 21:43:35.622691  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.622700  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:35.622707  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:35.622774  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:35.656864  358357 cri.go:89] found id: ""
	I1205 21:43:35.656893  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.656901  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:35.656908  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:35.656961  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:35.697507  358357 cri.go:89] found id: ""
	I1205 21:43:35.697554  358357 logs.go:282] 0 containers: []
	W1205 21:43:35.697567  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:35.697579  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:35.697599  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:35.745717  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:35.745758  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:35.759004  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:35.759036  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:35.828958  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:35.828992  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:35.829010  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:35.905023  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:35.905063  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:38.445492  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:38.459922  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:38.460006  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:38.495791  358357 cri.go:89] found id: ""
	I1205 21:43:38.495829  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.495840  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:38.495849  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:38.495918  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:38.530056  358357 cri.go:89] found id: ""
	I1205 21:43:38.530088  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.530097  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:38.530104  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:38.530177  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:38.566865  358357 cri.go:89] found id: ""
	I1205 21:43:38.566896  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.566905  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:38.566912  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:38.566983  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:38.600870  358357 cri.go:89] found id: ""
	I1205 21:43:38.600905  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.600918  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:38.600926  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:38.600995  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:38.639270  358357 cri.go:89] found id: ""
	I1205 21:43:38.639308  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.639317  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:38.639324  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:38.639395  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:38.678671  358357 cri.go:89] found id: ""
	I1205 21:43:38.678720  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.678736  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:38.678745  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:38.678812  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:38.715126  358357 cri.go:89] found id: ""
	I1205 21:43:38.715160  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.715169  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:38.715176  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:38.715236  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:38.750621  358357 cri.go:89] found id: ""
	I1205 21:43:38.750660  358357 logs.go:282] 0 containers: []
	W1205 21:43:38.750674  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:38.750688  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:38.750706  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:38.801336  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:38.801386  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:38.817206  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:38.817243  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:38.899496  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:38.899526  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:38.899542  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:38.987043  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:38.987096  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:38.608744  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.107606  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:39.486171  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.984199  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.400104  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:43.898622  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:41.535073  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:41.550469  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:41.550543  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:41.591727  358357 cri.go:89] found id: ""
	I1205 21:43:41.591768  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.591781  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:41.591790  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:41.591861  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:41.628657  358357 cri.go:89] found id: ""
	I1205 21:43:41.628691  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.628703  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:41.628711  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:41.628782  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:41.674165  358357 cri.go:89] found id: ""
	I1205 21:43:41.674210  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.674224  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:41.674238  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:41.674318  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:41.713785  358357 cri.go:89] found id: ""
	I1205 21:43:41.713836  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.713856  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:41.713866  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:41.713959  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:41.752119  358357 cri.go:89] found id: ""
	I1205 21:43:41.752152  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.752162  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:41.752169  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:41.752224  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:41.787379  358357 cri.go:89] found id: ""
	I1205 21:43:41.787414  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.787427  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:41.787439  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:41.787517  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:41.827473  358357 cri.go:89] found id: ""
	I1205 21:43:41.827505  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.827516  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:41.827523  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:41.827580  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:41.864685  358357 cri.go:89] found id: ""
	I1205 21:43:41.864724  358357 logs.go:282] 0 containers: []
	W1205 21:43:41.864737  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:41.864750  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:41.864767  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:41.919751  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:41.919797  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:41.933494  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:41.933527  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:42.007384  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:42.007478  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:42.007516  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:42.085929  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:42.085974  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:44.625416  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:44.640399  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:44.640466  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:44.676232  358357 cri.go:89] found id: ""
	I1205 21:43:44.676279  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.676292  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:44.676302  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:44.676386  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:44.714304  358357 cri.go:89] found id: ""
	I1205 21:43:44.714345  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.714358  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:44.714368  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:44.714438  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:44.748091  358357 cri.go:89] found id: ""
	I1205 21:43:44.748130  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.748141  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:44.748149  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:44.748225  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:44.789620  358357 cri.go:89] found id: ""
	I1205 21:43:44.789712  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.789737  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:44.789746  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:44.789808  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:44.829941  358357 cri.go:89] found id: ""
	I1205 21:43:44.829987  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.829999  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:44.830008  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:44.830080  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:44.876378  358357 cri.go:89] found id: ""
	I1205 21:43:44.876412  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.876424  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:44.876433  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:44.876503  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:44.913556  358357 cri.go:89] found id: ""
	I1205 21:43:44.913590  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.913602  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:44.913610  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:44.913676  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:44.947592  358357 cri.go:89] found id: ""
	I1205 21:43:44.947625  358357 logs.go:282] 0 containers: []
	W1205 21:43:44.947634  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:44.947643  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:44.947658  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:44.960447  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:44.960478  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:45.035679  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:45.035716  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:45.035731  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:45.115015  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:45.115055  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:45.152866  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:45.152901  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:43.108800  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:45.109600  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:44.483302  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:46.484569  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:45.899283  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:47.900475  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:47.703949  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:47.717705  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:47.717775  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:47.753877  358357 cri.go:89] found id: ""
	I1205 21:43:47.753920  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.753933  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:47.753946  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:47.754006  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:47.790673  358357 cri.go:89] found id: ""
	I1205 21:43:47.790707  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.790718  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:47.790725  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:47.790784  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:47.829957  358357 cri.go:89] found id: ""
	I1205 21:43:47.829999  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.830013  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:47.830021  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:47.830094  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:47.869182  358357 cri.go:89] found id: ""
	I1205 21:43:47.869221  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.869235  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:47.869251  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:47.869337  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:47.906549  358357 cri.go:89] found id: ""
	I1205 21:43:47.906582  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.906592  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:47.906598  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:47.906674  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:47.944594  358357 cri.go:89] found id: ""
	I1205 21:43:47.944622  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.944631  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:47.944637  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:47.944699  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:47.981461  358357 cri.go:89] found id: ""
	I1205 21:43:47.981499  358357 logs.go:282] 0 containers: []
	W1205 21:43:47.981512  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:47.981520  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:47.981593  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:48.016561  358357 cri.go:89] found id: ""
	I1205 21:43:48.016597  358357 logs.go:282] 0 containers: []
	W1205 21:43:48.016607  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:48.016617  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:48.016631  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:48.097690  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:48.097740  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:48.140272  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:48.140318  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:48.194365  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:48.194415  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:48.208715  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:48.208750  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:48.283159  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:47.607945  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.108918  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:48.984798  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.986257  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.399207  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:52.899857  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:54.899976  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:50.784026  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:50.812440  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:50.812524  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:50.866971  358357 cri.go:89] found id: ""
	I1205 21:43:50.867009  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.867022  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:50.867030  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:50.867100  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:50.910640  358357 cri.go:89] found id: ""
	I1205 21:43:50.910675  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.910686  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:50.910692  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:50.910767  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:50.944766  358357 cri.go:89] found id: ""
	I1205 21:43:50.944795  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.944803  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:50.944811  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:50.944880  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:50.978126  358357 cri.go:89] found id: ""
	I1205 21:43:50.978167  358357 logs.go:282] 0 containers: []
	W1205 21:43:50.978178  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:50.978185  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:50.978250  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:51.015639  358357 cri.go:89] found id: ""
	I1205 21:43:51.015682  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.015693  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:51.015700  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:51.015776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:51.050114  358357 cri.go:89] found id: ""
	I1205 21:43:51.050156  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.050166  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:51.050180  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:51.050244  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:51.088492  358357 cri.go:89] found id: ""
	I1205 21:43:51.088523  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.088533  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:51.088540  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:51.088599  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:51.125732  358357 cri.go:89] found id: ""
	I1205 21:43:51.125768  358357 logs.go:282] 0 containers: []
	W1205 21:43:51.125778  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:51.125789  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:51.125803  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:51.178278  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:51.178325  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:51.192954  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:51.192990  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:51.263378  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:51.263403  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:51.263416  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:51.341416  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:51.341463  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:53.882599  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:53.895846  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:53.895961  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:53.929422  358357 cri.go:89] found id: ""
	I1205 21:43:53.929465  358357 logs.go:282] 0 containers: []
	W1205 21:43:53.929480  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:53.929490  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:53.929568  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:53.965935  358357 cri.go:89] found id: ""
	I1205 21:43:53.965976  358357 logs.go:282] 0 containers: []
	W1205 21:43:53.965990  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:53.966001  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:53.966075  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:54.011360  358357 cri.go:89] found id: ""
	I1205 21:43:54.011394  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.011406  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:54.011412  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:54.011483  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:54.049333  358357 cri.go:89] found id: ""
	I1205 21:43:54.049368  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.049377  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:54.049385  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:54.049445  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:54.087228  358357 cri.go:89] found id: ""
	I1205 21:43:54.087266  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.087279  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:54.087287  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:54.087348  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:54.122795  358357 cri.go:89] found id: ""
	I1205 21:43:54.122832  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.122845  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:54.122853  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:54.122914  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:54.157622  358357 cri.go:89] found id: ""
	I1205 21:43:54.157657  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.157666  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:54.157672  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:54.157734  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:54.195574  358357 cri.go:89] found id: ""
	I1205 21:43:54.195610  358357 logs.go:282] 0 containers: []
	W1205 21:43:54.195624  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:54.195638  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:54.195659  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:43:54.235353  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:54.235403  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:54.292275  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:54.292338  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:54.306808  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:54.306842  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:54.380414  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:54.380440  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:54.380455  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:52.608190  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:54.609219  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:57.109413  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:53.484775  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:55.985011  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:57.402445  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:59.900093  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:56.956848  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:43:56.969840  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:43:56.969954  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:43:57.004299  358357 cri.go:89] found id: ""
	I1205 21:43:57.004405  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.004426  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:43:57.004434  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:43:57.004510  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:43:57.039150  358357 cri.go:89] found id: ""
	I1205 21:43:57.039176  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.039185  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:43:57.039192  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:43:57.039245  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:43:57.075259  358357 cri.go:89] found id: ""
	I1205 21:43:57.075299  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.075313  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:43:57.075331  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:43:57.075407  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:43:57.111445  358357 cri.go:89] found id: ""
	I1205 21:43:57.111474  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.111492  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:43:57.111500  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:43:57.111580  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:43:57.152495  358357 cri.go:89] found id: ""
	I1205 21:43:57.152527  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.152536  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:43:57.152548  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:43:57.152606  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:43:57.188070  358357 cri.go:89] found id: ""
	I1205 21:43:57.188106  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.188119  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:43:57.188126  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:43:57.188198  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:43:57.222213  358357 cri.go:89] found id: ""
	I1205 21:43:57.222245  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.222260  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:43:57.222268  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:43:57.222354  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:43:57.254072  358357 cri.go:89] found id: ""
	I1205 21:43:57.254101  358357 logs.go:282] 0 containers: []
	W1205 21:43:57.254110  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:43:57.254120  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:43:57.254136  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:57.307411  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:43:57.307456  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:43:57.323095  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:43:57.323130  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:43:57.400894  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:43:57.400928  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:43:57.400951  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:43:57.479628  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:43:57.479670  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:00.018936  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:00.032067  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:00.032149  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:00.065807  358357 cri.go:89] found id: ""
	I1205 21:44:00.065835  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.065844  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:00.065851  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:00.065931  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:00.100810  358357 cri.go:89] found id: ""
	I1205 21:44:00.100839  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.100847  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:00.100854  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:00.100920  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:00.136341  358357 cri.go:89] found id: ""
	I1205 21:44:00.136375  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.136388  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:00.136396  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:00.136454  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:00.173170  358357 cri.go:89] found id: ""
	I1205 21:44:00.173206  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.173227  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:00.173235  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:00.173332  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:00.208319  358357 cri.go:89] found id: ""
	I1205 21:44:00.208351  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.208363  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:00.208371  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:00.208438  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:00.250416  358357 cri.go:89] found id: ""
	I1205 21:44:00.250449  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.250463  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:00.250474  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:00.250546  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:00.285170  358357 cri.go:89] found id: ""
	I1205 21:44:00.285200  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.285212  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:00.285221  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:00.285290  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:00.320837  358357 cri.go:89] found id: ""
	I1205 21:44:00.320870  358357 logs.go:282] 0 containers: []
	W1205 21:44:00.320879  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:00.320889  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:00.320901  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:00.334341  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:00.334375  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:00.400547  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:00.400575  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:00.400592  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:00.476133  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:00.476181  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:00.514760  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:00.514795  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:43:59.606994  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:01.608870  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:43:58.484178  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:00.484913  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:02.399767  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:04.900007  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:03.067793  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:03.081940  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:03.082023  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:03.118846  358357 cri.go:89] found id: ""
	I1205 21:44:03.118886  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.118897  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:03.118905  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:03.118962  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:03.156092  358357 cri.go:89] found id: ""
	I1205 21:44:03.156128  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.156140  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:03.156148  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:03.156219  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:03.189783  358357 cri.go:89] found id: ""
	I1205 21:44:03.189824  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.189837  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:03.189845  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:03.189913  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:03.225034  358357 cri.go:89] found id: ""
	I1205 21:44:03.225069  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.225081  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:03.225095  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:03.225177  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:03.258959  358357 cri.go:89] found id: ""
	I1205 21:44:03.258991  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.259003  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:03.259011  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:03.259075  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:03.292871  358357 cri.go:89] found id: ""
	I1205 21:44:03.292907  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.292920  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:03.292927  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:03.292983  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:03.327659  358357 cri.go:89] found id: ""
	I1205 21:44:03.327707  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.327730  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:03.327738  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:03.327810  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:03.369576  358357 cri.go:89] found id: ""
	I1205 21:44:03.369614  358357 logs.go:282] 0 containers: []
	W1205 21:44:03.369627  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:03.369641  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:03.369656  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:03.424527  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:03.424580  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:03.438199  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:03.438231  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:03.509107  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:03.509139  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:03.509158  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:03.595637  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:03.595717  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:04.108126  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:06.109347  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:02.984401  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:04.987542  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:07.484630  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:06.900439  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:09.400464  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:06.135947  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:06.149530  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:06.149602  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:06.185659  358357 cri.go:89] found id: ""
	I1205 21:44:06.185692  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.185702  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:06.185709  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:06.185775  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:06.223238  358357 cri.go:89] found id: ""
	I1205 21:44:06.223281  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.223291  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:06.223298  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:06.223357  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:06.261842  358357 cri.go:89] found id: ""
	I1205 21:44:06.261884  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.261911  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:06.261920  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:06.261996  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:06.304416  358357 cri.go:89] found id: ""
	I1205 21:44:06.304455  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.304466  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:06.304475  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:06.304554  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:06.339676  358357 cri.go:89] found id: ""
	I1205 21:44:06.339711  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.339723  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:06.339732  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:06.339785  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:06.375594  358357 cri.go:89] found id: ""
	I1205 21:44:06.375630  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.375640  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:06.375647  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:06.375722  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:06.410953  358357 cri.go:89] found id: ""
	I1205 21:44:06.410986  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.410996  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:06.411002  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:06.411069  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:06.445559  358357 cri.go:89] found id: ""
	I1205 21:44:06.445590  358357 logs.go:282] 0 containers: []
	W1205 21:44:06.445603  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:06.445617  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:06.445634  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:06.497474  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:06.497534  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:06.512032  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:06.512065  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:06.582809  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:06.582845  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:06.582862  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:06.663652  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:06.663696  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:09.204305  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:09.217648  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:09.217738  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:09.255398  358357 cri.go:89] found id: ""
	I1205 21:44:09.255441  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.255454  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:09.255463  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:09.255533  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:09.290268  358357 cri.go:89] found id: ""
	I1205 21:44:09.290296  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.290310  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:09.290316  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:09.290384  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:09.324546  358357 cri.go:89] found id: ""
	I1205 21:44:09.324586  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.324599  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:09.324608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:09.324684  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:09.358619  358357 cri.go:89] found id: ""
	I1205 21:44:09.358665  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.358677  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:09.358686  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:09.358757  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:09.395697  358357 cri.go:89] found id: ""
	I1205 21:44:09.395736  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.395749  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:09.395758  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:09.395838  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:09.437064  358357 cri.go:89] found id: ""
	I1205 21:44:09.437099  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.437108  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:09.437115  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:09.437172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:09.472330  358357 cri.go:89] found id: ""
	I1205 21:44:09.472368  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.472380  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:09.472388  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:09.472460  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:09.507468  358357 cri.go:89] found id: ""
	I1205 21:44:09.507510  358357 logs.go:282] 0 containers: []
	W1205 21:44:09.507524  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:09.507538  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:09.507555  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:09.583640  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:09.583683  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:09.625830  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:09.625876  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:09.681668  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:09.681720  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:09.695305  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:09.695346  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:09.770136  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:08.608008  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:10.608715  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:09.485975  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:11.983682  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:11.899933  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:14.399690  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:12.270576  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:12.287283  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:12.287367  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:12.320855  358357 cri.go:89] found id: ""
	I1205 21:44:12.320890  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.320902  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:12.320911  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:12.320981  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:12.354550  358357 cri.go:89] found id: ""
	I1205 21:44:12.354595  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.354608  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:12.354617  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:12.354685  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:12.388487  358357 cri.go:89] found id: ""
	I1205 21:44:12.388519  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.388532  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:12.388542  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:12.388600  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:12.424338  358357 cri.go:89] found id: ""
	I1205 21:44:12.424366  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.424375  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:12.424382  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:12.424448  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:12.465997  358357 cri.go:89] found id: ""
	I1205 21:44:12.466028  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.466038  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:12.466044  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:12.466111  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:12.503567  358357 cri.go:89] found id: ""
	I1205 21:44:12.503602  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.503616  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:12.503625  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:12.503700  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:12.538669  358357 cri.go:89] found id: ""
	I1205 21:44:12.538696  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.538705  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:12.538711  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:12.538763  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:12.576375  358357 cri.go:89] found id: ""
	I1205 21:44:12.576416  358357 logs.go:282] 0 containers: []
	W1205 21:44:12.576429  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:12.576442  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:12.576458  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:12.625471  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:12.625512  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:12.639689  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:12.639729  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:12.710873  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:12.710896  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:12.710936  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:12.789800  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:12.789841  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:15.331451  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:15.344354  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:15.344441  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:15.378596  358357 cri.go:89] found id: ""
	I1205 21:44:15.378631  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.378640  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:15.378647  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:15.378718  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:15.418342  358357 cri.go:89] found id: ""
	I1205 21:44:15.418373  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.418386  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:15.418394  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:15.418461  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:15.454130  358357 cri.go:89] found id: ""
	I1205 21:44:15.454167  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.454179  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:15.454187  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:15.454269  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:15.490777  358357 cri.go:89] found id: ""
	I1205 21:44:15.490813  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.490824  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:15.490831  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:15.490887  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:15.523706  358357 cri.go:89] found id: ""
	I1205 21:44:15.523747  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.523760  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:15.523768  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:15.523839  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:15.559019  358357 cri.go:89] found id: ""
	I1205 21:44:15.559049  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.559058  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:15.559065  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:15.559121  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:13.107960  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:15.607620  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:13.984413  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:15.984615  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:16.401714  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:18.900883  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:15.592611  358357 cri.go:89] found id: ""
	I1205 21:44:15.592640  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.592649  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:15.592655  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:15.592707  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:15.628295  358357 cri.go:89] found id: ""
	I1205 21:44:15.628333  358357 logs.go:282] 0 containers: []
	W1205 21:44:15.628344  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:15.628354  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:15.628366  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:15.711123  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:15.711174  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:15.757486  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:15.757519  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:15.805750  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:15.805797  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:15.820685  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:15.820722  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:15.887073  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:18.388126  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:18.403082  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:18.403165  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:18.436195  358357 cri.go:89] found id: ""
	I1205 21:44:18.436230  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.436243  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:18.436255  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:18.436346  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:18.471756  358357 cri.go:89] found id: ""
	I1205 21:44:18.471788  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.471797  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:18.471804  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:18.471863  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:18.510693  358357 cri.go:89] found id: ""
	I1205 21:44:18.510741  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.510754  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:18.510763  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:18.510831  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:18.551976  358357 cri.go:89] found id: ""
	I1205 21:44:18.552014  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.552027  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:18.552036  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:18.552105  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:18.587679  358357 cri.go:89] found id: ""
	I1205 21:44:18.587716  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.587729  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:18.587738  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:18.587810  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:18.631487  358357 cri.go:89] found id: ""
	I1205 21:44:18.631519  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.631529  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:18.631547  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:18.631620  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:18.663618  358357 cri.go:89] found id: ""
	I1205 21:44:18.663646  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.663656  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:18.663665  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:18.663725  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:18.697864  358357 cri.go:89] found id: ""
	I1205 21:44:18.697894  358357 logs.go:282] 0 containers: []
	W1205 21:44:18.697929  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:18.697943  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:18.697960  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:18.710777  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:18.710808  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:18.784195  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:18.784222  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:18.784241  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:18.863023  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:18.863071  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:18.903228  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:18.903267  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:18.106883  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:20.107752  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:22.110346  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:18.484897  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:20.983954  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:21.399201  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:23.400564  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:21.454547  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:21.468048  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:21.468131  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:21.501472  358357 cri.go:89] found id: ""
	I1205 21:44:21.501503  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.501512  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:21.501518  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:21.501576  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:21.536522  358357 cri.go:89] found id: ""
	I1205 21:44:21.536564  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.536579  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:21.536589  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:21.536653  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:21.570924  358357 cri.go:89] found id: ""
	I1205 21:44:21.570955  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.570965  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:21.570971  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:21.571039  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:21.607649  358357 cri.go:89] found id: ""
	I1205 21:44:21.607678  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.607688  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:21.607697  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:21.607766  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:21.647025  358357 cri.go:89] found id: ""
	I1205 21:44:21.647052  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.647061  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:21.647067  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:21.647118  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:21.684418  358357 cri.go:89] found id: ""
	I1205 21:44:21.684460  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.684472  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:21.684481  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:21.684554  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:21.722093  358357 cri.go:89] found id: ""
	I1205 21:44:21.722129  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.722141  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:21.722149  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:21.722208  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:21.755757  358357 cri.go:89] found id: ""
	I1205 21:44:21.755794  358357 logs.go:282] 0 containers: []
	W1205 21:44:21.755807  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:21.755821  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:21.755839  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:21.809049  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:21.809110  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:21.823336  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:21.823371  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:21.894389  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:21.894412  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:21.894428  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:21.980288  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:21.980336  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:24.522528  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:24.535496  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:24.535587  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:24.570301  358357 cri.go:89] found id: ""
	I1205 21:44:24.570354  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.570369  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:24.570379  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:24.570452  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:24.606310  358357 cri.go:89] found id: ""
	I1205 21:44:24.606340  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.606351  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:24.606358  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:24.606427  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:24.644078  358357 cri.go:89] found id: ""
	I1205 21:44:24.644183  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.644198  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:24.644208  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:24.644293  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:24.679685  358357 cri.go:89] found id: ""
	I1205 21:44:24.679719  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.679729  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:24.679736  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:24.679817  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:24.717070  358357 cri.go:89] found id: ""
	I1205 21:44:24.717180  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.717216  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:24.717236  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:24.717309  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:24.757345  358357 cri.go:89] found id: ""
	I1205 21:44:24.757380  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.757393  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:24.757401  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:24.757480  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:24.790795  358357 cri.go:89] found id: ""
	I1205 21:44:24.790823  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.790835  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:24.790850  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:24.790911  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:24.827238  358357 cri.go:89] found id: ""
	I1205 21:44:24.827276  358357 logs.go:282] 0 containers: []
	W1205 21:44:24.827290  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:24.827302  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:24.827318  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:24.876812  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:24.876861  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:24.916558  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:24.916604  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:24.990733  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:24.990764  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:24.990785  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:25.065792  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:25.065852  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:24.608796  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:27.107897  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:22.984109  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:24.984259  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:26.985689  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:25.899361  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:27.900251  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:29.900465  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:27.608859  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:27.622449  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:27.622516  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:27.655675  358357 cri.go:89] found id: ""
	I1205 21:44:27.655704  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.655713  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:27.655718  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:27.655785  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:27.689751  358357 cri.go:89] found id: ""
	I1205 21:44:27.689781  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.689789  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:27.689795  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:27.689870  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:27.726811  358357 cri.go:89] found id: ""
	I1205 21:44:27.726842  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.726856  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:27.726865  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:27.726930  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:27.759600  358357 cri.go:89] found id: ""
	I1205 21:44:27.759631  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.759653  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:27.759660  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:27.759716  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:27.791700  358357 cri.go:89] found id: ""
	I1205 21:44:27.791738  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.791751  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:27.791763  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:27.791828  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:27.827998  358357 cri.go:89] found id: ""
	I1205 21:44:27.828031  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.828039  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:27.828045  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:27.828102  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:27.861452  358357 cri.go:89] found id: ""
	I1205 21:44:27.861481  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.861490  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:27.861496  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:27.861560  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:27.896469  358357 cri.go:89] found id: ""
	I1205 21:44:27.896519  358357 logs.go:282] 0 containers: []
	W1205 21:44:27.896532  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:27.896545  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:27.896560  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:27.935274  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:27.935312  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:27.986078  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:27.986116  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:28.000432  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:28.000463  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:28.074500  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:28.074530  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:28.074549  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:29.107971  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:31.108444  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:29.483791  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:31.484249  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:32.399397  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:34.400078  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:30.660117  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:30.672827  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:30.672907  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:30.711952  358357 cri.go:89] found id: ""
	I1205 21:44:30.711983  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.711993  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:30.711999  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:30.712051  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:30.747513  358357 cri.go:89] found id: ""
	I1205 21:44:30.747548  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.747558  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:30.747567  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:30.747627  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:30.782830  358357 cri.go:89] found id: ""
	I1205 21:44:30.782867  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.782878  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:30.782887  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:30.782980  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:30.820054  358357 cri.go:89] found id: ""
	I1205 21:44:30.820098  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.820111  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:30.820123  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:30.820198  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:30.857325  358357 cri.go:89] found id: ""
	I1205 21:44:30.857362  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.857373  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:30.857382  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:30.857453  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:30.893105  358357 cri.go:89] found id: ""
	I1205 21:44:30.893227  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.893267  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:30.893281  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:30.893356  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:30.932764  358357 cri.go:89] found id: ""
	I1205 21:44:30.932802  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.932815  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:30.932823  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:30.932885  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:30.968962  358357 cri.go:89] found id: ""
	I1205 21:44:30.968999  358357 logs.go:282] 0 containers: []
	W1205 21:44:30.969011  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:30.969023  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:30.969037  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:31.022152  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:31.022198  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:31.035418  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:31.035453  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:31.100989  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:31.101017  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:31.101030  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:31.182034  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:31.182079  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:33.725770  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:33.740956  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:33.741040  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:33.779158  358357 cri.go:89] found id: ""
	I1205 21:44:33.779198  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.779210  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:33.779218  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:33.779280  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:33.814600  358357 cri.go:89] found id: ""
	I1205 21:44:33.814628  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.814641  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:33.814649  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:33.814710  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:33.850220  358357 cri.go:89] found id: ""
	I1205 21:44:33.850255  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.850267  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:33.850276  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:33.850334  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:33.883737  358357 cri.go:89] found id: ""
	I1205 21:44:33.883765  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.883774  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:33.883781  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:33.883837  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:33.915007  358357 cri.go:89] found id: ""
	I1205 21:44:33.915046  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.915059  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:33.915068  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:33.915140  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:33.949038  358357 cri.go:89] found id: ""
	I1205 21:44:33.949077  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.949093  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:33.949102  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:33.949172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:33.982396  358357 cri.go:89] found id: ""
	I1205 21:44:33.982425  358357 logs.go:282] 0 containers: []
	W1205 21:44:33.982437  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:33.982444  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:33.982521  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:34.020834  358357 cri.go:89] found id: ""
	I1205 21:44:34.020870  358357 logs.go:282] 0 containers: []
	W1205 21:44:34.020882  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:34.020894  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:34.020911  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:34.103184  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:34.103238  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:34.147047  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:34.147091  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:34.196893  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:34.196942  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:34.211694  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:34.211730  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:34.282543  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:33.607930  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:36.108359  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:33.484472  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:35.484512  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:36.400821  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:38.899618  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:36.783278  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:36.798192  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:36.798266  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:36.832685  358357 cri.go:89] found id: ""
	I1205 21:44:36.832723  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.832736  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:36.832743  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:36.832814  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:36.868040  358357 cri.go:89] found id: ""
	I1205 21:44:36.868074  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.868085  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:36.868092  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:36.868156  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:36.901145  358357 cri.go:89] found id: ""
	I1205 21:44:36.901177  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.901186  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:36.901192  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:36.901248  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:36.935061  358357 cri.go:89] found id: ""
	I1205 21:44:36.935097  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.935107  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:36.935114  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:36.935183  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:36.984729  358357 cri.go:89] found id: ""
	I1205 21:44:36.984761  358357 logs.go:282] 0 containers: []
	W1205 21:44:36.984773  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:36.984782  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:36.984854  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:37.024644  358357 cri.go:89] found id: ""
	I1205 21:44:37.024684  358357 logs.go:282] 0 containers: []
	W1205 21:44:37.024696  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:37.024706  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:37.024781  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:37.074238  358357 cri.go:89] found id: ""
	I1205 21:44:37.074275  358357 logs.go:282] 0 containers: []
	W1205 21:44:37.074287  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:37.074295  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:37.074356  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:37.142410  358357 cri.go:89] found id: ""
	I1205 21:44:37.142444  358357 logs.go:282] 0 containers: []
	W1205 21:44:37.142457  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:37.142469  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:37.142488  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:37.192977  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:37.193018  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:37.206357  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:37.206393  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:37.272336  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:37.272372  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:37.272390  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:37.350655  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:37.350718  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:39.897421  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:39.911734  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:39.911806  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:39.950380  358357 cri.go:89] found id: ""
	I1205 21:44:39.950418  358357 logs.go:282] 0 containers: []
	W1205 21:44:39.950432  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:39.950441  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:39.950511  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:39.987259  358357 cri.go:89] found id: ""
	I1205 21:44:39.987292  358357 logs.go:282] 0 containers: []
	W1205 21:44:39.987302  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:39.987308  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:39.987363  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:40.021052  358357 cri.go:89] found id: ""
	I1205 21:44:40.021081  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.021090  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:40.021096  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:40.021167  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:40.057837  358357 cri.go:89] found id: ""
	I1205 21:44:40.057878  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.057919  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:40.057930  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:40.058004  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:40.094797  358357 cri.go:89] found id: ""
	I1205 21:44:40.094837  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.094853  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:40.094863  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:40.094932  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:40.130356  358357 cri.go:89] found id: ""
	I1205 21:44:40.130389  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.130398  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:40.130412  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:40.130467  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:40.164352  358357 cri.go:89] found id: ""
	I1205 21:44:40.164379  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.164389  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:40.164394  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:40.164452  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:40.197337  358357 cri.go:89] found id: ""
	I1205 21:44:40.197379  358357 logs.go:282] 0 containers: []
	W1205 21:44:40.197397  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:40.197408  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:40.197422  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:40.210014  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:40.210051  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:40.280666  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:40.280691  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:40.280706  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:40.356849  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:40.356896  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:40.395202  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:40.395237  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:38.108650  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:40.607598  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:37.983908  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:39.986080  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:42.484571  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:40.900460  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:43.400889  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:42.950686  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:42.964078  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:42.964156  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:42.999252  358357 cri.go:89] found id: ""
	I1205 21:44:42.999286  358357 logs.go:282] 0 containers: []
	W1205 21:44:42.999299  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:42.999307  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:42.999374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:43.035393  358357 cri.go:89] found id: ""
	I1205 21:44:43.035430  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.035444  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:43.035451  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:43.035505  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:43.070649  358357 cri.go:89] found id: ""
	I1205 21:44:43.070681  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.070693  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:43.070703  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:43.070776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:43.103054  358357 cri.go:89] found id: ""
	I1205 21:44:43.103089  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.103101  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:43.103110  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:43.103175  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:43.138607  358357 cri.go:89] found id: ""
	I1205 21:44:43.138640  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.138653  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:43.138661  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:43.138733  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:43.172188  358357 cri.go:89] found id: ""
	I1205 21:44:43.172220  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.172234  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:43.172241  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:43.172313  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:43.204838  358357 cri.go:89] found id: ""
	I1205 21:44:43.204872  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.204882  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:43.204891  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:43.204960  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:43.239985  358357 cri.go:89] found id: ""
	I1205 21:44:43.240011  358357 logs.go:282] 0 containers: []
	W1205 21:44:43.240020  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:43.240031  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:43.240052  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:43.291033  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:43.291088  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:43.305100  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:43.305152  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:43.378988  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:43.379020  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:43.379054  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:43.466548  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:43.466602  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:42.607901  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:44.608143  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:47.108131  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:44.984806  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:47.484110  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:45.899359  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:47.901854  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:46.007785  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:46.021496  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:46.021592  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:46.059259  358357 cri.go:89] found id: ""
	I1205 21:44:46.059296  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.059313  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:46.059321  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:46.059378  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:46.095304  358357 cri.go:89] found id: ""
	I1205 21:44:46.095336  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.095345  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:46.095351  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:46.095417  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:46.136792  358357 cri.go:89] found id: ""
	I1205 21:44:46.136822  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.136831  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:46.136837  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:46.136891  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:46.169696  358357 cri.go:89] found id: ""
	I1205 21:44:46.169726  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.169735  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:46.169742  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:46.169810  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:46.205481  358357 cri.go:89] found id: ""
	I1205 21:44:46.205513  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.205524  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:46.205531  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:46.205586  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:46.241112  358357 cri.go:89] found id: ""
	I1205 21:44:46.241157  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.241166  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:46.241173  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:46.241233  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:46.277129  358357 cri.go:89] found id: ""
	I1205 21:44:46.277159  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.277168  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:46.277174  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:46.277236  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:46.311196  358357 cri.go:89] found id: ""
	I1205 21:44:46.311238  358357 logs.go:282] 0 containers: []
	W1205 21:44:46.311250  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:46.311275  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:46.311302  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:46.362581  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:46.362621  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:46.375887  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:46.375924  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:46.444563  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:46.444588  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:46.444605  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:46.525811  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:46.525857  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:49.065883  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:49.079482  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:49.079586  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:49.113676  358357 cri.go:89] found id: ""
	I1205 21:44:49.113706  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.113716  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:49.113722  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:49.113792  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:49.147653  358357 cri.go:89] found id: ""
	I1205 21:44:49.147686  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.147696  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:49.147702  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:49.147766  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:49.180934  358357 cri.go:89] found id: ""
	I1205 21:44:49.180981  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.180996  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:49.181004  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:49.181064  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:49.214837  358357 cri.go:89] found id: ""
	I1205 21:44:49.214874  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.214883  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:49.214891  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:49.214960  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:49.249332  358357 cri.go:89] found id: ""
	I1205 21:44:49.249369  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.249380  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:49.249387  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:49.249451  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:49.284072  358357 cri.go:89] found id: ""
	I1205 21:44:49.284101  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.284109  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:49.284116  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:49.284169  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:49.323559  358357 cri.go:89] found id: ""
	I1205 21:44:49.323597  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.323607  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:49.323614  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:49.323675  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:49.361219  358357 cri.go:89] found id: ""
	I1205 21:44:49.361253  358357 logs.go:282] 0 containers: []
	W1205 21:44:49.361263  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:49.361275  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:49.361291  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:49.413099  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:49.413141  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:49.426610  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:49.426648  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:49.498740  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:49.498765  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:49.498794  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:49.578451  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:49.578495  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:49.608461  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:52.108005  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:49.484743  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:51.984842  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:50.401244  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:52.899546  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:54.899788  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:52.117874  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:52.131510  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:52.131601  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:52.169491  358357 cri.go:89] found id: ""
	I1205 21:44:52.169522  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.169535  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:52.169542  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:52.169617  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:52.202511  358357 cri.go:89] found id: ""
	I1205 21:44:52.202540  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.202556  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:52.202562  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:52.202630  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:52.239649  358357 cri.go:89] found id: ""
	I1205 21:44:52.239687  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.239699  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:52.239707  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:52.239771  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:52.274330  358357 cri.go:89] found id: ""
	I1205 21:44:52.274368  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.274380  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:52.274388  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:52.274452  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:52.310165  358357 cri.go:89] found id: ""
	I1205 21:44:52.310195  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.310207  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:52.310214  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:52.310284  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:52.344246  358357 cri.go:89] found id: ""
	I1205 21:44:52.344278  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.344293  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:52.344302  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:52.344375  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:52.379475  358357 cri.go:89] found id: ""
	I1205 21:44:52.379508  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.379521  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:52.379529  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:52.379606  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:52.419952  358357 cri.go:89] found id: ""
	I1205 21:44:52.419981  358357 logs.go:282] 0 containers: []
	W1205 21:44:52.419990  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:52.420002  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:52.420014  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:52.471608  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:52.471659  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:52.486003  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:52.486036  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:52.560751  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:52.560786  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:52.560804  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:52.641284  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:52.641340  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:55.183102  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:55.197406  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:55.197502  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:55.231335  358357 cri.go:89] found id: ""
	I1205 21:44:55.231365  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.231373  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:55.231381  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:55.231440  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:55.267877  358357 cri.go:89] found id: ""
	I1205 21:44:55.267907  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.267916  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:55.267923  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:55.267978  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:55.302400  358357 cri.go:89] found id: ""
	I1205 21:44:55.302428  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.302437  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:55.302443  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:55.302496  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:55.337878  358357 cri.go:89] found id: ""
	I1205 21:44:55.337932  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.337946  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:55.337954  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:55.338008  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:55.371877  358357 cri.go:89] found id: ""
	I1205 21:44:55.371920  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.371931  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:55.371941  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:55.372020  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:55.406914  358357 cri.go:89] found id: ""
	I1205 21:44:55.406947  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.406961  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:55.406970  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:55.407043  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:55.439910  358357 cri.go:89] found id: ""
	I1205 21:44:55.439940  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.439949  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:55.439955  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:55.440011  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:55.476886  358357 cri.go:89] found id: ""
	I1205 21:44:55.476916  358357 logs.go:282] 0 containers: []
	W1205 21:44:55.476925  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:55.476936  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:55.476949  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:55.531376  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:55.531422  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:55.545011  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:55.545050  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:44:54.108283  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:56.609653  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:53.985156  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:56.484908  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:57.400823  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:59.904973  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	W1205 21:44:55.620082  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:55.620122  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:55.620139  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:55.708465  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:55.708512  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:58.256289  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:44:58.269484  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:44:58.269560  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:44:58.303846  358357 cri.go:89] found id: ""
	I1205 21:44:58.303884  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.303897  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:44:58.303906  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:44:58.303978  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:44:58.343160  358357 cri.go:89] found id: ""
	I1205 21:44:58.343190  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.343199  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:44:58.343205  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:44:58.343269  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:44:58.379207  358357 cri.go:89] found id: ""
	I1205 21:44:58.379240  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.379252  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:44:58.379261  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:44:58.379323  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:44:58.415939  358357 cri.go:89] found id: ""
	I1205 21:44:58.415971  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.415981  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:44:58.415988  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:44:58.416046  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:44:58.450799  358357 cri.go:89] found id: ""
	I1205 21:44:58.450837  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.450848  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:44:58.450857  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:44:58.450927  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:44:58.487557  358357 cri.go:89] found id: ""
	I1205 21:44:58.487594  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.487602  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:44:58.487608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:44:58.487659  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:44:58.523932  358357 cri.go:89] found id: ""
	I1205 21:44:58.523960  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.523969  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:44:58.523976  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:44:58.524041  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:44:58.559140  358357 cri.go:89] found id: ""
	I1205 21:44:58.559169  358357 logs.go:282] 0 containers: []
	W1205 21:44:58.559179  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:44:58.559193  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:44:58.559209  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:44:58.643471  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:44:58.643520  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:44:58.683077  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:44:58.683118  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:44:58.736396  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:44:58.736441  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:44:58.751080  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:44:58.751115  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:44:58.824208  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:44:59.108134  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:01.608008  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:44:58.984778  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:01.486140  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:02.400031  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:04.400426  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:01.324977  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:01.338088  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:01.338169  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:01.375859  358357 cri.go:89] found id: ""
	I1205 21:45:01.375913  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.375927  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:01.375936  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:01.376012  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:01.411327  358357 cri.go:89] found id: ""
	I1205 21:45:01.411367  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.411377  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:01.411384  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:01.411441  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:01.446560  358357 cri.go:89] found id: ""
	I1205 21:45:01.446599  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.446612  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:01.446620  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:01.446687  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:01.480650  358357 cri.go:89] found id: ""
	I1205 21:45:01.480688  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.480702  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:01.480711  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:01.480788  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:01.515546  358357 cri.go:89] found id: ""
	I1205 21:45:01.515596  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.515609  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:01.515615  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:01.515680  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:01.550395  358357 cri.go:89] found id: ""
	I1205 21:45:01.550435  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.550449  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:01.550457  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:01.550619  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:01.588327  358357 cri.go:89] found id: ""
	I1205 21:45:01.588362  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.588375  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:01.588385  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:01.588456  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:01.622881  358357 cri.go:89] found id: ""
	I1205 21:45:01.622922  358357 logs.go:282] 0 containers: []
	W1205 21:45:01.622934  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:01.622948  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:01.622965  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:01.673702  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:01.673752  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:01.689462  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:01.689504  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:01.758509  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:01.758536  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:01.758550  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:01.839238  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:01.839294  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:04.380325  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:04.393102  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:04.393192  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:04.428295  358357 cri.go:89] found id: ""
	I1205 21:45:04.428327  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.428339  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:04.428348  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:04.428455  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:04.463190  358357 cri.go:89] found id: ""
	I1205 21:45:04.463226  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.463238  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:04.463246  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:04.463316  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:04.496966  358357 cri.go:89] found id: ""
	I1205 21:45:04.497010  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.497022  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:04.497030  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:04.497097  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:04.531907  358357 cri.go:89] found id: ""
	I1205 21:45:04.531938  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.531950  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:04.531958  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:04.532031  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:04.565760  358357 cri.go:89] found id: ""
	I1205 21:45:04.565793  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.565806  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:04.565815  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:04.565885  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:04.599720  358357 cri.go:89] found id: ""
	I1205 21:45:04.599756  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.599768  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:04.599774  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:04.599829  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:04.635208  358357 cri.go:89] found id: ""
	I1205 21:45:04.635241  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.635250  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:04.635257  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:04.635320  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:04.670121  358357 cri.go:89] found id: ""
	I1205 21:45:04.670153  358357 logs.go:282] 0 containers: []
	W1205 21:45:04.670162  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:04.670171  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:04.670183  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:04.708596  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:04.708641  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:04.765866  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:04.765919  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:04.780740  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:04.780772  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:04.856357  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:04.856386  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:04.856406  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:03.608315  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:06.107838  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:03.983888  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:05.990166  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:06.900029  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:08.900926  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:07.437028  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:07.450097  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:07.450168  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:07.485877  358357 cri.go:89] found id: ""
	I1205 21:45:07.485921  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.485934  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:07.485943  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:07.486007  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:07.520629  358357 cri.go:89] found id: ""
	I1205 21:45:07.520658  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.520666  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:07.520673  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:07.520732  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:07.555445  358357 cri.go:89] found id: ""
	I1205 21:45:07.555476  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.555487  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:07.555493  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:07.555560  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:07.594479  358357 cri.go:89] found id: ""
	I1205 21:45:07.594513  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.594526  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:07.594533  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:07.594594  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:07.629467  358357 cri.go:89] found id: ""
	I1205 21:45:07.629498  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.629509  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:07.629516  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:07.629572  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:07.666166  358357 cri.go:89] found id: ""
	I1205 21:45:07.666204  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.666218  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:07.666227  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:07.666303  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:07.700440  358357 cri.go:89] found id: ""
	I1205 21:45:07.700472  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.700481  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:07.700490  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:07.700557  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:07.735094  358357 cri.go:89] found id: ""
	I1205 21:45:07.735130  358357 logs.go:282] 0 containers: []
	W1205 21:45:07.735152  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:07.735166  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:07.735184  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:07.788339  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:07.788386  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:07.802847  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:07.802879  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:07.873731  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:07.873755  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:07.873771  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:07.953369  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:07.953411  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:10.492613  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:10.506259  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:10.506374  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:10.540075  358357 cri.go:89] found id: ""
	I1205 21:45:10.540111  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.540120  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:10.540127  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:10.540216  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:08.108464  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:10.611075  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:08.483571  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:10.485086  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:11.399948  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:13.400364  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:10.577943  358357 cri.go:89] found id: ""
	I1205 21:45:10.577978  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.577991  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:10.577998  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:10.578073  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:10.614217  358357 cri.go:89] found id: ""
	I1205 21:45:10.614255  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.614268  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:10.614276  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:10.614346  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:10.649669  358357 cri.go:89] found id: ""
	I1205 21:45:10.649739  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.649751  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:10.649760  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:10.649830  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:10.687171  358357 cri.go:89] found id: ""
	I1205 21:45:10.687202  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.687211  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:10.687217  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:10.687307  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:10.722815  358357 cri.go:89] found id: ""
	I1205 21:45:10.722848  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.722858  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:10.722865  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:10.722934  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:10.759711  358357 cri.go:89] found id: ""
	I1205 21:45:10.759753  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.759767  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:10.759777  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:10.759849  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:10.797955  358357 cri.go:89] found id: ""
	I1205 21:45:10.797991  358357 logs.go:282] 0 containers: []
	W1205 21:45:10.798004  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:10.798017  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:10.798034  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:10.851920  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:10.851971  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:10.867691  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:10.867728  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:10.953866  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:10.953891  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:10.953928  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:11.033945  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:11.033990  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:13.574051  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:13.587371  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:13.587454  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:13.623492  358357 cri.go:89] found id: ""
	I1205 21:45:13.623524  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.623540  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:13.623546  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:13.623603  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:13.659547  358357 cri.go:89] found id: ""
	I1205 21:45:13.659588  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.659602  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:13.659610  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:13.659671  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:13.694113  358357 cri.go:89] found id: ""
	I1205 21:45:13.694153  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.694166  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:13.694173  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:13.694233  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:13.729551  358357 cri.go:89] found id: ""
	I1205 21:45:13.729591  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.729604  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:13.729613  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:13.729684  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:13.763006  358357 cri.go:89] found id: ""
	I1205 21:45:13.763049  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.763062  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:13.763071  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:13.763134  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:13.802231  358357 cri.go:89] found id: ""
	I1205 21:45:13.802277  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.802292  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:13.802302  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:13.802384  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:13.840193  358357 cri.go:89] found id: ""
	I1205 21:45:13.840225  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.840240  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:13.840249  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:13.840335  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:13.872625  358357 cri.go:89] found id: ""
	I1205 21:45:13.872653  358357 logs.go:282] 0 containers: []
	W1205 21:45:13.872663  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:13.872673  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:13.872687  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:13.922983  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:13.923028  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:13.936484  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:13.936517  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:14.008295  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:14.008319  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:14.008334  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:14.095036  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:14.095091  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:13.110174  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:15.608405  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:12.986058  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:15.483570  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:17.484738  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:15.899141  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:17.899862  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:19.900993  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:16.637164  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:16.653070  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:16.653153  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:16.687386  358357 cri.go:89] found id: ""
	I1205 21:45:16.687441  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.687456  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:16.687466  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:16.687545  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:16.722204  358357 cri.go:89] found id: ""
	I1205 21:45:16.722235  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.722244  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:16.722250  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:16.722323  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:16.757594  358357 cri.go:89] found id: ""
	I1205 21:45:16.757622  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.757631  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:16.757637  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:16.757691  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:16.790401  358357 cri.go:89] found id: ""
	I1205 21:45:16.790433  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.790442  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:16.790449  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:16.790502  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:16.827569  358357 cri.go:89] found id: ""
	I1205 21:45:16.827602  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.827615  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:16.827624  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:16.827701  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:16.860920  358357 cri.go:89] found id: ""
	I1205 21:45:16.860949  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.860965  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:16.860974  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:16.861038  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:16.895008  358357 cri.go:89] found id: ""
	I1205 21:45:16.895051  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.895063  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:16.895072  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:16.895151  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:16.931916  358357 cri.go:89] found id: ""
	I1205 21:45:16.931951  358357 logs.go:282] 0 containers: []
	W1205 21:45:16.931963  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:16.931975  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:16.931987  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:17.016108  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:17.016156  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:17.055353  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:17.055390  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:17.105859  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:17.105921  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:17.121357  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:17.121394  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:17.192584  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:19.693409  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:19.706431  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:19.706498  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:19.741212  358357 cri.go:89] found id: ""
	I1205 21:45:19.741249  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.741258  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:19.741268  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:19.741335  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:19.775906  358357 cri.go:89] found id: ""
	I1205 21:45:19.775945  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.775954  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:19.775960  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:19.776031  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:19.810789  358357 cri.go:89] found id: ""
	I1205 21:45:19.810822  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.810831  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:19.810839  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:19.810897  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:19.847669  358357 cri.go:89] found id: ""
	I1205 21:45:19.847701  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.847710  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:19.847717  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:19.847776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:19.881700  358357 cri.go:89] found id: ""
	I1205 21:45:19.881739  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.881752  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:19.881761  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:19.881838  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:19.919085  358357 cri.go:89] found id: ""
	I1205 21:45:19.919125  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.919140  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:19.919148  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:19.919226  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:19.955024  358357 cri.go:89] found id: ""
	I1205 21:45:19.955064  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.955078  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:19.955086  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:19.955153  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:19.991482  358357 cri.go:89] found id: ""
	I1205 21:45:19.991511  358357 logs.go:282] 0 containers: []
	W1205 21:45:19.991519  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:19.991530  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:19.991543  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:20.041980  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:20.042030  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:20.055580  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:20.055612  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:20.127194  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:20.127225  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:20.127242  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:20.207750  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:20.207797  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:18.108143  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:20.108435  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:22.109088  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:19.985203  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:21.986674  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:22.399189  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:24.400311  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:22.749233  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:22.763720  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:22.763796  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:22.798779  358357 cri.go:89] found id: ""
	I1205 21:45:22.798810  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.798820  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:22.798826  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:22.798906  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:22.837894  358357 cri.go:89] found id: ""
	I1205 21:45:22.837949  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.837964  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:22.837972  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:22.838026  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:22.872671  358357 cri.go:89] found id: ""
	I1205 21:45:22.872701  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.872713  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:22.872720  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:22.872785  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:22.906877  358357 cri.go:89] found id: ""
	I1205 21:45:22.906919  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.906929  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:22.906936  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:22.906988  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:22.941445  358357 cri.go:89] found id: ""
	I1205 21:45:22.941475  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.941486  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:22.941494  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:22.941565  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:22.976633  358357 cri.go:89] found id: ""
	I1205 21:45:22.976671  358357 logs.go:282] 0 containers: []
	W1205 21:45:22.976685  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:22.976694  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:22.976773  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:23.017034  358357 cri.go:89] found id: ""
	I1205 21:45:23.017077  358357 logs.go:282] 0 containers: []
	W1205 21:45:23.017090  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:23.017096  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:23.017153  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:23.065098  358357 cri.go:89] found id: ""
	I1205 21:45:23.065136  358357 logs.go:282] 0 containers: []
	W1205 21:45:23.065149  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:23.065164  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:23.065180  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:23.145053  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:23.145104  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:23.159522  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:23.159557  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:23.228841  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:23.228865  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:23.228885  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:23.313351  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:23.313397  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:24.110151  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:26.607420  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:23.992037  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:26.484076  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:26.400904  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:28.899210  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:25.852034  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:25.865843  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:25.865944  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:25.899186  358357 cri.go:89] found id: ""
	I1205 21:45:25.899212  358357 logs.go:282] 0 containers: []
	W1205 21:45:25.899222  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:25.899231  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:25.899298  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:25.938242  358357 cri.go:89] found id: ""
	I1205 21:45:25.938274  358357 logs.go:282] 0 containers: []
	W1205 21:45:25.938286  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:25.938299  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:25.938371  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:25.972322  358357 cri.go:89] found id: ""
	I1205 21:45:25.972355  358357 logs.go:282] 0 containers: []
	W1205 21:45:25.972368  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:25.972376  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:25.972446  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:26.010638  358357 cri.go:89] found id: ""
	I1205 21:45:26.010667  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.010678  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:26.010686  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:26.010754  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:26.045415  358357 cri.go:89] found id: ""
	I1205 21:45:26.045450  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.045459  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:26.045466  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:26.045548  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:26.084635  358357 cri.go:89] found id: ""
	I1205 21:45:26.084673  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.084687  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:26.084696  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:26.084767  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:26.117417  358357 cri.go:89] found id: ""
	I1205 21:45:26.117455  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.117467  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:26.117475  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:26.117539  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:26.151857  358357 cri.go:89] found id: ""
	I1205 21:45:26.151893  358357 logs.go:282] 0 containers: []
	W1205 21:45:26.151905  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:26.151918  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:26.151936  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:26.238876  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:26.238926  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:26.280970  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:26.281006  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:26.336027  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:26.336083  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:26.350619  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:26.350654  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:26.418836  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:28.919046  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:28.933916  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:28.934002  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:28.971698  358357 cri.go:89] found id: ""
	I1205 21:45:28.971728  358357 logs.go:282] 0 containers: []
	W1205 21:45:28.971737  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:28.971744  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:28.971807  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:29.007385  358357 cri.go:89] found id: ""
	I1205 21:45:29.007423  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.007435  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:29.007443  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:29.007509  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:29.041087  358357 cri.go:89] found id: ""
	I1205 21:45:29.041130  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.041143  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:29.041151  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:29.041222  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:29.076926  358357 cri.go:89] found id: ""
	I1205 21:45:29.076965  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.076977  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:29.076986  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:29.077064  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:29.116376  358357 cri.go:89] found id: ""
	I1205 21:45:29.116419  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.116433  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:29.116443  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:29.116523  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:29.152495  358357 cri.go:89] found id: ""
	I1205 21:45:29.152530  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.152543  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:29.152552  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:29.152639  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:29.187647  358357 cri.go:89] found id: ""
	I1205 21:45:29.187681  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.187695  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:29.187704  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:29.187775  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:29.220410  358357 cri.go:89] found id: ""
	I1205 21:45:29.220452  358357 logs.go:282] 0 containers: []
	W1205 21:45:29.220469  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:29.220484  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:29.220513  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:29.287156  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:29.287184  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:29.287200  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:29.365592  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:29.365644  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:29.407876  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:29.407917  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:29.462241  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:29.462294  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:28.607611  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:30.608683  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:28.484925  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:30.485979  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:30.899449  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:32.900189  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:34.900501  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:31.976691  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:31.991087  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:31.991172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:32.025743  358357 cri.go:89] found id: ""
	I1205 21:45:32.025781  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.025793  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:32.025801  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:32.025870  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:32.061790  358357 cri.go:89] found id: ""
	I1205 21:45:32.061828  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.061838  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:32.061844  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:32.061929  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:32.095437  358357 cri.go:89] found id: ""
	I1205 21:45:32.095474  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.095486  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:32.095493  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:32.095553  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:32.132203  358357 cri.go:89] found id: ""
	I1205 21:45:32.132242  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.132255  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:32.132264  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:32.132325  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:32.168529  358357 cri.go:89] found id: ""
	I1205 21:45:32.168566  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.168582  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:32.168590  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:32.168661  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:32.204816  358357 cri.go:89] found id: ""
	I1205 21:45:32.204851  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.204860  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:32.204885  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:32.204949  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:32.241661  358357 cri.go:89] found id: ""
	I1205 21:45:32.241696  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.241706  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:32.241712  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:32.241768  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:32.275458  358357 cri.go:89] found id: ""
	I1205 21:45:32.275491  358357 logs.go:282] 0 containers: []
	W1205 21:45:32.275500  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:32.275511  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:32.275524  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:32.329044  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:32.329098  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:32.343399  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:32.343432  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:32.420102  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:32.420135  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:32.420152  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:32.503061  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:32.503109  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:35.042457  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:35.056486  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:35.056564  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:35.091571  358357 cri.go:89] found id: ""
	I1205 21:45:35.091603  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.091613  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:35.091619  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:35.091686  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:35.130172  358357 cri.go:89] found id: ""
	I1205 21:45:35.130213  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.130225  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:35.130233  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:35.130303  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:35.165723  358357 cri.go:89] found id: ""
	I1205 21:45:35.165754  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.165763  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:35.165770  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:35.165836  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:35.203599  358357 cri.go:89] found id: ""
	I1205 21:45:35.203632  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.203646  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:35.203658  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:35.203721  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:35.237881  358357 cri.go:89] found id: ""
	I1205 21:45:35.237926  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.237938  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:35.237946  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:35.238015  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:35.276506  358357 cri.go:89] found id: ""
	I1205 21:45:35.276543  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.276555  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:35.276563  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:35.276632  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:35.309600  358357 cri.go:89] found id: ""
	I1205 21:45:35.309632  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.309644  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:35.309652  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:35.309723  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:35.343062  358357 cri.go:89] found id: ""
	I1205 21:45:35.343097  358357 logs.go:282] 0 containers: []
	W1205 21:45:35.343110  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:35.343124  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:35.343146  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:35.398686  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:35.398724  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:35.412910  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:35.412945  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:35.479542  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:35.479570  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:35.479587  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:35.556709  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:35.556754  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:33.107324  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:35.108931  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:32.988514  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:35.485301  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:37.399616  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:39.400552  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:38.095347  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:38.110086  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:38.110161  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:38.149114  358357 cri.go:89] found id: ""
	I1205 21:45:38.149149  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.149162  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:38.149172  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:38.149250  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:38.184110  358357 cri.go:89] found id: ""
	I1205 21:45:38.184141  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.184151  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:38.184157  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:38.184213  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:38.219569  358357 cri.go:89] found id: ""
	I1205 21:45:38.219608  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.219620  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:38.219628  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:38.219703  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:38.253096  358357 cri.go:89] found id: ""
	I1205 21:45:38.253133  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.253158  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:38.253167  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:38.253259  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:38.291558  358357 cri.go:89] found id: ""
	I1205 21:45:38.291591  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.291601  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:38.291608  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:38.291689  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:38.328236  358357 cri.go:89] found id: ""
	I1205 21:45:38.328269  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.328281  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:38.328288  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:38.328353  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:38.363263  358357 cri.go:89] found id: ""
	I1205 21:45:38.363295  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.363305  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:38.363311  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:38.363371  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:38.396544  358357 cri.go:89] found id: ""
	I1205 21:45:38.396577  358357 logs.go:282] 0 containers: []
	W1205 21:45:38.396587  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:38.396598  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:38.396611  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:38.438187  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:38.438226  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:38.492047  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:38.492086  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:38.505080  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:38.505123  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:38.574293  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:38.574320  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:38.574343  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:37.608407  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:39.609266  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:42.107313  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:37.984499  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:40.484539  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:41.898538  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:43.900097  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:41.155780  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:41.170875  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:41.170959  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:41.206755  358357 cri.go:89] found id: ""
	I1205 21:45:41.206793  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.206807  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:41.206824  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:41.206882  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:41.251021  358357 cri.go:89] found id: ""
	I1205 21:45:41.251060  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.251074  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:41.251082  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:41.251144  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:41.286805  358357 cri.go:89] found id: ""
	I1205 21:45:41.286836  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.286845  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:41.286852  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:41.286910  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:41.319489  358357 cri.go:89] found id: ""
	I1205 21:45:41.319526  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.319540  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:41.319549  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:41.319620  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:41.352769  358357 cri.go:89] found id: ""
	I1205 21:45:41.352807  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.352817  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:41.352823  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:41.352883  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:41.386830  358357 cri.go:89] found id: ""
	I1205 21:45:41.386869  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.386881  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:41.386889  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:41.386961  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:41.424824  358357 cri.go:89] found id: ""
	I1205 21:45:41.424866  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.424882  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:41.424892  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:41.424957  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:41.460273  358357 cri.go:89] found id: ""
	I1205 21:45:41.460307  358357 logs.go:282] 0 containers: []
	W1205 21:45:41.460316  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:41.460327  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:41.460341  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:41.539890  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:41.539951  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:41.579521  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:41.579570  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:41.630867  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:41.630917  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:41.644854  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:41.644892  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:41.719202  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:44.219965  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:44.234714  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:44.234824  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:44.269879  358357 cri.go:89] found id: ""
	I1205 21:45:44.269931  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.269945  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:44.269954  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:44.270023  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:44.302994  358357 cri.go:89] found id: ""
	I1205 21:45:44.303034  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.303047  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:44.303056  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:44.303126  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:44.337575  358357 cri.go:89] found id: ""
	I1205 21:45:44.337604  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.337613  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:44.337620  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:44.337674  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:44.374554  358357 cri.go:89] found id: ""
	I1205 21:45:44.374591  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.374600  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:44.374605  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:44.374671  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:44.409965  358357 cri.go:89] found id: ""
	I1205 21:45:44.410001  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.410013  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:44.410021  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:44.410090  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:44.446583  358357 cri.go:89] found id: ""
	I1205 21:45:44.446620  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.446633  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:44.446641  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:44.446705  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:44.481187  358357 cri.go:89] found id: ""
	I1205 21:45:44.481223  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.481239  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:44.481248  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:44.481315  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:44.515729  358357 cri.go:89] found id: ""
	I1205 21:45:44.515761  358357 logs.go:282] 0 containers: []
	W1205 21:45:44.515770  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:44.515781  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:44.515799  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:44.567266  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:44.567314  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:44.581186  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:44.581219  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:44.655377  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:44.655404  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:44.655420  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:44.741789  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:44.741835  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:44.108015  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:46.109878  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:42.987144  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:45.484635  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:45.900943  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:48.399795  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:47.283721  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:47.296771  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:47.296839  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:47.330892  358357 cri.go:89] found id: ""
	I1205 21:45:47.330927  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.330941  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:47.330949  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:47.331015  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:47.362771  358357 cri.go:89] found id: ""
	I1205 21:45:47.362805  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.362818  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:47.362826  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:47.362898  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:47.397052  358357 cri.go:89] found id: ""
	I1205 21:45:47.397082  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.397092  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:47.397100  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:47.397172  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:47.430155  358357 cri.go:89] found id: ""
	I1205 21:45:47.430184  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.430193  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:47.430199  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:47.430255  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:47.465183  358357 cri.go:89] found id: ""
	I1205 21:45:47.465230  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.465244  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:47.465252  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:47.465327  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:47.505432  358357 cri.go:89] found id: ""
	I1205 21:45:47.505467  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.505479  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:47.505487  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:47.505583  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:47.538813  358357 cri.go:89] found id: ""
	I1205 21:45:47.538841  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.538851  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:47.538859  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:47.538913  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:47.577554  358357 cri.go:89] found id: ""
	I1205 21:45:47.577589  358357 logs.go:282] 0 containers: []
	W1205 21:45:47.577598  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:47.577610  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:47.577623  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:47.633652  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:47.633700  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:47.648242  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:47.648291  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:47.723335  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:47.723369  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:47.723387  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:47.806404  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:47.806454  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:50.348134  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:50.361273  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:50.361367  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:50.393942  358357 cri.go:89] found id: ""
	I1205 21:45:50.393972  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.393980  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:50.393986  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:50.394054  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:50.430835  358357 cri.go:89] found id: ""
	I1205 21:45:50.430873  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.430884  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:50.430892  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:50.430963  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:50.465245  358357 cri.go:89] found id: ""
	I1205 21:45:50.465303  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.465316  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:50.465326  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:50.465397  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:50.498370  358357 cri.go:89] found id: ""
	I1205 21:45:50.498396  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.498406  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:50.498414  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:50.498480  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:50.530194  358357 cri.go:89] found id: ""
	I1205 21:45:50.530233  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.530247  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:50.530262  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:50.530383  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:48.607163  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:50.608353  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:47.984724  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:50.483783  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:52.484838  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:50.400860  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:52.898957  357912 pod_ready.go:103] pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:54.399893  357912 pod_ready.go:82] duration metric: took 4m0.00693537s for pod "metrics-server-6867b74b74-xb867" in "kube-system" namespace to be "Ready" ...
	E1205 21:45:54.399922  357912 pod_ready.go:67] WaitExtra: waitPodCondition: context deadline exceeded
	I1205 21:45:54.399931  357912 pod_ready.go:39] duration metric: took 4m6.388856223s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:45:54.399958  357912 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:45:54.399994  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:54.400045  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:54.436650  357912 cri.go:89] found id: "079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:54.436679  357912 cri.go:89] found id: ""
	I1205 21:45:54.436690  357912 logs.go:282] 1 containers: [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828]
	I1205 21:45:54.436751  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.440795  357912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:54.440866  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:54.475714  357912 cri.go:89] found id: "035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:54.475739  357912 cri.go:89] found id: ""
	I1205 21:45:54.475749  357912 logs.go:282] 1 containers: [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7]
	I1205 21:45:54.475879  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.480165  357912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:54.480255  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:54.516427  357912 cri.go:89] found id: "d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:54.516459  357912 cri.go:89] found id: ""
	I1205 21:45:54.516468  357912 logs.go:282] 1 containers: [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6]
	I1205 21:45:54.516529  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.520486  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:54.520548  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:54.555687  357912 cri.go:89] found id: "c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:54.555719  357912 cri.go:89] found id: ""
	I1205 21:45:54.555727  357912 logs.go:282] 1 containers: [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5]
	I1205 21:45:54.555789  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.559827  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:54.559916  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:54.596640  357912 cri.go:89] found id: "963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:54.596665  357912 cri.go:89] found id: ""
	I1205 21:45:54.596675  357912 logs.go:282] 1 containers: [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d]
	I1205 21:45:54.596753  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.601144  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:54.601229  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:54.639374  357912 cri.go:89] found id: "807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:54.639408  357912 cri.go:89] found id: ""
	I1205 21:45:54.639419  357912 logs.go:282] 1 containers: [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a]
	I1205 21:45:54.639495  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.643665  357912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:54.643754  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:54.678252  357912 cri.go:89] found id: ""
	I1205 21:45:54.678286  357912 logs.go:282] 0 containers: []
	W1205 21:45:54.678297  357912 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:54.678306  357912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 21:45:54.678373  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 21:45:54.711874  357912 cri.go:89] found id: "7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:54.711908  357912 cri.go:89] found id: "37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:54.711915  357912 cri.go:89] found id: ""
	I1205 21:45:54.711925  357912 logs.go:282] 2 containers: [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa]
	I1205 21:45:54.711994  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.716164  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:54.720244  357912 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:54.720274  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:55.258307  357912 logs.go:123] Gathering logs for container status ...
	I1205 21:45:55.258372  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:55.300132  357912 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:55.300198  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:55.315703  357912 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:55.315745  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:45:50.567181  358357 cri.go:89] found id: ""
	I1205 21:45:50.567216  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.567229  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:50.567237  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:50.567329  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:50.600345  358357 cri.go:89] found id: ""
	I1205 21:45:50.600376  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.600385  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:50.600392  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:50.600446  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:50.635072  358357 cri.go:89] found id: ""
	I1205 21:45:50.635108  358357 logs.go:282] 0 containers: []
	W1205 21:45:50.635121  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:50.635133  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:50.635146  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:50.702977  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:50.703001  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:50.703020  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:50.785033  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:50.785077  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:50.825173  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:50.825214  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:50.876664  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:50.876723  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:53.391161  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:53.405635  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:53.405713  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:53.440319  358357 cri.go:89] found id: ""
	I1205 21:45:53.440358  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.440371  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:45:53.440380  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:53.440446  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:53.480169  358357 cri.go:89] found id: ""
	I1205 21:45:53.480195  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.480204  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:45:53.480210  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:53.480355  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:53.515202  358357 cri.go:89] found id: ""
	I1205 21:45:53.515233  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.515315  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:45:53.515332  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:53.515401  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:53.552351  358357 cri.go:89] found id: ""
	I1205 21:45:53.552388  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.552402  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:45:53.552411  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:53.552481  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:53.590669  358357 cri.go:89] found id: ""
	I1205 21:45:53.590705  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.590717  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:45:53.590726  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:53.590791  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:53.627977  358357 cri.go:89] found id: ""
	I1205 21:45:53.628015  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.628029  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:45:53.628037  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:53.628112  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:53.662711  358357 cri.go:89] found id: ""
	I1205 21:45:53.662745  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.662761  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:53.662769  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:45:53.662839  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:45:53.696925  358357 cri.go:89] found id: ""
	I1205 21:45:53.696965  358357 logs.go:282] 0 containers: []
	W1205 21:45:53.696976  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:45:53.696988  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:53.697012  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:53.750924  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:53.750970  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:53.763965  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:53.763997  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:45:53.832335  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:45:53.832361  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:53.832377  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:53.915961  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:45:53.916011  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:53.107436  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:55.107826  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:57.108330  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:56.456367  358357 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:56.469503  358357 kubeadm.go:597] duration metric: took 4m2.564660353s to restartPrimaryControlPlane
	W1205 21:45:56.469630  358357 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 21:45:56.469672  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:45:56.934079  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:45:56.948092  358357 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:45:56.958166  358357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:45:56.967591  358357 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:45:56.967613  358357 kubeadm.go:157] found existing configuration files:
	
	I1205 21:45:56.967660  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:45:56.977085  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:45:56.977152  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:45:56.987395  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:45:56.996675  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:45:56.996764  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:45:57.010323  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:45:57.020441  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:45:57.020514  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:45:57.032114  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:45:57.042012  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:45:57.042095  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:45:57.051763  358357 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:45:57.126716  358357 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 21:45:57.126840  358357 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:45:57.265491  358357 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:45:57.265694  358357 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:45:57.265856  358357 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 21:45:57.450377  358357 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:45:54.486224  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:56.984442  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:57.452240  358357 out.go:235]   - Generating certificates and keys ...
	I1205 21:45:57.452361  358357 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:45:57.452458  358357 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:45:57.452625  358357 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:45:57.452712  358357 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:45:57.452824  358357 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:45:57.452913  358357 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:45:57.453084  358357 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:45:57.453179  358357 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:45:57.453276  358357 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:45:57.453343  358357 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:45:57.453377  358357 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:45:57.453430  358357 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:45:57.872211  358357 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:45:58.085006  358357 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:45:58.165194  358357 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:45:58.323597  358357 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:45:58.338715  358357 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:45:58.340504  358357 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:45:58.340604  358357 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:45:58.479241  358357 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:45:55.429307  357912 logs.go:123] Gathering logs for etcd [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7] ...
	I1205 21:45:55.429346  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:55.476044  357912 logs.go:123] Gathering logs for kube-proxy [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d] ...
	I1205 21:45:55.476085  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:55.512956  357912 logs.go:123] Gathering logs for kube-controller-manager [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a] ...
	I1205 21:45:55.513004  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:55.570534  357912 logs.go:123] Gathering logs for storage-provisioner [37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa] ...
	I1205 21:45:55.570583  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:55.608099  357912 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:55.608141  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:55.677021  357912 logs.go:123] Gathering logs for kube-apiserver [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828] ...
	I1205 21:45:55.677069  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:55.727298  357912 logs.go:123] Gathering logs for coredns [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6] ...
	I1205 21:45:55.727347  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:55.764637  357912 logs.go:123] Gathering logs for kube-scheduler [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5] ...
	I1205 21:45:55.764675  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:55.803471  357912 logs.go:123] Gathering logs for storage-provisioner [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4] ...
	I1205 21:45:55.803513  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:58.347406  357912 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:45:58.362574  357912 api_server.go:72] duration metric: took 4m18.075855986s to wait for apiserver process to appear ...
	I1205 21:45:58.362609  357912 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:45:58.362658  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:45:58.362724  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:45:58.407526  357912 cri.go:89] found id: "079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:58.407559  357912 cri.go:89] found id: ""
	I1205 21:45:58.407571  357912 logs.go:282] 1 containers: [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828]
	I1205 21:45:58.407642  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.412133  357912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:45:58.412221  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:45:58.454243  357912 cri.go:89] found id: "035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:58.454280  357912 cri.go:89] found id: ""
	I1205 21:45:58.454292  357912 logs.go:282] 1 containers: [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7]
	I1205 21:45:58.454381  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.458950  357912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:45:58.459038  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:45:58.502502  357912 cri.go:89] found id: "d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:58.502527  357912 cri.go:89] found id: ""
	I1205 21:45:58.502535  357912 logs.go:282] 1 containers: [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6]
	I1205 21:45:58.502595  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.506926  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:45:58.507012  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:45:58.548550  357912 cri.go:89] found id: "c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:58.548587  357912 cri.go:89] found id: ""
	I1205 21:45:58.548600  357912 logs.go:282] 1 containers: [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5]
	I1205 21:45:58.548670  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.553797  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:45:58.553886  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:45:58.595353  357912 cri.go:89] found id: "963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:58.595389  357912 cri.go:89] found id: ""
	I1205 21:45:58.595401  357912 logs.go:282] 1 containers: [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d]
	I1205 21:45:58.595471  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.599759  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:45:58.599856  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:45:58.645942  357912 cri.go:89] found id: "807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:58.645979  357912 cri.go:89] found id: ""
	I1205 21:45:58.645991  357912 logs.go:282] 1 containers: [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a]
	I1205 21:45:58.646059  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.650416  357912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:45:58.650502  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:45:58.688459  357912 cri.go:89] found id: ""
	I1205 21:45:58.688491  357912 logs.go:282] 0 containers: []
	W1205 21:45:58.688504  357912 logs.go:284] No container was found matching "kindnet"
	I1205 21:45:58.688520  357912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 21:45:58.688593  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 21:45:58.723421  357912 cri.go:89] found id: "7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:58.723454  357912 cri.go:89] found id: "37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:58.723461  357912 cri.go:89] found id: ""
	I1205 21:45:58.723471  357912 logs.go:282] 2 containers: [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa]
	I1205 21:45:58.723539  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.728441  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:45:58.732583  357912 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:45:58.732610  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:45:58.843724  357912 logs.go:123] Gathering logs for kube-apiserver [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828] ...
	I1205 21:45:58.843765  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:45:58.887836  357912 logs.go:123] Gathering logs for etcd [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7] ...
	I1205 21:45:58.887879  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:45:58.932909  357912 logs.go:123] Gathering logs for storage-provisioner [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4] ...
	I1205 21:45:58.932951  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:45:58.967559  357912 logs.go:123] Gathering logs for container status ...
	I1205 21:45:58.967613  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:45:59.006895  357912 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:45:59.006939  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:45:59.446512  357912 logs.go:123] Gathering logs for kubelet ...
	I1205 21:45:59.446573  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:45:59.518754  357912 logs.go:123] Gathering logs for dmesg ...
	I1205 21:45:59.518807  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:45:59.533621  357912 logs.go:123] Gathering logs for coredns [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6] ...
	I1205 21:45:59.533656  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:45:59.569589  357912 logs.go:123] Gathering logs for kube-scheduler [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5] ...
	I1205 21:45:59.569630  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:45:59.606973  357912 logs.go:123] Gathering logs for kube-proxy [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d] ...
	I1205 21:45:59.607028  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:45:59.651826  357912 logs.go:123] Gathering logs for kube-controller-manager [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a] ...
	I1205 21:45:59.651862  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:45:59.712309  357912 logs.go:123] Gathering logs for storage-provisioner [37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa] ...
	I1205 21:45:59.712353  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:45:58.480831  358357 out.go:235]   - Booting up control plane ...
	I1205 21:45:58.480991  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:45:58.495549  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:45:58.497073  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:45:58.498469  358357 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:45:58.501265  358357 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 21:45:59.112080  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:01.608016  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:45:58.985164  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:01.485724  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:02.247604  357912 api_server.go:253] Checking apiserver healthz at https://192.168.39.106:8444/healthz ...
	I1205 21:46:02.253579  357912 api_server.go:279] https://192.168.39.106:8444/healthz returned 200:
	ok
	I1205 21:46:02.254645  357912 api_server.go:141] control plane version: v1.31.2
	I1205 21:46:02.254674  357912 api_server.go:131] duration metric: took 3.892057076s to wait for apiserver health ...
	I1205 21:46:02.254685  357912 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:46:02.254718  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:46:02.254784  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:46:02.292102  357912 cri.go:89] found id: "079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:46:02.292133  357912 cri.go:89] found id: ""
	I1205 21:46:02.292143  357912 logs.go:282] 1 containers: [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828]
	I1205 21:46:02.292210  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.297421  357912 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:46:02.297522  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:46:02.333140  357912 cri.go:89] found id: "035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:46:02.333172  357912 cri.go:89] found id: ""
	I1205 21:46:02.333184  357912 logs.go:282] 1 containers: [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7]
	I1205 21:46:02.333258  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.337789  357912 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:46:02.337870  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:46:02.374302  357912 cri.go:89] found id: "d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:46:02.374332  357912 cri.go:89] found id: ""
	I1205 21:46:02.374344  357912 logs.go:282] 1 containers: [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6]
	I1205 21:46:02.374411  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.378635  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:46:02.378704  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:46:02.415899  357912 cri.go:89] found id: "c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:46:02.415932  357912 cri.go:89] found id: ""
	I1205 21:46:02.415944  357912 logs.go:282] 1 containers: [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5]
	I1205 21:46:02.416010  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.421097  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:46:02.421180  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:46:02.457483  357912 cri.go:89] found id: "963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
	I1205 21:46:02.457514  357912 cri.go:89] found id: ""
	I1205 21:46:02.457534  357912 logs.go:282] 1 containers: [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d]
	I1205 21:46:02.457606  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.462215  357912 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:46:02.462307  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:46:02.499576  357912 cri.go:89] found id: "807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:46:02.499603  357912 cri.go:89] found id: ""
	I1205 21:46:02.499612  357912 logs.go:282] 1 containers: [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a]
	I1205 21:46:02.499681  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.504262  357912 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:46:02.504341  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:46:02.539612  357912 cri.go:89] found id: ""
	I1205 21:46:02.539649  357912 logs.go:282] 0 containers: []
	W1205 21:46:02.539661  357912 logs.go:284] No container was found matching "kindnet"
	I1205 21:46:02.539668  357912 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1205 21:46:02.539740  357912 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1205 21:46:02.576436  357912 cri.go:89] found id: "7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:46:02.576464  357912 cri.go:89] found id: "37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:46:02.576468  357912 cri.go:89] found id: ""
	I1205 21:46:02.576477  357912 logs.go:282] 2 containers: [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa]
	I1205 21:46:02.576546  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.580650  357912 ssh_runner.go:195] Run: which crictl
	I1205 21:46:02.584677  357912 logs.go:123] Gathering logs for kube-controller-manager [807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a] ...
	I1205 21:46:02.584717  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 807e6454204d49b7b350982e85c10f991633251845ffd44b59b853204089594a"
	I1205 21:46:02.638712  357912 logs.go:123] Gathering logs for storage-provisioner [7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4] ...
	I1205 21:46:02.638753  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7befce79ea8341c670d705cad793d24673d4069c58c57d13a4829f782f579fd4"
	I1205 21:46:02.677464  357912 logs.go:123] Gathering logs for storage-provisioner [37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa] ...
	I1205 21:46:02.677501  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 37f783b4a3402289128acbdb75a332cf83c56de645857c975785f41a48c503aa"
	I1205 21:46:02.718014  357912 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:46:02.718049  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1205 21:46:02.828314  357912 logs.go:123] Gathering logs for kube-apiserver [079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828] ...
	I1205 21:46:02.828360  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 079fc145d351580076c0cada553c685c4abda058b3c93e361a3aabac9eeaa828"
	I1205 21:46:02.881584  357912 logs.go:123] Gathering logs for etcd [035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7] ...
	I1205 21:46:02.881629  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 035df011d5399ca8aae3553f5fa9edcb145822ece94cff48ada1cc95637227c7"
	I1205 21:46:02.928082  357912 logs.go:123] Gathering logs for coredns [d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6] ...
	I1205 21:46:02.928120  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d4ac290ffeedd4d798e8b7e952172bfbc188cdd4a5dfd208af15480b626795b6"
	I1205 21:46:02.963962  357912 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:46:02.963997  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:46:03.347451  357912 logs.go:123] Gathering logs for container status ...
	I1205 21:46:03.347501  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:46:03.389942  357912 logs.go:123] Gathering logs for kubelet ...
	I1205 21:46:03.389991  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:46:03.459121  357912 logs.go:123] Gathering logs for dmesg ...
	I1205 21:46:03.459168  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1205 21:46:03.480556  357912 logs.go:123] Gathering logs for kube-scheduler [c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5] ...
	I1205 21:46:03.480592  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 c0ddf1d7f97dae7db7f45c94a4b14ca1109961e3fd3102272084ddb8fb4077f5"
	I1205 21:46:03.519661  357912 logs.go:123] Gathering logs for kube-proxy [963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d] ...
	I1205 21:46:03.519699  357912 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 963fc5fe0f7ee3eaea630d0a33e8619462bdad41ed336c8580184239f4bc848d"
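
The diagnostics pass above is just a sequence of shell commands run over SSH. Stripped of the minikube plumbing, the same data can be collected by hand with roughly the following (the container ID is a placeholder for whatever `crictl ps` returns):

	# list a component's containers, then tail its logs
	sudo crictl ps -a --quiet --name=kube-proxy
	sudo crictl logs --tail 400 <container-id>
	# runtime, kubelet and kernel logs
	sudo journalctl -u crio -n 400
	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	# cluster-side view
	sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
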
	I1205 21:46:06.063263  357912 system_pods.go:59] 8 kube-system pods found
	I1205 21:46:06.063309  357912 system_pods.go:61] "coredns-7c65d6cfc9-mll8z" [fcea0826-1093-43ce-87d0-26fb19447609] Running
	I1205 21:46:06.063317  357912 system_pods.go:61] "etcd-default-k8s-diff-port-751353" [dbade41d-2f53-45d5-aeb6-9b7df2565ef6] Running
	I1205 21:46:06.063327  357912 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-751353" [b94221d1-ab02-4406-9f34-156813ddfd4b] Running
	I1205 21:46:06.063334  357912 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-751353" [70669ec2-cddf-4ec9-a3d6-ee7cab8cd75c] Running
	I1205 21:46:06.063338  357912 system_pods.go:61] "kube-proxy-b4ws4" [d2620959-e3e4-4575-af26-243207a83495] Running
	I1205 21:46:06.063344  357912 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-751353" [4a519b64-0ddd-425c-a8d6-de52e3508f80] Running
	I1205 21:46:06.063352  357912 system_pods.go:61] "metrics-server-6867b74b74-xb867" [6ac4cc31-ed56-44b9-9a83-76296436bc34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:46:06.063358  357912 system_pods.go:61] "storage-provisioner" [aabf9cc9-c416-4db2-97b0-23533dd76c28] Running
	I1205 21:46:06.063369  357912 system_pods.go:74] duration metric: took 3.808675994s to wait for pod list to return data ...
	I1205 21:46:06.063380  357912 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:46:06.066095  357912 default_sa.go:45] found service account: "default"
	I1205 21:46:06.066120  357912 default_sa.go:55] duration metric: took 2.733262ms for default service account to be created ...
	I1205 21:46:06.066128  357912 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:46:06.070476  357912 system_pods.go:86] 8 kube-system pods found
	I1205 21:46:06.070503  357912 system_pods.go:89] "coredns-7c65d6cfc9-mll8z" [fcea0826-1093-43ce-87d0-26fb19447609] Running
	I1205 21:46:06.070509  357912 system_pods.go:89] "etcd-default-k8s-diff-port-751353" [dbade41d-2f53-45d5-aeb6-9b7df2565ef6] Running
	I1205 21:46:06.070513  357912 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-751353" [b94221d1-ab02-4406-9f34-156813ddfd4b] Running
	I1205 21:46:06.070516  357912 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-751353" [70669ec2-cddf-4ec9-a3d6-ee7cab8cd75c] Running
	I1205 21:46:06.070520  357912 system_pods.go:89] "kube-proxy-b4ws4" [d2620959-e3e4-4575-af26-243207a83495] Running
	I1205 21:46:06.070523  357912 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-751353" [4a519b64-0ddd-425c-a8d6-de52e3508f80] Running
	I1205 21:46:06.070531  357912 system_pods.go:89] "metrics-server-6867b74b74-xb867" [6ac4cc31-ed56-44b9-9a83-76296436bc34] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:46:06.070536  357912 system_pods.go:89] "storage-provisioner" [aabf9cc9-c416-4db2-97b0-23533dd76c28] Running
	I1205 21:46:06.070544  357912 system_pods.go:126] duration metric: took 4.410448ms to wait for k8s-apps to be running ...
	I1205 21:46:06.070553  357912 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:46:06.070614  357912 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:46:06.085740  357912 system_svc.go:56] duration metric: took 15.17952ms WaitForService to wait for kubelet
	I1205 21:46:06.085771  357912 kubeadm.go:582] duration metric: took 4m25.799061755s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:46:06.085796  357912 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:46:06.088851  357912 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:46:06.088873  357912 node_conditions.go:123] node cpu capacity is 2
	I1205 21:46:06.088887  357912 node_conditions.go:105] duration metric: took 3.087287ms to run NodePressure ...
	I1205 21:46:06.088900  357912 start.go:241] waiting for startup goroutines ...
	I1205 21:46:06.088906  357912 start.go:246] waiting for cluster config update ...
	I1205 21:46:06.088919  357912 start.go:255] writing updated cluster config ...
	I1205 21:46:06.089253  357912 ssh_runner.go:195] Run: rm -f paused
	I1205 21:46:06.141619  357912 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:46:06.143538  357912 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-751353" cluster and "default" namespace by default
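
At this point the default-k8s-diff-port-751353 profile is considered started: the pod list, default service account, kubelet service and node-pressure checks above all passed, and only the metrics-server pod is still pending. Roughly the same verification can be reproduced by hand (profile and context names taken from the log):

	kubectl --context default-k8s-diff-port-751353 -n kube-system get pods
	minikube -p default-k8s-diff-port-751353 ssh -- sudo systemctl is-active kubelet
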
	I1205 21:46:04.108628  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:06.108805  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:03.987070  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:06.484360  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:08.608534  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:11.107516  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:08.485291  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:10.984391  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:13.108040  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:15.607861  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:13.484442  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:15.484501  357831 pod_ready.go:103] pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:17.478619  357831 pod_ready.go:82] duration metric: took 4m0.00079651s for pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace to be "Ready" ...
	E1205 21:46:17.478648  357831 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-7xm6l" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 21:46:17.478669  357831 pod_ready.go:39] duration metric: took 4m12.054745084s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:46:17.478700  357831 kubeadm.go:597] duration metric: took 4m55.174067413s to restartPrimaryControlPlane
	W1205 21:46:17.478757  357831 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 21:46:17.478794  357831 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
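
Here the extra wait on metrics-server-6867b74b74-7xm6l times out after 4m0s, minikube gives up on the restarted control plane, and it falls back to a full `kubeadm reset` (the command above), to be followed by a fresh `kubeadm init`. When a pod is stuck not-Ready like this, a quick manual look at the reason, before the reset wipes the state, is typically something like:

	kubectl -n kube-system describe pod metrics-server-6867b74b74-7xm6l
	kubectl -n kube-system get events --sort-by=.lastTimestamp | tail
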
	I1205 21:46:17.608486  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:20.107816  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:22.108413  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:24.608157  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:27.109329  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:29.608127  357296 pod_ready.go:103] pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace has status "Ready":"False"
	I1205 21:46:30.101360  357296 pod_ready.go:82] duration metric: took 4m0.000121506s for pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace to be "Ready" ...
	E1205 21:46:30.101395  357296 pod_ready.go:67] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-6867b74b74-dggmv" in "kube-system" namespace to be "Ready" (will not retry!)
	I1205 21:46:30.101417  357296 pod_ready.go:39] duration metric: took 4m9.523665884s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:46:30.101449  357296 kubeadm.go:597] duration metric: took 4m18.570527556s to restartPrimaryControlPlane
	W1205 21:46:30.101510  357296 out.go:270] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1205 21:46:30.101539  357296 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:46:38.501720  358357 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 21:46:38.502250  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:46:38.502440  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
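
These kubelet-check failures come from process 358357, another cluster start running in parallel, and mean the kubelet's health endpoint on port 10248 is not answering yet. The probe kubeadm describes can be run directly on the node, together with a service status check:

	curl -sSL http://localhost:10248/healthz
	sudo systemctl status kubelet --no-pager
	sudo journalctl -u kubelet -n 50
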
	I1205 21:46:43.619373  357831 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.140547336s)
	I1205 21:46:43.619459  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:46:43.641806  357831 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:46:43.655964  357831 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:46:43.669647  357831 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:46:43.669670  357831 kubeadm.go:157] found existing configuration files:
	
	I1205 21:46:43.669718  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:46:43.681685  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:46:43.681774  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:46:43.700247  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:46:43.718376  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:46:43.718464  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:46:43.736153  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:46:43.746027  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:46:43.746101  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:46:43.756294  357831 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:46:43.765644  357831 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:46:43.765723  357831 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
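
After the reset, none of the kubeconfig files under /etc/kubernetes exist any more, so each grep fails and minikube simply removes the (absent) files before re-running kubeadm init. The whole cleanup loop above is roughly equivalent to:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  sudo grep -q https://control-plane.minikube.internal:8443 /etc/kubernetes/$f || sudo rm -f /etc/kubernetes/$f
	done
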
	I1205 21:46:43.776011  357831 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:46:43.821666  357831 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 21:46:43.821773  357831 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:46:43.915091  357831 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:46:43.915226  357831 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:46:43.915356  357831 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 21:46:43.923305  357831 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:46:43.924984  357831 out.go:235]   - Generating certificates and keys ...
	I1205 21:46:43.925071  357831 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:46:43.925133  357831 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:46:43.925211  357831 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:46:43.925298  357831 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:46:43.925410  357831 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:46:43.925490  357831 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:46:43.925585  357831 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:46:43.925687  357831 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:46:43.925806  357831 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:46:43.925915  357831 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:46:43.925978  357831 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:46:43.926051  357831 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:46:44.035421  357831 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:46:44.451260  357831 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 21:46:44.816773  357831 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:46:44.923048  357831 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:46:45.045983  357831 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:46:45.046651  357831 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:46:45.049375  357831 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:46:43.502826  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:46:43.503045  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:46:45.051123  357831 out.go:235]   - Booting up control plane ...
	I1205 21:46:45.051270  357831 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:46:45.051407  357831 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:46:45.051498  357831 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:46:45.069011  357831 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:46:45.075630  357831 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:46:45.075703  357831 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:46:45.207048  357831 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 21:46:45.207215  357831 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 21:46:46.208858  357831 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001818315s
	I1205 21:46:46.208985  357831 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 21:46:50.711424  357831 kubeadm.go:310] [api-check] The API server is healthy after 4.502481614s
	I1205 21:46:50.725080  357831 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 21:46:50.745839  357831 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 21:46:50.774902  357831 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 21:46:50.775169  357831 kubeadm.go:310] [mark-control-plane] Marking the node no-preload-500648 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 21:46:50.795250  357831 kubeadm.go:310] [bootstrap-token] Using token: o2vi7b.yhkmrcpvplzqpha9
	I1205 21:46:50.796742  357831 out.go:235]   - Configuring RBAC rules ...
	I1205 21:46:50.796960  357831 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 21:46:50.804445  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 21:46:50.818218  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 21:46:50.823638  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 21:46:50.827946  357831 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 21:46:50.832291  357831 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 21:46:51.119777  357831 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 21:46:51.563750  357831 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 21:46:52.124884  357831 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 21:46:52.124922  357831 kubeadm.go:310] 
	I1205 21:46:52.125000  357831 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 21:46:52.125010  357831 kubeadm.go:310] 
	I1205 21:46:52.125089  357831 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 21:46:52.125099  357831 kubeadm.go:310] 
	I1205 21:46:52.125132  357831 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 21:46:52.125208  357831 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 21:46:52.125321  357831 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 21:46:52.125343  357831 kubeadm.go:310] 
	I1205 21:46:52.125447  357831 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 21:46:52.125475  357831 kubeadm.go:310] 
	I1205 21:46:52.125547  357831 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 21:46:52.125559  357831 kubeadm.go:310] 
	I1205 21:46:52.125641  357831 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 21:46:52.125734  357831 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 21:46:52.125806  357831 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 21:46:52.125814  357831 kubeadm.go:310] 
	I1205 21:46:52.125887  357831 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 21:46:52.126025  357831 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 21:46:52.126039  357831 kubeadm.go:310] 
	I1205 21:46:52.126132  357831 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token o2vi7b.yhkmrcpvplzqpha9 \
	I1205 21:46:52.126230  357831 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 \
	I1205 21:46:52.126254  357831 kubeadm.go:310] 	--control-plane 
	I1205 21:46:52.126269  357831 kubeadm.go:310] 
	I1205 21:46:52.126406  357831 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 21:46:52.126437  357831 kubeadm.go:310] 
	I1205 21:46:52.126524  357831 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token o2vi7b.yhkmrcpvplzqpha9 \
	I1205 21:46:52.126615  357831 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 
	I1205 21:46:52.127299  357831 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
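
kubeadm init succeeds this time: the kubelet comes up healthy in about a second and the API server answers within ~4.5s. The post-init steps kubeadm prints are the standard ones; on the node they amount to:

	mkdir -p $HOME/.kube
	sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	sudo chown $(id -u):$(id -g) $HOME/.kube/config
	# or, as root:
	export KUBECONFIG=/etc/kubernetes/admin.conf
	# per the [WARNING Service-Kubelet] above:
	sudo systemctl enable kubelet.service
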
	I1205 21:46:52.127360  357831 cni.go:84] Creating CNI manager for ""
	I1205 21:46:52.127380  357831 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:46:52.130084  357831 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:46:52.131504  357831 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:46:52.142489  357831 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
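
Because the kvm2 driver is paired with the crio runtime, minikube writes a single bridge CNI config (1-k8s.conflist, 496 bytes) into /etc/cni/net.d instead of deploying a pod-network add-on. The exact file contents are not shown in the log; a standard bridge-plugin conflist of the kind CRI-O picks up looks roughly like this sketch (subnet, names and field values are illustrative, not the file minikube actually wrote):

	sudo mkdir -p /etc/cni/net.d
	sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	{
	  "cniVersion": "0.3.1",
	  "name": "bridge",
	  "plugins": [
	    { "type": "bridge", "bridge": "bridge", "addIf": "true",
	      "isDefaultGateway": true, "ipMasq": true, "hairpinMode": true,
	      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" } },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}
	EOF
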
	I1205 21:46:52.165689  357831 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:46:52.165813  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:52.165817  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-500648 minikube.k8s.io/updated_at=2024_12_05T21_46_52_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=no-preload-500648 minikube.k8s.io/primary=true
	I1205 21:46:52.194084  357831 ops.go:34] apiserver oom_adj: -16
	I1205 21:46:52.342692  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:52.843802  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:53.503222  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:46:53.503418  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:46:53.342932  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:53.843712  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:54.343785  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:54.843090  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:55.342889  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:55.843250  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:56.343676  357831 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:46:56.452001  357831 kubeadm.go:1113] duration metric: took 4.286277257s to wait for elevateKubeSystemPrivileges
	I1205 21:46:56.452048  357831 kubeadm.go:394] duration metric: took 5m34.195010212s to StartCluster
	I1205 21:46:56.452076  357831 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:46:56.452204  357831 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:46:56.454793  357831 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:46:56.455206  357831 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.50.141 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:46:56.455333  357831 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:46:56.455476  357831 addons.go:69] Setting storage-provisioner=true in profile "no-preload-500648"
	I1205 21:46:56.455480  357831 config.go:182] Loaded profile config "no-preload-500648": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:46:56.455502  357831 addons.go:234] Setting addon storage-provisioner=true in "no-preload-500648"
	W1205 21:46:56.455514  357831 addons.go:243] addon storage-provisioner should already be in state true
	I1205 21:46:56.455528  357831 addons.go:69] Setting default-storageclass=true in profile "no-preload-500648"
	I1205 21:46:56.455559  357831 host.go:66] Checking if "no-preload-500648" exists ...
	I1205 21:46:56.455544  357831 addons.go:69] Setting metrics-server=true in profile "no-preload-500648"
	I1205 21:46:56.455585  357831 addons.go:234] Setting addon metrics-server=true in "no-preload-500648"
	W1205 21:46:56.455599  357831 addons.go:243] addon metrics-server should already be in state true
	I1205 21:46:56.455646  357831 host.go:66] Checking if "no-preload-500648" exists ...
	I1205 21:46:56.455564  357831 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-500648"
	I1205 21:46:56.456041  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.456085  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.456090  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.456129  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.456139  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.456201  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.456945  357831 out.go:177] * Verifying Kubernetes components...
	I1205 21:46:56.462035  357831 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:46:56.474102  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35145
	I1205 21:46:56.474771  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.475414  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.475442  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.475459  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36489
	I1205 21:46:56.475974  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.476137  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.476569  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.476612  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.476693  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.476706  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.477058  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.477252  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.477388  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36469
	I1205 21:46:56.477924  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.478472  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.478498  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.478910  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.479488  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.479537  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.481716  357831 addons.go:234] Setting addon default-storageclass=true in "no-preload-500648"
	W1205 21:46:56.481735  357831 addons.go:243] addon default-storageclass should already be in state true
	I1205 21:46:56.481768  357831 host.go:66] Checking if "no-preload-500648" exists ...
	I1205 21:46:56.482186  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.482241  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.497613  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43261
	I1205 21:46:56.499026  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.500026  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.500053  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.501992  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.502774  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.503014  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37339
	I1205 21:46:56.503560  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.504199  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.504220  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.504720  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.504930  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.506107  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:46:56.506961  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:46:56.508481  357831 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:46:56.509688  357831 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 21:46:56.428849  357296 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (26.327265456s)
	I1205 21:46:56.428959  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:46:56.445569  357296 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1205 21:46:56.458431  357296 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:46:56.478171  357296 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:46:56.478202  357296 kubeadm.go:157] found existing configuration files:
	
	I1205 21:46:56.478252  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:46:56.492246  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:46:56.492317  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:46:56.511252  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:46:56.529865  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:46:56.529993  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:46:56.542465  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:46:56.554125  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:46:56.554201  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:46:56.564805  357296 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:46:56.574418  357296 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:46:56.574509  357296 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:46:56.587684  357296 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:46:56.643896  357296 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1205 21:46:56.643994  357296 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:46:56.758721  357296 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:46:56.758878  357296 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:46:56.759002  357296 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1205 21:46:56.770017  357296 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:46:56.771897  357296 out.go:235]   - Generating certificates and keys ...
	I1205 21:46:56.772014  357296 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:46:56.772097  357296 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:46:56.772211  357296 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:46:56.772312  357296 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:46:56.772411  357296 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:46:56.772485  357296 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:46:56.772569  357296 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:46:56.772701  357296 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:46:56.772839  357296 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:46:56.772978  357296 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:46:56.773044  357296 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:46:56.773122  357296 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:46:57.097605  357296 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:46:57.252307  357296 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1205 21:46:56.510816  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41335
	I1205 21:46:56.511503  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.511959  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.511975  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.512788  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.513412  357831 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:46:56.513449  357831 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:46:56.514695  357831 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:46:56.514710  357831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 21:46:56.514728  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:46:56.515562  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 21:46:56.515580  357831 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 21:46:56.515606  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:46:56.519790  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.520365  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.521033  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:46:56.521059  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.521366  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:46:56.521709  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:46:56.522251  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:46:56.522340  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:46:56.522357  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.522563  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:46:56.523091  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:46:56.523374  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:46:56.523546  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:46:56.523751  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:46:56.535368  357831 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42975
	I1205 21:46:56.535890  357831 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:46:56.536613  357831 main.go:141] libmachine: Using API Version  1
	I1205 21:46:56.536640  357831 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:46:56.537046  357831 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:46:56.537264  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetState
	I1205 21:46:56.539328  357831 main.go:141] libmachine: (no-preload-500648) Calling .DriverName
	I1205 21:46:56.539566  357831 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 21:46:56.539582  357831 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 21:46:56.539601  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHHostname
	I1205 21:46:56.543910  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.544687  357831 main.go:141] libmachine: (no-preload-500648) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:98:f0:c5", ip: ""} in network mk-no-preload-500648: {Iface:virbr2 ExpiryTime:2024-12-05 22:40:55 +0000 UTC Type:0 Mac:52:54:00:98:f0:c5 Iaid: IPaddr:192.168.50.141 Prefix:24 Hostname:no-preload-500648 Clientid:01:52:54:00:98:f0:c5}
	I1205 21:46:56.544721  357831 main.go:141] libmachine: (no-preload-500648) DBG | domain no-preload-500648 has defined IP address 192.168.50.141 and MAC address 52:54:00:98:f0:c5 in network mk-no-preload-500648
	I1205 21:46:56.544779  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHPort
	I1205 21:46:56.544991  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHKeyPath
	I1205 21:46:56.545101  357831 main.go:141] libmachine: (no-preload-500648) Calling .GetSSHUsername
	I1205 21:46:56.545227  357831 sshutil.go:53] new ssh client: &{IP:192.168.50.141 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/no-preload-500648/id_rsa Username:docker}
	I1205 21:46:56.703959  357831 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:46:56.727549  357831 node_ready.go:35] waiting up to 6m0s for node "no-preload-500648" to be "Ready" ...
	I1205 21:46:56.782087  357831 node_ready.go:49] node "no-preload-500648" has status "Ready":"True"
	I1205 21:46:56.782124  357831 node_ready.go:38] duration metric: took 54.531096ms for node "no-preload-500648" to be "Ready" ...
	I1205 21:46:56.782138  357831 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
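
Once the kubelet is started, the node reports Ready within 54ms of the first check, and minikube moves on to waiting for the system-critical pods listed above. The same readiness gates can be checked with kubectl's own wait primitives, for example (context and node name taken from the log):

	kubectl --context no-preload-500648 wait --for=condition=ready node/no-preload-500648 --timeout=6m
	kubectl --context no-preload-500648 -n kube-system wait --for=condition=ready pod -l k8s-app=kube-dns --timeout=6m
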
	I1205 21:46:56.826592  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 21:46:56.826630  357831 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 21:46:56.828646  357831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 21:46:56.829857  357831 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace to be "Ready" ...
	I1205 21:46:56.866720  357831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:46:56.903318  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 21:46:56.903355  357831 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 21:46:57.007535  357831 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:46:57.007573  357831 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 21:46:57.100723  357831 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:46:57.134239  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.134279  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.134710  357831 main.go:141] libmachine: (no-preload-500648) DBG | Closing plugin on server side
	I1205 21:46:57.134711  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.134770  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.134785  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.134793  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.135032  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.135053  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.146695  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.146730  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.147103  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.147154  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.625311  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.625353  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.625696  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.625755  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.625793  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.625805  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.625698  357831 main.go:141] libmachine: (no-preload-500648) DBG | Closing plugin on server side
	I1205 21:46:57.626115  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.626144  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.907526  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.907557  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.907895  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.907911  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.907920  357831 main.go:141] libmachine: Making call to close driver server
	I1205 21:46:57.907927  357831 main.go:141] libmachine: (no-preload-500648) Calling .Close
	I1205 21:46:57.908170  357831 main.go:141] libmachine: (no-preload-500648) DBG | Closing plugin on server side
	I1205 21:46:57.908202  357831 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:46:57.908235  357831 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:46:57.908260  357831 addons.go:475] Verifying addon metrics-server=true in "no-preload-500648"
	I1205 21:46:57.909815  357831 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
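
Enabling the addons boils down to applying the bundled manifests that were copied to /etc/kubernetes/addons on the node; the three kubectl apply calls above, consolidated, are:

	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.31.2/kubectl apply \
	    -f /etc/kubernetes/addons/metrics-apiservice.yaml \
	    -f /etc/kubernetes/addons/metrics-server-deployment.yaml \
	    -f /etc/kubernetes/addons/metrics-server-rbac.yaml \
	    -f /etc/kubernetes/addons/metrics-server-service.yaml
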
	I1205 21:46:57.605825  357296 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:46:57.683035  357296 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:46:57.977494  357296 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:46:57.977852  357296 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:46:57.980442  357296 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:46:57.982293  357296 out.go:235]   - Booting up control plane ...
	I1205 21:46:57.982435  357296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:46:57.982555  357296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:46:57.982745  357296 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:46:58.002995  357296 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:46:58.009140  357296 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:46:58.009256  357296 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:46:58.138869  357296 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1205 21:46:58.139045  357296 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1205 21:46:58.639981  357296 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.388842ms
	I1205 21:46:58.640142  357296 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1205 21:46:57.911073  357831 addons.go:510] duration metric: took 1.455746374s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I1205 21:46:58.838170  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:00.337951  357831 pod_ready.go:93] pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:00.337987  357831 pod_ready.go:82] duration metric: took 3.508095495s for pod "coredns-7c65d6cfc9-6gw87" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:00.338002  357831 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:02.345422  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:03.641918  357296 kubeadm.go:310] [api-check] The API server is healthy after 5.001977261s
	I1205 21:47:03.660781  357296 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1205 21:47:03.675811  357296 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1205 21:47:03.729810  357296 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1205 21:47:03.730021  357296 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-425614 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1205 21:47:03.746963  357296 kubeadm.go:310] [bootstrap-token] Using token: b8c9g8.26tr6ftn8ovs2kwi
	I1205 21:47:03.748213  357296 out.go:235]   - Configuring RBAC rules ...
	I1205 21:47:03.748373  357296 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1205 21:47:03.755934  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1205 21:47:03.770479  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1205 21:47:03.775661  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1205 21:47:03.783490  357296 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1205 21:47:03.789562  357296 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1205 21:47:04.049714  357296 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1205 21:47:04.486306  357296 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1205 21:47:05.053561  357296 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1205 21:47:05.053590  357296 kubeadm.go:310] 
	I1205 21:47:05.053708  357296 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1205 21:47:05.053738  357296 kubeadm.go:310] 
	I1205 21:47:05.053846  357296 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1205 21:47:05.053868  357296 kubeadm.go:310] 
	I1205 21:47:05.053915  357296 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1205 21:47:05.053997  357296 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1205 21:47:05.054068  357296 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1205 21:47:05.054078  357296 kubeadm.go:310] 
	I1205 21:47:05.054160  357296 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1205 21:47:05.054170  357296 kubeadm.go:310] 
	I1205 21:47:05.054239  357296 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1205 21:47:05.054248  357296 kubeadm.go:310] 
	I1205 21:47:05.054338  357296 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1205 21:47:05.054449  357296 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1205 21:47:05.054543  357296 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1205 21:47:05.054553  357296 kubeadm.go:310] 
	I1205 21:47:05.054660  357296 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1205 21:47:05.054796  357296 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1205 21:47:05.054822  357296 kubeadm.go:310] 
	I1205 21:47:05.054933  357296 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token b8c9g8.26tr6ftn8ovs2kwi \
	I1205 21:47:05.055054  357296 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 \
	I1205 21:47:05.055090  357296 kubeadm.go:310] 	--control-plane 
	I1205 21:47:05.055098  357296 kubeadm.go:310] 
	I1205 21:47:05.055194  357296 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1205 21:47:05.055206  357296 kubeadm.go:310] 
	I1205 21:47:05.055314  357296 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token b8c9g8.26tr6ftn8ovs2kwi \
	I1205 21:47:05.055451  357296 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:866fe1ad02aa39e4ff851d5daf02d95c9877aa8ade45bb47593c206ab1df3b82 
	I1205 21:47:05.056406  357296 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:47:05.056455  357296 cni.go:84] Creating CNI manager for ""
	I1205 21:47:05.056466  357296 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 21:47:05.058934  357296 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I1205 21:47:05.060223  357296 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1205 21:47:05.072177  357296 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1205 21:47:05.094496  357296 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1205 21:47:05.094587  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:05.094625  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-425614 minikube.k8s.io/updated_at=2024_12_05T21_47_05_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=b47d04014329c55dc4f6ec6dd318af27b5175843 minikube.k8s.io/name=embed-certs-425614 minikube.k8s.io/primary=true
	I1205 21:47:05.305636  357296 ops.go:34] apiserver oom_adj: -16
	I1205 21:47:05.305777  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:05.806175  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:06.306904  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:06.806069  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:07.306356  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:04.849777  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:07.345961  357831 pod_ready.go:103] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:07.847289  357831 pod_ready.go:93] pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.847323  357831 pod_ready.go:82] duration metric: took 7.509312906s for pod "coredns-7c65d6cfc9-tmd2t" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.847334  357831 pod_ready.go:79] waiting up to 6m0s for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.853980  357831 pod_ready.go:93] pod "etcd-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.854016  357831 pod_ready.go:82] duration metric: took 6.672926ms for pod "etcd-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.854030  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.861465  357831 pod_ready.go:93] pod "kube-apiserver-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.861502  357831 pod_ready.go:82] duration metric: took 7.461726ms for pod "kube-apiserver-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.861517  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.867007  357831 pod_ready.go:93] pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.867035  357831 pod_ready.go:82] duration metric: took 5.509386ms for pod "kube-controller-manager-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.867048  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-98xqk" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.872882  357831 pod_ready.go:93] pod "kube-proxy-98xqk" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:07.872917  357831 pod_ready.go:82] duration metric: took 5.859646ms for pod "kube-proxy-98xqk" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:07.872932  357831 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:08.243619  357831 pod_ready.go:93] pod "kube-scheduler-no-preload-500648" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:08.243654  357831 pod_ready.go:82] duration metric: took 370.71203ms for pod "kube-scheduler-no-preload-500648" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:08.243666  357831 pod_ready.go:39] duration metric: took 11.461510993s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:47:08.243744  357831 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:47:08.243826  357831 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:47:08.260473  357831 api_server.go:72] duration metric: took 11.805209892s to wait for apiserver process to appear ...
	I1205 21:47:08.260511  357831 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:47:08.260538  357831 api_server.go:253] Checking apiserver healthz at https://192.168.50.141:8443/healthz ...
	I1205 21:47:08.264975  357831 api_server.go:279] https://192.168.50.141:8443/healthz returned 200:
	ok
	I1205 21:47:08.266178  357831 api_server.go:141] control plane version: v1.31.2
	I1205 21:47:08.266206  357831 api_server.go:131] duration metric: took 5.687994ms to wait for apiserver health ...
	I1205 21:47:08.266214  357831 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:47:08.446775  357831 system_pods.go:59] 9 kube-system pods found
	I1205 21:47:08.446811  357831 system_pods.go:61] "coredns-7c65d6cfc9-6gw87" [5551f12d-28e2-4abc-aa12-df5e94a50df9] Running
	I1205 21:47:08.446817  357831 system_pods.go:61] "coredns-7c65d6cfc9-tmd2t" [e3e98611-66c3-4647-8870-bff5ff6ec596] Running
	I1205 21:47:08.446821  357831 system_pods.go:61] "etcd-no-preload-500648" [74521d40-5021-4ced-b38c-526c57f76ef1] Running
	I1205 21:47:08.446824  357831 system_pods.go:61] "kube-apiserver-no-preload-500648" [c145b867-1112-495e-bbe4-a95582f41190] Running
	I1205 21:47:08.446828  357831 system_pods.go:61] "kube-controller-manager-no-preload-500648" [534c1c28-2a5c-411d-8d26-1636d92ed794] Running
	I1205 21:47:08.446831  357831 system_pods.go:61] "kube-proxy-98xqk" [4b383ba3-46c2-45df-9035-270593e44817] Running
	I1205 21:47:08.446834  357831 system_pods.go:61] "kube-scheduler-no-preload-500648" [7d088cd2-8ba3-4b3b-ab99-233ff13e2710] Running
	I1205 21:47:08.446841  357831 system_pods.go:61] "metrics-server-6867b74b74-ftmzl" [c541d531-1622-4528-af4c-f6147f47e8f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:08.446881  357831 system_pods.go:61] "storage-provisioner" [62bd3876-3f92-4cc1-9e07-860628e8a746] Running
	I1205 21:47:08.446887  357831 system_pods.go:74] duration metric: took 180.667886ms to wait for pod list to return data ...
	I1205 21:47:08.446895  357831 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:47:08.643352  357831 default_sa.go:45] found service account: "default"
	I1205 21:47:08.643389  357831 default_sa.go:55] duration metric: took 196.485646ms for default service account to be created ...
	I1205 21:47:08.643405  357831 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:47:08.847094  357831 system_pods.go:86] 9 kube-system pods found
	I1205 21:47:08.847129  357831 system_pods.go:89] "coredns-7c65d6cfc9-6gw87" [5551f12d-28e2-4abc-aa12-df5e94a50df9] Running
	I1205 21:47:08.847136  357831 system_pods.go:89] "coredns-7c65d6cfc9-tmd2t" [e3e98611-66c3-4647-8870-bff5ff6ec596] Running
	I1205 21:47:08.847140  357831 system_pods.go:89] "etcd-no-preload-500648" [74521d40-5021-4ced-b38c-526c57f76ef1] Running
	I1205 21:47:08.847144  357831 system_pods.go:89] "kube-apiserver-no-preload-500648" [c145b867-1112-495e-bbe4-a95582f41190] Running
	I1205 21:47:08.847147  357831 system_pods.go:89] "kube-controller-manager-no-preload-500648" [534c1c28-2a5c-411d-8d26-1636d92ed794] Running
	I1205 21:47:08.847150  357831 system_pods.go:89] "kube-proxy-98xqk" [4b383ba3-46c2-45df-9035-270593e44817] Running
	I1205 21:47:08.847153  357831 system_pods.go:89] "kube-scheduler-no-preload-500648" [7d088cd2-8ba3-4b3b-ab99-233ff13e2710] Running
	I1205 21:47:08.847162  357831 system_pods.go:89] "metrics-server-6867b74b74-ftmzl" [c541d531-1622-4528-af4c-f6147f47e8f5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:08.847168  357831 system_pods.go:89] "storage-provisioner" [62bd3876-3f92-4cc1-9e07-860628e8a746] Running
	I1205 21:47:08.847181  357831 system_pods.go:126] duration metric: took 203.767291ms to wait for k8s-apps to be running ...
	I1205 21:47:08.847195  357831 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:47:08.847250  357831 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:47:08.862597  357831 system_svc.go:56] duration metric: took 15.382518ms WaitForService to wait for kubelet
	I1205 21:47:08.862633  357831 kubeadm.go:582] duration metric: took 12.407380073s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:47:08.862656  357831 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:47:09.043731  357831 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:47:09.043757  357831 node_conditions.go:123] node cpu capacity is 2
	I1205 21:47:09.043771  357831 node_conditions.go:105] duration metric: took 181.109771ms to run NodePressure ...
	I1205 21:47:09.043784  357831 start.go:241] waiting for startup goroutines ...
	I1205 21:47:09.043791  357831 start.go:246] waiting for cluster config update ...
	I1205 21:47:09.043800  357831 start.go:255] writing updated cluster config ...
	I1205 21:47:09.044059  357831 ssh_runner.go:195] Run: rm -f paused
	I1205 21:47:09.097126  357831 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:47:09.098929  357831 out.go:177] * Done! kubectl is now configured to use "no-preload-500648" cluster and "default" namespace by default
	I1205 21:47:07.806545  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:08.306666  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:08.806027  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:09.306632  357296 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1205 21:47:09.463654  357296 kubeadm.go:1113] duration metric: took 4.369155567s to wait for elevateKubeSystemPrivileges
	I1205 21:47:09.463693  357296 kubeadm.go:394] duration metric: took 4m57.985307568s to StartCluster
	I1205 21:47:09.463727  357296 settings.go:142] acquiring lock: {Name:mkc15a06a1c732c8d48ba66a7f7dc19fed21173d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:47:09.463823  357296 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:47:09.465989  357296 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/kubeconfig: {Name:mkfc81a26dbc7d664716601846798a29e52c0214 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 21:47:09.466324  357296 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.72.8 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1205 21:47:09.466538  357296 config.go:182] Loaded profile config "embed-certs-425614": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:47:09.466462  357296 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1205 21:47:09.466593  357296 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-425614"
	I1205 21:47:09.466605  357296 addons.go:69] Setting default-storageclass=true in profile "embed-certs-425614"
	I1205 21:47:09.466623  357296 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-425614"
	I1205 21:47:09.466625  357296 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-425614"
	W1205 21:47:09.466632  357296 addons.go:243] addon storage-provisioner should already be in state true
	I1205 21:47:09.466670  357296 host.go:66] Checking if "embed-certs-425614" exists ...
	I1205 21:47:09.466598  357296 addons.go:69] Setting metrics-server=true in profile "embed-certs-425614"
	I1205 21:47:09.466700  357296 addons.go:234] Setting addon metrics-server=true in "embed-certs-425614"
	W1205 21:47:09.466713  357296 addons.go:243] addon metrics-server should already be in state true
	I1205 21:47:09.466754  357296 host.go:66] Checking if "embed-certs-425614" exists ...
	I1205 21:47:09.467117  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.467136  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.467168  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.467169  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.467193  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.467287  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.468249  357296 out.go:177] * Verifying Kubernetes components...
	I1205 21:47:09.471163  357296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1205 21:47:09.485298  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36079
	I1205 21:47:09.485497  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39781
	I1205 21:47:09.485948  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.486029  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.486534  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.486563  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.486657  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.486685  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.486742  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40043
	I1205 21:47:09.486978  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.487032  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.487232  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.487236  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.487624  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.487674  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.487789  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.487833  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.488214  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.488851  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.488896  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.491055  357296 addons.go:234] Setting addon default-storageclass=true in "embed-certs-425614"
	W1205 21:47:09.491080  357296 addons.go:243] addon default-storageclass should already be in state true
	I1205 21:47:09.491112  357296 host.go:66] Checking if "embed-certs-425614" exists ...
	I1205 21:47:09.491489  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.491536  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.505783  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42923
	I1205 21:47:09.506685  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.507389  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.507418  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.507849  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.508072  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.509039  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44837
	I1205 21:47:09.509662  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.510051  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:47:09.510539  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.510554  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.510945  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.511175  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.512088  357296 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1205 21:47:09.513011  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:47:09.513375  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1205 21:47:09.513394  357296 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1205 21:47:09.513411  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:47:09.514693  357296 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1205 21:47:09.516172  357296 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:47:09.516192  357296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1205 21:47:09.516216  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:47:09.516960  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.517462  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:47:09.517489  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.517621  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:47:09.517830  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45697
	I1205 21:47:09.518205  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:47:09.518478  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.519298  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.519323  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.519342  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:47:09.519547  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:47:09.520304  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.521019  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.521625  357296 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:47:09.521698  357296 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:47:09.522476  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:47:09.522492  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.522707  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:47:09.522891  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:47:09.523193  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:47:09.523744  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:47:09.540654  357296 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41511
	I1205 21:47:09.541226  357296 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:47:09.541763  357296 main.go:141] libmachine: Using API Version  1
	I1205 21:47:09.541790  357296 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:47:09.542269  357296 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:47:09.542512  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetState
	I1205 21:47:09.544396  357296 main.go:141] libmachine: (embed-certs-425614) Calling .DriverName
	I1205 21:47:09.544676  357296 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1205 21:47:09.544693  357296 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1205 21:47:09.544715  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHHostname
	I1205 21:47:09.548238  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.548523  357296 main.go:141] libmachine: (embed-certs-425614) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d8:bb:db", ip: ""} in network mk-embed-certs-425614: {Iface:virbr4 ExpiryTime:2024-12-05 22:32:41 +0000 UTC Type:0 Mac:52:54:00:d8:bb:db Iaid: IPaddr:192.168.72.8 Prefix:24 Hostname:embed-certs-425614 Clientid:01:52:54:00:d8:bb:db}
	I1205 21:47:09.548562  357296 main.go:141] libmachine: (embed-certs-425614) DBG | domain embed-certs-425614 has defined IP address 192.168.72.8 and MAC address 52:54:00:d8:bb:db in network mk-embed-certs-425614
	I1205 21:47:09.548702  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHPort
	I1205 21:47:09.548931  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHKeyPath
	I1205 21:47:09.549113  357296 main.go:141] libmachine: (embed-certs-425614) Calling .GetSSHUsername
	I1205 21:47:09.549291  357296 sshutil.go:53] new ssh client: &{IP:192.168.72.8 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/embed-certs-425614/id_rsa Username:docker}
	I1205 21:47:09.668547  357296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1205 21:47:09.687925  357296 node_ready.go:35] waiting up to 6m0s for node "embed-certs-425614" to be "Ready" ...
	I1205 21:47:09.697641  357296 node_ready.go:49] node "embed-certs-425614" has status "Ready":"True"
	I1205 21:47:09.697666  357296 node_ready.go:38] duration metric: took 9.705064ms for node "embed-certs-425614" to be "Ready" ...
	I1205 21:47:09.697675  357296 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:47:09.704768  357296 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-7sjzc" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:09.753311  357296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1205 21:47:09.793855  357296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1205 21:47:09.799918  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1205 21:47:09.799943  357296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1205 21:47:09.845109  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1205 21:47:09.845140  357296 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1205 21:47:09.910753  357296 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:47:09.910784  357296 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1205 21:47:09.965476  357296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1205 21:47:10.269090  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269126  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.269096  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269235  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.269576  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.269640  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.269641  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.269620  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.269587  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.269745  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.269758  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269770  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.269664  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.269860  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.270030  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.270047  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.270058  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.270064  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.270071  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.301524  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.301550  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.301895  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.301936  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.926349  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.926377  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.926716  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.926741  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.926752  357296 main.go:141] libmachine: Making call to close driver server
	I1205 21:47:10.926761  357296 main.go:141] libmachine: (embed-certs-425614) Calling .Close
	I1205 21:47:10.926768  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.927106  357296 main.go:141] libmachine: (embed-certs-425614) DBG | Closing plugin on server side
	I1205 21:47:10.927155  357296 main.go:141] libmachine: Successfully made call to close driver server
	I1205 21:47:10.927166  357296 main.go:141] libmachine: Making call to close connection to plugin binary
	I1205 21:47:10.927180  357296 addons.go:475] Verifying addon metrics-server=true in "embed-certs-425614"
	I1205 21:47:10.929085  357296 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I1205 21:47:10.930576  357296 addons.go:510] duration metric: took 1.464128267s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I1205 21:47:11.713166  357296 pod_ready.go:93] pod "coredns-7c65d6cfc9-7sjzc" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:11.713198  357296 pod_ready.go:82] duration metric: took 2.008396953s for pod "coredns-7c65d6cfc9-7sjzc" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:11.713211  357296 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:13.503828  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:47:13.504090  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:47:13.720235  357296 pod_ready.go:103] pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:15.220057  357296 pod_ready.go:93] pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:15.220088  357296 pod_ready.go:82] duration metric: took 3.506868256s for pod "coredns-7c65d6cfc9-qfwx8" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.220102  357296 pod_ready.go:79] waiting up to 6m0s for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.225450  357296 pod_ready.go:93] pod "etcd-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:15.225477  357296 pod_ready.go:82] duration metric: took 5.36753ms for pod "etcd-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.225487  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.231162  357296 pod_ready.go:93] pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:15.231191  357296 pod_ready.go:82] duration metric: took 5.697176ms for pod "kube-apiserver-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:15.231203  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.739452  357296 pod_ready.go:93] pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:16.739480  357296 pod_ready.go:82] duration metric: took 1.508268597s for pod "kube-controller-manager-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.739490  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-k2zgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.745046  357296 pod_ready.go:93] pod "kube-proxy-k2zgx" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:16.745069  357296 pod_ready.go:82] duration metric: took 5.572779ms for pod "kube-proxy-k2zgx" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:16.745077  357296 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:18.752726  357296 pod_ready.go:103] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"False"
	I1205 21:47:19.252349  357296 pod_ready.go:93] pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace has status "Ready":"True"
	I1205 21:47:19.252381  357296 pod_ready.go:82] duration metric: took 2.507297045s for pod "kube-scheduler-embed-certs-425614" in "kube-system" namespace to be "Ready" ...
	I1205 21:47:19.252391  357296 pod_ready.go:39] duration metric: took 9.554704391s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1205 21:47:19.252414  357296 api_server.go:52] waiting for apiserver process to appear ...
	I1205 21:47:19.252484  357296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:47:19.271589  357296 api_server.go:72] duration metric: took 9.805214037s to wait for apiserver process to appear ...
	I1205 21:47:19.271628  357296 api_server.go:88] waiting for apiserver healthz status ...
	I1205 21:47:19.271659  357296 api_server.go:253] Checking apiserver healthz at https://192.168.72.8:8443/healthz ...
	I1205 21:47:19.276411  357296 api_server.go:279] https://192.168.72.8:8443/healthz returned 200:
	ok
	I1205 21:47:19.277872  357296 api_server.go:141] control plane version: v1.31.2
	I1205 21:47:19.277926  357296 api_server.go:131] duration metric: took 6.2875ms to wait for apiserver health ...
	I1205 21:47:19.277941  357296 system_pods.go:43] waiting for kube-system pods to appear ...
	I1205 21:47:19.283899  357296 system_pods.go:59] 9 kube-system pods found
	I1205 21:47:19.283931  357296 system_pods.go:61] "coredns-7c65d6cfc9-7sjzc" [9688302a-e62f-46e6-8182-4639deb5ac5a] Running
	I1205 21:47:19.283937  357296 system_pods.go:61] "coredns-7c65d6cfc9-qfwx8" [d6411440-5d63-4ea4-b1ba-58337dd6bb10] Running
	I1205 21:47:19.283940  357296 system_pods.go:61] "etcd-embed-certs-425614" [2f0ed9d7-d48b-4d68-96bb-5e3f6de80967] Running
	I1205 21:47:19.283944  357296 system_pods.go:61] "kube-apiserver-embed-certs-425614" [86a3b6ce-6b70-4af9-bf4a-2615e7a45c3f] Running
	I1205 21:47:19.283947  357296 system_pods.go:61] "kube-controller-manager-embed-certs-425614" [589710e5-a8e3-48ed-a57a-1fbf0219359a] Running
	I1205 21:47:19.283952  357296 system_pods.go:61] "kube-proxy-k2zgx" [8e5c4695-0631-486d-9f2b-3529f6e808e9] Running
	I1205 21:47:19.283955  357296 system_pods.go:61] "kube-scheduler-embed-certs-425614" [dec1c4cb-9e21-42f0-9e03-0651fdfa35e9] Running
	I1205 21:47:19.283962  357296 system_pods.go:61] "metrics-server-6867b74b74-hghhs" [bc00b855-1cc8-45a1-92cb-b459ef0b40eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:19.283968  357296 system_pods.go:61] "storage-provisioner" [76565dbe-57b0-4d39-abb0-ca6787cd3740] Running
	I1205 21:47:19.283979  357296 system_pods.go:74] duration metric: took 6.030697ms to wait for pod list to return data ...
	I1205 21:47:19.283989  357296 default_sa.go:34] waiting for default service account to be created ...
	I1205 21:47:19.287433  357296 default_sa.go:45] found service account: "default"
	I1205 21:47:19.287469  357296 default_sa.go:55] duration metric: took 3.461011ms for default service account to be created ...
	I1205 21:47:19.287482  357296 system_pods.go:116] waiting for k8s-apps to be running ...
	I1205 21:47:19.420448  357296 system_pods.go:86] 9 kube-system pods found
	I1205 21:47:19.420493  357296 system_pods.go:89] "coredns-7c65d6cfc9-7sjzc" [9688302a-e62f-46e6-8182-4639deb5ac5a] Running
	I1205 21:47:19.420503  357296 system_pods.go:89] "coredns-7c65d6cfc9-qfwx8" [d6411440-5d63-4ea4-b1ba-58337dd6bb10] Running
	I1205 21:47:19.420510  357296 system_pods.go:89] "etcd-embed-certs-425614" [2f0ed9d7-d48b-4d68-96bb-5e3f6de80967] Running
	I1205 21:47:19.420516  357296 system_pods.go:89] "kube-apiserver-embed-certs-425614" [86a3b6ce-6b70-4af9-bf4a-2615e7a45c3f] Running
	I1205 21:47:19.420531  357296 system_pods.go:89] "kube-controller-manager-embed-certs-425614" [589710e5-a8e3-48ed-a57a-1fbf0219359a] Running
	I1205 21:47:19.420536  357296 system_pods.go:89] "kube-proxy-k2zgx" [8e5c4695-0631-486d-9f2b-3529f6e808e9] Running
	I1205 21:47:19.420542  357296 system_pods.go:89] "kube-scheduler-embed-certs-425614" [dec1c4cb-9e21-42f0-9e03-0651fdfa35e9] Running
	I1205 21:47:19.420551  357296 system_pods.go:89] "metrics-server-6867b74b74-hghhs" [bc00b855-1cc8-45a1-92cb-b459ef0b40eb] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1205 21:47:19.420560  357296 system_pods.go:89] "storage-provisioner" [76565dbe-57b0-4d39-abb0-ca6787cd3740] Running
	I1205 21:47:19.420570  357296 system_pods.go:126] duration metric: took 133.080361ms to wait for k8s-apps to be running ...
	I1205 21:47:19.420581  357296 system_svc.go:44] waiting for kubelet service to be running ....
	I1205 21:47:19.420640  357296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:47:19.436855  357296 system_svc.go:56] duration metric: took 16.264247ms WaitForService to wait for kubelet
	I1205 21:47:19.436889  357296 kubeadm.go:582] duration metric: took 9.970523712s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1205 21:47:19.436913  357296 node_conditions.go:102] verifying NodePressure condition ...
	I1205 21:47:19.617690  357296 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1205 21:47:19.617724  357296 node_conditions.go:123] node cpu capacity is 2
	I1205 21:47:19.617737  357296 node_conditions.go:105] duration metric: took 180.817811ms to run NodePressure ...
	I1205 21:47:19.617753  357296 start.go:241] waiting for startup goroutines ...
	I1205 21:47:19.617763  357296 start.go:246] waiting for cluster config update ...
	I1205 21:47:19.617782  357296 start.go:255] writing updated cluster config ...
	I1205 21:47:19.618105  357296 ssh_runner.go:195] Run: rm -f paused
	I1205 21:47:19.670657  357296 start.go:600] kubectl: 1.31.3, cluster: 1.31.2 (minor skew: 0)
	I1205 21:47:19.672596  357296 out.go:177] * Done! kubectl is now configured to use "embed-certs-425614" cluster and "default" namespace by default
	I1205 21:47:53.504952  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:47:53.505292  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:47:53.505331  358357 kubeadm.go:310] 
	I1205 21:47:53.505381  358357 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 21:47:53.505424  358357 kubeadm.go:310] 		timed out waiting for the condition
	I1205 21:47:53.505431  358357 kubeadm.go:310] 
	I1205 21:47:53.505493  358357 kubeadm.go:310] 	This error is likely caused by:
	I1205 21:47:53.505540  358357 kubeadm.go:310] 		- The kubelet is not running
	I1205 21:47:53.505687  358357 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 21:47:53.505696  358357 kubeadm.go:310] 
	I1205 21:47:53.505840  358357 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 21:47:53.505918  358357 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 21:47:53.505969  358357 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 21:47:53.505978  358357 kubeadm.go:310] 
	I1205 21:47:53.506113  358357 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 21:47:53.506224  358357 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 21:47:53.506234  358357 kubeadm.go:310] 
	I1205 21:47:53.506378  358357 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 21:47:53.506488  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 21:47:53.506579  358357 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 21:47:53.506669  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 21:47:53.506680  358357 kubeadm.go:310] 
	I1205 21:47:53.507133  358357 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:47:53.507293  358357 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 21:47:53.507399  358357 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	W1205 21:47:53.507583  358357 out.go:270] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I1205 21:47:53.507635  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I1205 21:47:58.918917  358357 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (5.411249531s)
	I1205 21:47:58.919047  358357 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:47:58.933824  358357 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1205 21:47:58.943937  358357 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1205 21:47:58.943961  358357 kubeadm.go:157] found existing configuration files:
	
	I1205 21:47:58.944019  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1205 21:47:58.953302  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1205 21:47:58.953376  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1205 21:47:58.963401  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1205 21:47:58.973241  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1205 21:47:58.973342  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1205 21:47:58.982980  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1205 21:47:58.992301  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1205 21:47:58.992376  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1205 21:47:59.002794  358357 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1205 21:47:59.012679  358357 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1205 21:47:59.012749  358357 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1205 21:47:59.023775  358357 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1205 21:47:59.094520  358357 kubeadm.go:310] [init] Using Kubernetes version: v1.20.0
	I1205 21:47:59.094668  358357 kubeadm.go:310] [preflight] Running pre-flight checks
	I1205 21:47:59.233248  358357 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1205 21:47:59.233420  358357 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1205 21:47:59.233569  358357 kubeadm.go:310] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1205 21:47:59.418344  358357 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1205 21:47:59.420333  358357 out.go:235]   - Generating certificates and keys ...
	I1205 21:47:59.420467  358357 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1205 21:47:59.420553  358357 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1205 21:47:59.422458  358357 kubeadm.go:310] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1205 21:47:59.422606  358357 kubeadm.go:310] [certs] Using existing front-proxy-ca certificate authority
	I1205 21:47:59.422717  358357 kubeadm.go:310] [certs] Using existing front-proxy-client certificate and key on disk
	I1205 21:47:59.422802  358357 kubeadm.go:310] [certs] Using existing etcd/ca certificate authority
	I1205 21:47:59.422889  358357 kubeadm.go:310] [certs] Using existing etcd/server certificate and key on disk
	I1205 21:47:59.422998  358357 kubeadm.go:310] [certs] Using existing etcd/peer certificate and key on disk
	I1205 21:47:59.423099  358357 kubeadm.go:310] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1205 21:47:59.423222  358357 kubeadm.go:310] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1205 21:47:59.423283  358357 kubeadm.go:310] [certs] Using the existing "sa" key
	I1205 21:47:59.423376  358357 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1205 21:47:59.599862  358357 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1205 21:47:59.763783  358357 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1205 21:47:59.854070  358357 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1205 21:48:00.213384  358357 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1205 21:48:00.228512  358357 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1205 21:48:00.229454  358357 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1205 21:48:00.229505  358357 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1205 21:48:00.369826  358357 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1205 21:48:00.371919  358357 out.go:235]   - Booting up control plane ...
	I1205 21:48:00.372059  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1205 21:48:00.382814  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1205 21:48:00.384284  358357 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1205 21:48:00.385894  358357 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1205 21:48:00.388267  358357 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1205 21:48:40.389474  358357 kubeadm.go:310] [kubelet-check] Initial timeout of 40s passed.
	I1205 21:48:40.389611  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:48:40.389883  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:48:45.390223  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:48:45.390529  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:48:55.390550  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:48:55.390784  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:49:15.391410  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:49:15.391608  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:49:55.392061  358357 kubeadm.go:310] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I1205 21:49:55.392321  358357 kubeadm.go:310] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I1205 21:49:55.392332  358357 kubeadm.go:310] 
	I1205 21:49:55.392403  358357 kubeadm.go:310] 	Unfortunately, an error has occurred:
	I1205 21:49:55.392464  358357 kubeadm.go:310] 		timed out waiting for the condition
	I1205 21:49:55.392485  358357 kubeadm.go:310] 
	I1205 21:49:55.392538  358357 kubeadm.go:310] 	This error is likely caused by:
	I1205 21:49:55.392587  358357 kubeadm.go:310] 		- The kubelet is not running
	I1205 21:49:55.392729  358357 kubeadm.go:310] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1205 21:49:55.392761  358357 kubeadm.go:310] 
	I1205 21:49:55.392882  358357 kubeadm.go:310] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1205 21:49:55.392933  358357 kubeadm.go:310] 		- 'systemctl status kubelet'
	I1205 21:49:55.393025  358357 kubeadm.go:310] 		- 'journalctl -xeu kubelet'
	I1205 21:49:55.393057  358357 kubeadm.go:310] 
	I1205 21:49:55.393186  358357 kubeadm.go:310] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I1205 21:49:55.393293  358357 kubeadm.go:310] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I1205 21:49:55.393303  358357 kubeadm.go:310] 
	I1205 21:49:55.393453  358357 kubeadm.go:310] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I1205 21:49:55.393602  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I1205 21:49:55.393728  358357 kubeadm.go:310] 		Once you have found the failing container, you can inspect its logs with:
	I1205 21:49:55.393827  358357 kubeadm.go:310] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I1205 21:49:55.393841  358357 kubeadm.go:310] 
	I1205 21:49:55.394194  358357 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1205 21:49:55.394317  358357 kubeadm.go:310] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I1205 21:49:55.394473  358357 kubeadm.go:310] To see the stack trace of this error execute with --v=5 or higher
	I1205 21:49:55.394527  358357 kubeadm.go:394] duration metric: took 8m1.54013905s to StartCluster
	I1205 21:49:55.394598  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1205 21:49:55.394662  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1205 21:49:55.433172  358357 cri.go:89] found id: ""
	I1205 21:49:55.433203  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.433212  358357 logs.go:284] No container was found matching "kube-apiserver"
	I1205 21:49:55.433219  358357 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1205 21:49:55.433279  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1205 21:49:55.468595  358357 cri.go:89] found id: ""
	I1205 21:49:55.468631  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.468644  358357 logs.go:284] No container was found matching "etcd"
	I1205 21:49:55.468652  358357 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1205 21:49:55.468747  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1205 21:49:55.505657  358357 cri.go:89] found id: ""
	I1205 21:49:55.505692  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.505701  358357 logs.go:284] No container was found matching "coredns"
	I1205 21:49:55.505709  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1205 21:49:55.505776  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1205 21:49:55.542189  358357 cri.go:89] found id: ""
	I1205 21:49:55.542221  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.542230  358357 logs.go:284] No container was found matching "kube-scheduler"
	I1205 21:49:55.542236  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1205 21:49:55.542303  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1205 21:49:55.575752  358357 cri.go:89] found id: ""
	I1205 21:49:55.575796  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.575810  358357 logs.go:284] No container was found matching "kube-proxy"
	I1205 21:49:55.575818  358357 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1205 21:49:55.575878  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1205 21:49:55.611845  358357 cri.go:89] found id: ""
	I1205 21:49:55.611884  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.611899  358357 logs.go:284] No container was found matching "kube-controller-manager"
	I1205 21:49:55.611912  358357 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1205 21:49:55.611999  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1205 21:49:55.650475  358357 cri.go:89] found id: ""
	I1205 21:49:55.650511  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.650524  358357 logs.go:284] No container was found matching "kindnet"
	I1205 21:49:55.650533  358357 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I1205 21:49:55.650605  358357 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I1205 21:49:55.684770  358357 cri.go:89] found id: ""
	I1205 21:49:55.684801  358357 logs.go:282] 0 containers: []
	W1205 21:49:55.684811  358357 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1205 21:49:55.684823  358357 logs.go:123] Gathering logs for describe nodes ...
	I1205 21:49:55.684839  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1205 21:49:55.752292  358357 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1205 21:49:55.752331  358357 logs.go:123] Gathering logs for CRI-O ...
	I1205 21:49:55.752351  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1205 21:49:55.869601  358357 logs.go:123] Gathering logs for container status ...
	I1205 21:49:55.869647  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1205 21:49:55.909724  358357 logs.go:123] Gathering logs for kubelet ...
	I1205 21:49:55.909761  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1205 21:49:55.959825  358357 logs.go:123] Gathering logs for dmesg ...
	I1205 21:49:55.959865  358357 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1205 21:49:55.973692  358357 out.go:418] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W1205 21:49:55.973759  358357 out.go:270] * 
	W1205 21:49:55.973866  358357 out.go:270] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 21:49:55.973884  358357 out.go:270] * 
	W1205 21:49:55.974814  358357 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1205 21:49:55.977939  358357 out.go:201] 
	W1205 21:49:55.979226  358357 out.go:270] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W1205 21:49:55.979261  358357 out.go:270] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1205 21:49:55.979285  358357 out.go:270] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1205 21:49:55.980590  358357 out.go:201] 
	
	
	==> CRI-O <==
	Dec 05 22:01:43 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:43.991965901Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436103991945176,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=48775de3-4eb2-4065-bca5-4ae9de886a93 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:01:43 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:43.992540464Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b3aacc13-6689-4e02-b149-2e59a5da751a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:01:43 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:43.992605163Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b3aacc13-6689-4e02-b149-2e59a5da751a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:01:43 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:43.992639733Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=b3aacc13-6689-4e02-b149-2e59a5da751a name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:01:44 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:44.024816299Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=fddca5e0-a158-4306-8efc-ba7cd717029d name=/runtime.v1.RuntimeService/Version
	Dec 05 22:01:44 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:44.024888266Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=fddca5e0-a158-4306-8efc-ba7cd717029d name=/runtime.v1.RuntimeService/Version
	Dec 05 22:01:44 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:44.025819074Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=330dd243-753f-4e7c-8395-b5e1ac60b949 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:01:44 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:44.026234281Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436104026204472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=330dd243-753f-4e7c-8395-b5e1ac60b949 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:01:44 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:44.026674861Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=fd7e4486-e948-4d60-9652-a979adea34f7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:01:44 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:44.026723285Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=fd7e4486-e948-4d60-9652-a979adea34f7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:01:44 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:44.026752891Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=fd7e4486-e948-4d60-9652-a979adea34f7 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:01:44 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:44.058497029Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=68f0a61d-31b4-432b-8bbe-4fcdbfb6c514 name=/runtime.v1.RuntimeService/Version
	Dec 05 22:01:44 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:44.058585557Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=68f0a61d-31b4-432b-8bbe-4fcdbfb6c514 name=/runtime.v1.RuntimeService/Version
	Dec 05 22:01:44 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:44.059540883Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf190632-6b01-4f1a-9f64-4642ae4221e8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:01:44 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:44.059918450Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436104059896520,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf190632-6b01-4f1a-9f64-4642ae4221e8 name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:01:44 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:44.060590049Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=011ebcf8-aebf-4fa6-ad47-aab72b255305 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:01:44 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:44.060663930Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=011ebcf8-aebf-4fa6-ad47-aab72b255305 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:01:44 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:44.060699235Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=011ebcf8-aebf-4fa6-ad47-aab72b255305 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:01:44 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:44.091881944Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7af8bdf4-fe67-4dd6-b8c1-a79032ed0d9c name=/runtime.v1.RuntimeService/Version
	Dec 05 22:01:44 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:44.091972581Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7af8bdf4-fe67-4dd6-b8c1-a79032ed0d9c name=/runtime.v1.RuntimeService/Version
	Dec 05 22:01:44 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:44.092879636Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a614ed87-e6e5-499d-8755-7db884d5adcd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:01:44 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:44.093398849Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1733436104093368240,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a614ed87-e6e5-499d-8755-7db884d5adcd name=/runtime.v1.ImageService/ImageFsInfo
	Dec 05 22:01:44 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:44.094277668Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=060e20da-479c-439f-b50b-21013ee1ee05 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:01:44 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:44.094329524Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=060e20da-479c-439f-b50b-21013ee1ee05 name=/runtime.v1.RuntimeService/ListContainers
	Dec 05 22:01:44 old-k8s-version-601806 crio[631]: time="2024-12-05 22:01:44.094369393Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=060e20da-479c-439f-b50b-21013ee1ee05 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Dec 5 21:41] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.049612] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.037328] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.041940] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.017419] systemd-fstab-generator[116]: Ignoring "noauto" option for root device
	[  +1.591176] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000028] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.089329] systemd-fstab-generator[558]: Ignoring "noauto" option for root device
	[  +0.075166] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.084879] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.248458] systemd-fstab-generator[584]: Ignoring "noauto" option for root device
	[  +0.177247] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.251172] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +6.361303] systemd-fstab-generator[878]: Ignoring "noauto" option for root device
	[  +0.072375] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.856883] systemd-fstab-generator[1003]: Ignoring "noauto" option for root device
	[Dec 5 21:42] kauditd_printk_skb: 46 callbacks suppressed
	[Dec 5 21:45] systemd-fstab-generator[5030]: Ignoring "noauto" option for root device
	[Dec 5 21:48] systemd-fstab-generator[5323]: Ignoring "noauto" option for root device
	[  +0.068423] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 22:01:44 up 20 min,  0 users,  load average: 0.09, 0.08, 0.04
	Linux old-k8s-version-601806 5.10.207 #1 SMP Wed Nov 6 22:25:03 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Dec 05 22:01:42 old-k8s-version-601806 kubelet[6866]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Dec 05 22:01:42 old-k8s-version-601806 kubelet[6866]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Dec 05 22:01:42 old-k8s-version-601806 kubelet[6866]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Dec 05 22:01:42 old-k8s-version-601806 kubelet[6866]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000bc3ef0)
	Dec 05 22:01:42 old-k8s-version-601806 kubelet[6866]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Dec 05 22:01:42 old-k8s-version-601806 kubelet[6866]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000961ef0, 0x4f0ac20, 0xc0000508c0, 0x1, 0xc00012a060)
	Dec 05 22:01:42 old-k8s-version-601806 kubelet[6866]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Dec 05 22:01:42 old-k8s-version-601806 kubelet[6866]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000e2380, 0xc00012a060)
	Dec 05 22:01:42 old-k8s-version-601806 kubelet[6866]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Dec 05 22:01:42 old-k8s-version-601806 kubelet[6866]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Dec 05 22:01:42 old-k8s-version-601806 kubelet[6866]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Dec 05 22:01:42 old-k8s-version-601806 kubelet[6866]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000ce37c0, 0xc000348aa0)
	Dec 05 22:01:42 old-k8s-version-601806 kubelet[6866]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Dec 05 22:01:42 old-k8s-version-601806 kubelet[6866]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Dec 05 22:01:42 old-k8s-version-601806 kubelet[6866]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Dec 05 22:01:42 old-k8s-version-601806 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Dec 05 22:01:42 old-k8s-version-601806 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 05 22:01:43 old-k8s-version-601806 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 143.
	Dec 05 22:01:43 old-k8s-version-601806 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Dec 05 22:01:43 old-k8s-version-601806 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Dec 05 22:01:43 old-k8s-version-601806 kubelet[6893]: I1205 22:01:43.145792    6893 server.go:416] Version: v1.20.0
	Dec 05 22:01:43 old-k8s-version-601806 kubelet[6893]: I1205 22:01:43.146231    6893 server.go:837] Client rotation is on, will bootstrap in background
	Dec 05 22:01:43 old-k8s-version-601806 kubelet[6893]: I1205 22:01:43.149056    6893 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Dec 05 22:01:43 old-k8s-version-601806 kubelet[6893]: I1205 22:01:43.150566    6893 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Dec 05 22:01:43 old-k8s-version-601806 kubelet[6893]: W1205 22:01:43.150765    6893 manager.go:159] Cannot detect current cgroup on cgroup v2

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-601806 -n old-k8s-version-601806
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-601806 -n old-k8s-version-601806: exit status 2 (270.312605ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-601806" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (162.28s)

Test pass (249/315)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 8.33
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.07
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.31.2/json-events 5.08
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.07
18 TestDownloadOnly/v1.31.2/DeleteAll 0.15
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.66
22 TestOffline 84.88
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 132.73
31 TestAddons/serial/GCPAuth/Namespaces 2.59
32 TestAddons/serial/GCPAuth/FakeCredentials 9.55
35 TestAddons/parallel/Registry 16.87
37 TestAddons/parallel/InspektorGadget 11.97
40 TestAddons/parallel/CSI 51.98
41 TestAddons/parallel/Headlamp 18.72
42 TestAddons/parallel/CloudSpanner 5.59
43 TestAddons/parallel/LocalPath 10.19
44 TestAddons/parallel/NvidiaDevicePlugin 5.76
45 TestAddons/parallel/Yakd 11.91
48 TestCertOptions 45.01
49 TestCertExpiration 314.14
51 TestForceSystemdFlag 98.31
52 TestForceSystemdEnv 46.41
54 TestKVMDriverInstallOrUpdate 8.2
58 TestErrorSpam/setup 40.39
59 TestErrorSpam/start 0.39
60 TestErrorSpam/status 0.8
61 TestErrorSpam/pause 1.63
62 TestErrorSpam/unpause 1.74
63 TestErrorSpam/stop 4.55
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 52.5
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 34.21
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.07
74 TestFunctional/serial/CacheCmd/cache/add_remote 3.61
75 TestFunctional/serial/CacheCmd/cache/add_local 2.06
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.25
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.82
80 TestFunctional/serial/CacheCmd/cache/delete 0.11
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 34.92
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.48
86 TestFunctional/serial/LogsFileCmd 1.48
87 TestFunctional/serial/InvalidService 5.35
89 TestFunctional/parallel/ConfigCmd 0.43
90 TestFunctional/parallel/DashboardCmd 32.72
91 TestFunctional/parallel/DryRun 0.32
92 TestFunctional/parallel/InternationalLanguage 0.16
93 TestFunctional/parallel/StatusCmd 1.02
97 TestFunctional/parallel/ServiceCmdConnect 8.78
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 36.92
101 TestFunctional/parallel/SSHCmd 0.42
102 TestFunctional/parallel/CpCmd 1.64
103 TestFunctional/parallel/MySQL 24.04
104 TestFunctional/parallel/FileSync 0.28
105 TestFunctional/parallel/CertSync 1.62
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.51
113 TestFunctional/parallel/License 0.29
114 TestFunctional/parallel/ServiceCmd/DeployApp 10.48
115 TestFunctional/parallel/Version/short 0.07
116 TestFunctional/parallel/Version/components 0.5
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.24
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.25
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.27
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
121 TestFunctional/parallel/ImageCommands/ImageBuild 3.48
122 TestFunctional/parallel/ImageCommands/Setup 1.54
123 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
124 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
125 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
126 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.74
127 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1
128 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.08
129 TestFunctional/parallel/ImageCommands/ImageSaveToFile 3.05
130 TestFunctional/parallel/ServiceCmd/List 0.94
131 TestFunctional/parallel/ServiceCmd/JSONOutput 1.19
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.72
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 4.76
134 TestFunctional/parallel/ServiceCmd/HTTPS 0.36
135 TestFunctional/parallel/ServiceCmd/Format 0.33
136 TestFunctional/parallel/ServiceCmd/URL 0.34
137 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.89
139 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.48
140 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 16.4
143 TestFunctional/parallel/ProfileCmd/profile_not_create 0.36
144 TestFunctional/parallel/ProfileCmd/profile_list 0.34
145 TestFunctional/parallel/ProfileCmd/profile_json_output 0.35
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
147 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
151 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.12
152 TestFunctional/parallel/MountCmd/any-port 7.66
153 TestFunctional/parallel/MountCmd/specific-port 1.74
154 TestFunctional/parallel/MountCmd/VerifyCleanup 0.76
155 TestFunctional/delete_echo-server_images 0.03
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 198.25
162 TestMultiControlPlane/serial/DeployApp 6.28
163 TestMultiControlPlane/serial/PingHostFromPods 1.25
164 TestMultiControlPlane/serial/AddWorkerNode 58.25
165 TestMultiControlPlane/serial/NodeLabels 0.07
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.89
167 TestMultiControlPlane/serial/CopyFile 13.59
173 TestMultiControlPlane/serial/DeleteSecondaryNode 16.8
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.67
176 TestMultiControlPlane/serial/RestartCluster 345.13
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
178 TestMultiControlPlane/serial/AddSecondaryNode 79
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.89
183 TestJSONOutput/start/Command 55.86
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.71
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.61
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 7.35
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.21
211 TestMainNoArgs 0.05
212 TestMinikubeProfile 87.13
215 TestMountStart/serial/StartWithMountFirst 26.68
216 TestMountStart/serial/VerifyMountFirst 0.4
217 TestMountStart/serial/StartWithMountSecond 34.63
218 TestMountStart/serial/VerifyMountSecond 0.4
219 TestMountStart/serial/DeleteFirst 0.75
220 TestMountStart/serial/VerifyMountPostDelete 0.4
221 TestMountStart/serial/Stop 1.28
222 TestMountStart/serial/RestartStopped 22.93
223 TestMountStart/serial/VerifyMountPostStop 0.4
226 TestMultiNode/serial/FreshStart2Nodes 111.3
227 TestMultiNode/serial/DeployApp2Nodes 6.14
228 TestMultiNode/serial/PingHostFrom2Pods 0.85
229 TestMultiNode/serial/AddNode 48.19
230 TestMultiNode/serial/MultiNodeLabels 0.07
231 TestMultiNode/serial/ProfileList 0.61
232 TestMultiNode/serial/CopyFile 7.53
233 TestMultiNode/serial/StopNode 2.38
234 TestMultiNode/serial/StartAfterStop 39.01
236 TestMultiNode/serial/DeleteNode 2.46
238 TestMultiNode/serial/RestartMultiNode 184.91
239 TestMultiNode/serial/ValidateNameConflict 45.31
246 TestScheduledStopUnix 116.24
250 TestRunningBinaryUpgrade 179.84
255 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
259 TestNoKubernetes/serial/StartWithK8s 94.53
264 TestNetworkPlugins/group/false 3.33
268 TestNoKubernetes/serial/StartWithStopK8s 65.22
269 TestNoKubernetes/serial/Start 48.05
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
271 TestNoKubernetes/serial/ProfileList 1.94
272 TestNoKubernetes/serial/Stop 1.29
273 TestNoKubernetes/serial/StartNoArgs 22.22
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
275 TestStoppedBinaryUpgrade/Setup 0.56
276 TestStoppedBinaryUpgrade/Upgrade 100.13
285 TestPause/serial/Start 63.82
286 TestNetworkPlugins/group/auto/Start 70.72
287 TestStoppedBinaryUpgrade/MinikubeLogs 0.84
288 TestNetworkPlugins/group/kindnet/Start 94.91
290 TestNetworkPlugins/group/auto/KubeletFlags 0.22
291 TestNetworkPlugins/group/auto/NetCatPod 10.36
292 TestNetworkPlugins/group/auto/DNS 0.16
293 TestNetworkPlugins/group/auto/Localhost 0.13
294 TestNetworkPlugins/group/auto/HairPin 0.13
295 TestNetworkPlugins/group/calico/Start 94.32
296 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
297 TestNetworkPlugins/group/kindnet/KubeletFlags 0.22
298 TestNetworkPlugins/group/kindnet/NetCatPod 10.25
299 TestNetworkPlugins/group/kindnet/DNS 0.16
300 TestNetworkPlugins/group/kindnet/Localhost 0.17
301 TestNetworkPlugins/group/kindnet/HairPin 0.15
302 TestNetworkPlugins/group/custom-flannel/Start 95.22
303 TestNetworkPlugins/group/enable-default-cni/Start 99.74
304 TestNetworkPlugins/group/calico/ControllerPod 6.01
305 TestNetworkPlugins/group/calico/KubeletFlags 0.26
306 TestNetworkPlugins/group/calico/NetCatPod 12.27
307 TestNetworkPlugins/group/flannel/Start 76.69
308 TestNetworkPlugins/group/calico/DNS 0.2
309 TestNetworkPlugins/group/calico/Localhost 0.15
310 TestNetworkPlugins/group/calico/HairPin 0.15
311 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.25
312 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.26
313 TestNetworkPlugins/group/custom-flannel/DNS 0.18
314 TestNetworkPlugins/group/custom-flannel/Localhost 0.15
315 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
316 TestNetworkPlugins/group/bridge/Start 66.34
317 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.24
318 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.26
319 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
320 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
323 TestNetworkPlugins/group/enable-default-cni/HairPin 0.21
325 TestStartStop/group/embed-certs/serial/FirstStart 86.25
326 TestNetworkPlugins/group/flannel/ControllerPod 6.01
327 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
328 TestNetworkPlugins/group/flannel/NetCatPod 11.26
329 TestNetworkPlugins/group/bridge/KubeletFlags 0.42
330 TestNetworkPlugins/group/bridge/NetCatPod 13.08
331 TestNetworkPlugins/group/flannel/DNS 0.22
332 TestNetworkPlugins/group/flannel/Localhost 0.17
333 TestNetworkPlugins/group/flannel/HairPin 0.19
334 TestNetworkPlugins/group/bridge/DNS 0.18
335 TestNetworkPlugins/group/bridge/Localhost 0.14
336 TestNetworkPlugins/group/bridge/HairPin 0.13
338 TestStartStop/group/no-preload/serial/FirstStart 76.05
340 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 74.1
341 TestStartStop/group/embed-certs/serial/DeployApp 10.32
342 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.13
344 TestStartStop/group/no-preload/serial/DeployApp 9.28
345 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.27
346 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.98
348 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.93
351 TestStartStop/group/embed-certs/serial/SecondStart 672.68
356 TestStartStop/group/no-preload/serial/SecondStart 616.55
357 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 546.17
358 TestStartStop/group/old-k8s-version/serial/Stop 5.31
359 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
370 TestStartStop/group/newest-cni/serial/FirstStart 45.85
371 TestStartStop/group/newest-cni/serial/DeployApp 0
372 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.12
373 TestStartStop/group/newest-cni/serial/Stop 7.38
374 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
375 TestStartStop/group/newest-cni/serial/SecondStart 35.05
376 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
377 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
378 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.27
379 TestStartStop/group/newest-cni/serial/Pause 2.66
x
+
TestDownloadOnly/v1.20.0/json-events (8.33s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-401320 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-401320 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (8.325917406s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (8.33s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1205 20:19:26.396782  300765 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1205 20:19:26.396883  300765 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-401320
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-401320: exit status 85 (68.012176ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-401320 | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC |          |
	|         | -p download-only-401320        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:19:18
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:19:18.116312  300777 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:19:18.116439  300777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:19:18.116450  300777 out.go:358] Setting ErrFile to fd 2...
	I1205 20:19:18.116455  300777 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:19:18.116631  300777 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	W1205 20:19:18.116762  300777 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20053-293485/.minikube/config/config.json: open /home/jenkins/minikube-integration/20053-293485/.minikube/config/config.json: no such file or directory
	I1205 20:19:18.117348  300777 out.go:352] Setting JSON to true
	I1205 20:19:18.118394  300777 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10906,"bootTime":1733419052,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:19:18.118517  300777 start.go:139] virtualization: kvm guest
	I1205 20:19:18.121146  300777 out.go:97] [download-only-401320] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W1205 20:19:18.121317  300777 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball: no such file or directory
	I1205 20:19:18.121341  300777 notify.go:220] Checking for updates...
	I1205 20:19:18.122804  300777 out.go:169] MINIKUBE_LOCATION=20053
	I1205 20:19:18.124103  300777 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:19:18.125472  300777 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:19:18.126881  300777 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:19:18.128098  300777 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1205 20:19:18.130325  300777 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 20:19:18.130690  300777 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:19:18.167987  300777 out.go:97] Using the kvm2 driver based on user configuration
	I1205 20:19:18.168017  300777 start.go:297] selected driver: kvm2
	I1205 20:19:18.168024  300777 start.go:901] validating driver "kvm2" against <nil>
	I1205 20:19:18.168392  300777 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:19:18.168506  300777 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:19:18.189796  300777 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:19:18.189868  300777 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 20:19:18.190456  300777 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1205 20:19:18.190653  300777 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 20:19:18.190703  300777 cni.go:84] Creating CNI manager for ""
	I1205 20:19:18.190750  300777 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:19:18.190760  300777 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 20:19:18.190843  300777 start.go:340] cluster config:
	{Name:download-only-401320 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-401320 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:19:18.191069  300777 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:19:18.193111  300777 out.go:97] Downloading VM boot image ...
	I1205 20:19:18.193149  300777 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso.sha256 -> /home/jenkins/minikube-integration/20053-293485/.minikube/cache/iso/amd64/minikube-v1.34.0-1730913550-19917-amd64.iso
	I1205 20:19:21.615007  300777 out.go:97] Starting "download-only-401320" primary control-plane node in "download-only-401320" cluster
	I1205 20:19:21.615052  300777 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 20:19:21.650817  300777 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I1205 20:19:21.650860  300777 cache.go:56] Caching tarball of preloaded images
	I1205 20:19:21.651034  300777 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1205 20:19:21.653001  300777 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1205 20:19:21.653032  300777 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I1205 20:19:21.681029  300777 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-401320 host does not exist
	  To start a cluster, run: "minikube start -p download-only-401320"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-401320
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/json-events (5.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-565473 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-565473 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.079698584s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (5.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1205 20:19:31.829700  300765 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1205 20:19:31.829757  300765 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-565473
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-565473: exit status 85 (71.370889ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-401320 | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC |                     |
	|         | -p download-only-401320        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC | 05 Dec 24 20:19 UTC |
	| delete  | -p download-only-401320        | download-only-401320 | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC | 05 Dec 24 20:19 UTC |
	| start   | -o=json --download-only        | download-only-565473 | jenkins | v1.34.0 | 05 Dec 24 20:19 UTC |                     |
	|         | -p download-only-565473        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/05 20:19:26
	Running on machine: ubuntu-20-agent-2
	Binary: Built with gc go1.23.2 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1205 20:19:26.795147  300981 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:19:26.795768  300981 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:19:26.795787  300981 out.go:358] Setting ErrFile to fd 2...
	I1205 20:19:26.795794  300981 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:19:26.796227  300981 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 20:19:26.797420  300981 out.go:352] Setting JSON to true
	I1205 20:19:26.798553  300981 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":10915,"bootTime":1733419052,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:19:26.798702  300981 start.go:139] virtualization: kvm guest
	I1205 20:19:26.800772  300981 out.go:97] [download-only-565473] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:19:26.800984  300981 notify.go:220] Checking for updates...
	I1205 20:19:26.802222  300981 out.go:169] MINIKUBE_LOCATION=20053
	I1205 20:19:26.803632  300981 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:19:26.805134  300981 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:19:26.806569  300981 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:19:26.808145  300981 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W1205 20:19:26.811244  300981 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1205 20:19:26.811538  300981 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:19:26.846569  300981 out.go:97] Using the kvm2 driver based on user configuration
	I1205 20:19:26.846607  300981 start.go:297] selected driver: kvm2
	I1205 20:19:26.846614  300981 start.go:901] validating driver "kvm2" against <nil>
	I1205 20:19:26.847004  300981 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:19:26.847105  300981 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/20053-293485/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I1205 20:19:26.864998  300981 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.34.0
	I1205 20:19:26.865075  300981 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1205 20:19:26.865621  300981 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I1205 20:19:26.865775  300981 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1205 20:19:26.865806  300981 cni.go:84] Creating CNI manager for ""
	I1205 20:19:26.865841  300981 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I1205 20:19:26.865852  300981 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1205 20:19:26.865921  300981 start.go:340] cluster config:
	{Name:download-only-565473 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-565473 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:19:26.866030  300981 iso.go:125] acquiring lock: {Name:mka021df4677ae8663fba7cdbb31ebfc4b0185dc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1205 20:19:26.867995  300981 out.go:97] Starting "download-only-565473" primary control-plane node in "download-only-565473" cluster
	I1205 20:19:26.868019  300981 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:19:26.928934  300981 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 20:19:26.928977  300981 cache.go:56] Caching tarball of preloaded images
	I1205 20:19:26.929133  300981 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:19:26.930972  300981 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1205 20:19:26.930988  300981 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1205 20:19:26.960279  300981 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:fc069bc1785feafa8477333f3a79092d -> /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4
	I1205 20:19:30.252527  300981 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1205 20:19:30.252643  300981 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20053-293485/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-amd64.tar.lz4 ...
	I1205 20:19:31.034735  300981 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1205 20:19:31.035150  300981 profile.go:143] Saving config to /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/download-only-565473/config.json ...
	I1205 20:19:31.035191  300981 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/download-only-565473/config.json: {Name:mk3ccc05b66356821c9efbfb2e9d3b06463819d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1205 20:19:31.035431  300981 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1205 20:19:31.035625  300981 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/20053-293485/.minikube/cache/linux/amd64/v1.31.2/kubectl
	
	
	* The control-plane node download-only-565473 host does not exist
	  To start a cluster, run: "minikube start -p download-only-565473"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-565473
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
x
+
TestBinaryMirror (0.66s)

                                                
                                                
=== RUN   TestBinaryMirror
I1205 20:19:32.480262  300765 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-326413 --alsologtostderr --binary-mirror http://127.0.0.1:39531 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-326413" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-326413
--- PASS: TestBinaryMirror (0.66s)

                                                
                                    
x
+
TestOffline (84.88s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-939726 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-939726 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (1m23.380790181s)
helpers_test.go:175: Cleaning up "offline-crio-939726" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-939726
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-939726: (1.503718115s)
--- PASS: TestOffline (84.88s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-523528
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-523528: exit status 85 (69.736525ms)

                                                
                                                
-- stdout --
	* Profile "addons-523528" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-523528"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-523528
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-523528: exit status 85 (68.817501ms)

                                                
                                                
-- stdout --
	* Profile "addons-523528" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-523528"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
x
+
TestAddons/Setup (132.73s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-523528 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-523528 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m12.728682889s)
--- PASS: TestAddons/Setup (132.73s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (2.59s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-523528 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-523528 get secret gcp-auth -n new-namespace
addons_test.go:583: (dbg) Non-zero exit: kubectl --context addons-523528 get secret gcp-auth -n new-namespace: exit status 1 (77.704968ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): secrets "gcp-auth" not found

                                                
                                                
** /stderr **
addons_test.go:575: (dbg) Run:  kubectl --context addons-523528 logs -l app=gcp-auth -n gcp-auth
I1205 20:21:46.497057  300765 retry.go:31] will retry after 2.313243939s: %!w(<nil>): gcp-auth container logs: 
-- stdout --
	2024/12/05 20:21:45 GCP Auth Webhook started!
	2024/12/05 20:21:46 Ready to marshal response ...
	2024/12/05 20:21:46 Ready to write response ...

                                                
                                                
-- /stdout --
addons_test.go:583: (dbg) Run:  kubectl --context addons-523528 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (2.59s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (9.55s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-523528 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-523528 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [cfa655ea-794b-4c47-b060-9aaf959e839a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [cfa655ea-794b-4c47-b060-9aaf959e839a] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.004660653s
addons_test.go:633: (dbg) Run:  kubectl --context addons-523528 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-523528 describe sa gcp-auth-test
addons_test.go:683: (dbg) Run:  kubectl --context addons-523528 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.55s)

                                                
                                    
x
+
TestAddons/parallel/Registry (16.87s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 3.006221ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-6p9nr" [911c9fc9-5e67-4b4f-846e-2ad1cdc944c3] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004060961s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-zpfrw" [d071b7d1-01c6-4449-98a5-0e329f71db8e] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004541861s
addons_test.go:331: (dbg) Run:  kubectl --context addons-523528 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-523528 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-523528 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.043594368s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p addons-523528 ip
2024/12/05 20:22:45 [DEBUG] GET http://192.168.39.217:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-523528 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (16.87s)

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (11.97s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-rklt6" [38ce09cf-5819-4569-91ca-a3da9bd43a8e] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005114878s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-523528 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-523528 addons disable inspektor-gadget --alsologtostderr -v=1: (5.963306158s)
--- PASS: TestAddons/parallel/InspektorGadget (11.97s)

                                                
                                    
x
+
TestAddons/parallel/CSI (51.98s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1205 20:22:47.117225  300765 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1205 20:22:47.130880  300765 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1205 20:22:47.130918  300765 kapi.go:107] duration metric: took 13.718356ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 13.734785ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-523528 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-523528 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [fbb263c7-dc74-4a17-8967-4c61c981dac8] Pending
helpers_test.go:344: "task-pv-pod" [fbb263c7-dc74-4a17-8967-4c61c981dac8] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [fbb263c7-dc74-4a17-8967-4c61c981dac8] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 12.004340498s
addons_test.go:511: (dbg) Run:  kubectl --context addons-523528 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-523528 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-523528 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-523528 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-523528 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-523528 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-523528 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [f887431d-704e-424a-bd9f-3d74ed3aaca0] Pending
helpers_test.go:344: "task-pv-pod-restore" [f887431d-704e-424a-bd9f-3d74ed3aaca0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [f887431d-704e-424a-bd9f-3d74ed3aaca0] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.00462286s
addons_test.go:553: (dbg) Run:  kubectl --context addons-523528 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-523528 delete pod task-pv-pod-restore: (1.343689688s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-523528 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-523528 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-523528 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-523528 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-523528 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.881312369s)
--- PASS: TestAddons/parallel/CSI (51.98s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (18.72s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-523528 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-tldd5" [3e616cb2-837b-4329-9721-e347da0967f0] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-tldd5" [3e616cb2-837b-4329-9721-e347da0967f0] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004221289s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-523528 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-523528 addons disable headlamp --alsologtostderr -v=1: (5.827490366s)
--- PASS: TestAddons/parallel/Headlamp (18.72s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-r2k92" [e850f0fc-9dc2-4400-a1f9-a2984399db6e] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004034696s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-523528 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.59s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (10.19s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-523528 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-523528 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-523528 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [5c47200a-af51-4390-815f-a17251f5093a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [5c47200a-af51-4390-815f-a17251f5093a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [5c47200a-af51-4390-815f-a17251f5093a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004054704s
addons_test.go:906: (dbg) Run:  kubectl --context addons-523528 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-amd64 -p addons-523528 ssh "cat /opt/local-path-provisioner/pvc-24f2de26-a653-44d0-af2f-07e5589c431c_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-523528 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-523528 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-523528 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.19s)
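
For reference, the local-path flow above can be replayed by hand with the same commands the test issued. This is a hedged sketch only: the context/profile name, the PVC name, and the pvc-<uid> host path are copied verbatim from this run and will differ on another cluster.

    kubectl --context addons-523528 apply -f testdata/storage-provisioner-rancher/pvc.yaml
    kubectl --context addons-523528 apply -f testdata/storage-provisioner-rancher/pod.yaml
    # Poll the claim until the local-path provisioner binds it.
    kubectl --context addons-523528 get pvc test-pvc -o jsonpath='{.status.phase}' -n default
    # Once the busybox pod has completed, the written file is visible on the node.
    out/minikube-linux-amd64 -p addons-523528 ssh "cat /opt/local-path-provisioner/pvc-24f2de26-a653-44d0-af2f-07e5589c431c_default_test-pvc/file1"
    kubectl --context addons-523528 delete pod test-local-path
    kubectl --context addons-523528 delete pvc test-pvc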

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (5.76s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-sglbw" [0360c661-774c-46ac-a3df-fd26eb882587] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.004664826s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-523528 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.76s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (11.91s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-6kthp" [376e4c76-b68a-4851-a0f0-05cd745db0dd] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003970758s
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-523528 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-amd64 -p addons-523528 addons disable yakd --alsologtostderr -v=1: (5.907101755s)
--- PASS: TestAddons/parallel/Yakd (11.91s)

                                                
                                    
x
+
TestCertOptions (45.01s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-392353 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-392353 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (43.466395203s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-392353 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-392353 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-392353 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-392353" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-392353
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-392353: (1.04530402s)
--- PASS: TestCertOptions (45.01s)
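
The same certificate check can be reproduced manually with the flags recorded above. Everything below, including the profile name, is taken from this run; it is a sketch of the manual steps, not the test's own implementation.

    # Start a cluster with extra SANs and a non-default API server port.
    out/minikube-linux-amd64 start -p cert-options-392353 --memory=2048 \
      --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
      --apiserver-names=localhost --apiserver-names=www.google.com \
      --apiserver-port=8555 --driver=kvm2 --container-runtime=crio
    # Inspect the generated API server certificate for those names and IPs.
    out/minikube-linux-amd64 -p cert-options-392353 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
    # The test also reads the in-VM admin kubeconfig.
    out/minikube-linux-amd64 ssh -p cert-options-392353 -- "sudo cat /etc/kubernetes/admin.conf"
    out/minikube-linux-amd64 delete -p cert-options-392353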

                                                
                                    
x
+
TestCertExpiration (314.14s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-500745 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-500745 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m41.474856081s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-500745 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-500745 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (31.786586258s)
helpers_test.go:175: Cleaning up "cert-expiration-500745" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-500745
--- PASS: TestCertExpiration (314.14s)
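
A minimal manual equivalent of the expiration scenario above, using only the two start invocations recorded in this run. That the test lets the 3-minute certificates lapse before the second start is inferred from the overall duration, not shown explicitly in the log.

    # First start issues certificates that expire after 3 minutes.
    out/minikube-linux-amd64 start -p cert-expiration-500745 --memory=2048 \
      --cert-expiration=3m --driver=kvm2 --container-runtime=crio
    # Wait for the short-lived certificates to lapse (assumed step).
    sleep 180
    # A second start with a longer expiration regenerates the certificates.
    out/minikube-linux-amd64 start -p cert-expiration-500745 --memory=2048 \
      --cert-expiration=8760h --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 delete -p cert-expiration-500745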

                                                
                                    
x
+
TestForceSystemdFlag (98.31s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-175684 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E1205 21:23:16.319322  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-175684 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m37.250480098s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-175684 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-175684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-175684
--- PASS: TestForceSystemdFlag (98.31s)
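
The assertion here reduces to reading CRI-O's generated drop-in after a --force-systemd start. The expectation that it selects the systemd cgroup manager is an assumption about what the test checks for, since the file contents are not shown in this log.

    out/minikube-linux-amd64 start -p force-systemd-flag-175684 --memory=2048 \
      --force-systemd --alsologtostderr -v=5 --driver=kvm2 --container-runtime=crio
    # Inspect the CRI-O drop-in the test reads (systemd cgroup manager expected -- assumption).
    out/minikube-linux-amd64 -p force-systemd-flag-175684 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"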

                                                
                                    
x
+
TestForceSystemdEnv (46.41s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-024419 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-024419 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (45.560430628s)
helpers_test.go:175: Cleaning up "force-systemd-env-024419" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-024419
--- PASS: TestForceSystemdEnv (46.41s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (8.2s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I1205 21:22:13.198730  300765 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 21:22:13.198910  300765 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W1205 21:22:13.239684  300765 install.go:62] docker-machine-driver-kvm2: exit status 1
W1205 21:22:13.240152  300765 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1205 21:22:13.240251  300765 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2079490847/001/docker-machine-driver-kvm2
I1205 21:22:13.568568  300765 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2079490847/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020] Decompressors:map[bz2:0xc0003c84d0 gz:0xc0003c84d8 tar:0xc0003c8470 tar.bz2:0xc0003c8490 tar.gz:0xc0003c84a0 tar.xz:0xc0003c84b0 tar.zst:0xc0003c84c0 tbz2:0xc0003c8490 tgz:0xc0003c84a0 txz:0xc0003c84b0 tzst:0xc0003c84c0 xz:0xc0003c84e0 zip:0xc0003c84f0 zst:0xc0003c84e8] Getters:map[file:0xc001d9d180 http:0xc000887900 https:0xc000887950] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1205 21:22:13.568640  300765 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2079490847/001/docker-machine-driver-kvm2
I1205 21:22:15.511569  300765 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I1205 21:22:15.511665  300765 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I1205 21:22:15.544364  300765 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W1205 21:22:15.544400  300765 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W1205 21:22:15.544470  300765 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I1205 21:22:15.544505  300765 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2079490847/002/docker-machine-driver-kvm2
I1205 21:22:15.611831  300765 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate2079490847/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020 0x5315020] Decompressors:map[bz2:0xc0003c84d0 gz:0xc0003c84d8 tar:0xc0003c8470 tar.bz2:0xc0003c8490 tar.gz:0xc0003c84a0 tar.xz:0xc0003c84b0 tar.zst:0xc0003c84c0 tbz2:0xc0003c8490 tgz:0xc0003c84a0 txz:0xc0003c84b0 tzst:0xc0003c84c0 xz:0xc0003c84e0 zip:0xc0003c84f0 zst:0xc0003c84e8] Getters:map[file:0xc0008b2880 http:0xc00091c500 https:0xc00091c550] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I1205 21:22:15.611898  300765 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate2079490847/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (8.20s)
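
The two download attempts logged above follow a simple fallback: the arch-suffixed asset's checksum URL returns 404 for the v1.3.0 tag, so the installer retries the unsuffixed "common" name. A hedged way to confirm that by hand (the curl invocations are illustrative, not what the test runs; URLs are copied from the log):

    # Arch-specific checksum is missing for this release tag...
    curl -fsIL https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 \
      || echo "404 -> falling back to the common asset name"
    # ...so the common asset is fetched instead.
    curl -fsLO https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2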

                                                
                                    
x
+
TestErrorSpam/setup (40.39s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-935235 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-935235 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-935235 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-935235 --driver=kvm2  --container-runtime=crio: (40.389190822s)
--- PASS: TestErrorSpam/setup (40.39s)

                                                
                                    
x
+
TestErrorSpam/start (0.39s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-935235 --log_dir /tmp/nospam-935235 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-935235 --log_dir /tmp/nospam-935235 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-935235 --log_dir /tmp/nospam-935235 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

                                                
                                    
x
+
TestErrorSpam/status (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-935235 --log_dir /tmp/nospam-935235 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-935235 --log_dir /tmp/nospam-935235 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-935235 --log_dir /tmp/nospam-935235 status
--- PASS: TestErrorSpam/status (0.80s)

                                                
                                    
x
+
TestErrorSpam/pause (1.63s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-935235 --log_dir /tmp/nospam-935235 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-935235 --log_dir /tmp/nospam-935235 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-935235 --log_dir /tmp/nospam-935235 pause
--- PASS: TestErrorSpam/pause (1.63s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.74s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-935235 --log_dir /tmp/nospam-935235 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-935235 --log_dir /tmp/nospam-935235 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-935235 --log_dir /tmp/nospam-935235 unpause
--- PASS: TestErrorSpam/unpause (1.74s)

                                                
                                    
x
+
TestErrorSpam/stop (4.55s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-935235 --log_dir /tmp/nospam-935235 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-935235 --log_dir /tmp/nospam-935235 stop: (2.309484711s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-935235 --log_dir /tmp/nospam-935235 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-935235 --log_dir /tmp/nospam-935235 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-935235 --log_dir /tmp/nospam-935235 stop: (1.243068034s)
--- PASS: TestErrorSpam/stop (4.55s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20053-293485/.minikube/files/etc/test/nested/copy/300765/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (52.5s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-659667 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
E1205 20:31:49.076452  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:31:49.082939  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:31:49.094470  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:31:49.115983  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:31:49.157456  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:31:49.238970  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:31:49.400653  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:31:49.722433  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-659667 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (52.503018555s)
--- PASS: TestFunctional/serial/StartWithProxy (52.50s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (34.21s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1205 20:31:50.270953  300765 config.go:182] Loaded profile config "functional-659667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-659667 --alsologtostderr -v=8
E1205 20:31:50.364739  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:31:51.646691  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:31:54.209705  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:31:59.331398  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:32:09.573512  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-659667 --alsologtostderr -v=8: (34.206514059s)
functional_test.go:663: soft start took 34.207333509s for "functional-659667" cluster.
I1205 20:32:24.477869  300765 config.go:182] Loaded profile config "functional-659667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (34.21s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-659667 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.61s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-659667 cache add registry.k8s.io/pause:3.1: (1.158529304s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-659667 cache add registry.k8s.io/pause:3.3: (1.273830324s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-659667 cache add registry.k8s.io/pause:latest: (1.173613462s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.61s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (2.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-659667 /tmp/TestFunctionalserialCacheCmdcacheadd_local399271455/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 cache add minikube-local-cache-test:functional-659667
E1205 20:32:30.055310  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-659667 cache add minikube-local-cache-test:functional-659667: (1.695672226s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 cache delete minikube-local-cache-test:functional-659667
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-659667
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.25s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-659667 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (229.698205ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-amd64 -p functional-659667 cache reload: (1.089573169s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.82s)
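
The reload sequence above is straightforward to replay by hand; every command and the image name below are taken from this run.

    # Remove the cached image from the node's container storage.
    out/minikube-linux-amd64 -p functional-659667 ssh sudo crictl rmi registry.k8s.io/pause:latest
    # Confirm it is gone (this inspect is expected to exit non-zero).
    out/minikube-linux-amd64 -p functional-659667 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    # Push everything from minikube's local cache back onto the node...
    out/minikube-linux-amd64 -p functional-659667 cache reload
    # ...after which the inspect succeeds again.
    out/minikube-linux-amd64 -p functional-659667 ssh sudo crictl inspecti registry.k8s.io/pause:latest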

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 kubectl -- --context functional-659667 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-659667 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (34.92s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-659667 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-659667 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.922161787s)
functional_test.go:761: restart took 34.922292289s for "functional-659667" cluster.
I1205 20:33:07.710697  300765 config.go:182] Loaded profile config "functional-659667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (34.92s)

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-659667 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-659667 logs: (1.479064392s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 logs --file /tmp/TestFunctionalserialLogsFileCmd3151762861/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-659667 logs --file /tmp/TestFunctionalserialLogsFileCmd3151762861/001/logs.txt: (1.480498957s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.48s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (5.35s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-659667 apply -f testdata/invalidsvc.yaml
E1205 20:33:11.016846  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-659667
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-659667: exit status 115 (315.986635ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.50.108:30451 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-659667 delete -f testdata/invalidsvc.yaml
functional_test.go:2327: (dbg) Done: kubectl --context functional-659667 delete -f testdata/invalidsvc.yaml: (1.820705233s)
--- PASS: TestFunctional/serial/InvalidService (5.35s)

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-659667 config get cpus: exit status 14 (109.836488ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-659667 config get cpus: exit status 14 (60.403008ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.43s)
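
The exit codes above encode the contract being tested: "config get" on a key that is not set fails with exit status 14 ("specified key could not be found in config"), while get after set succeeds. A hedged replay using only the commands from this run:

    out/minikube-linux-amd64 -p functional-659667 config get cpus     # unset -> exit status 14
    out/minikube-linux-amd64 -p functional-659667 config set cpus 2
    out/minikube-linux-amd64 -p functional-659667 config get cpus     # now succeeds
    out/minikube-linux-amd64 -p functional-659667 config unset cpus
    out/minikube-linux-amd64 -p functional-659667 config get cpus     # unset again -> exit status 14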

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (32.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-659667 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-659667 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 308660: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (32.72s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-659667 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-659667 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (163.619754ms)

                                                
                                                
-- stdout --
	* [functional-659667] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20053
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:33:19.073733  308479 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:33:19.074082  308479 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:33:19.074096  308479 out.go:358] Setting ErrFile to fd 2...
	I1205 20:33:19.074102  308479 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:33:19.074387  308479 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 20:33:19.075172  308479 out.go:352] Setting JSON to false
	I1205 20:33:19.076669  308479 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":11747,"bootTime":1733419052,"procs":239,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:33:19.076820  308479 start.go:139] virtualization: kvm guest
	I1205 20:33:19.079284  308479 out.go:177] * [functional-659667] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 20:33:19.080838  308479 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 20:33:19.080885  308479 notify.go:220] Checking for updates...
	I1205 20:33:19.083776  308479 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:33:19.085215  308479 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:33:19.086538  308479 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:33:19.087919  308479 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:33:19.089285  308479 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:33:19.090950  308479 config.go:182] Loaded profile config "functional-659667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:33:19.091395  308479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:33:19.091463  308479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:33:19.108439  308479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40887
	I1205 20:33:19.109111  308479 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:33:19.109774  308479 main.go:141] libmachine: Using API Version  1
	I1205 20:33:19.109803  308479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:33:19.110306  308479 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:33:19.110564  308479 main.go:141] libmachine: (functional-659667) Calling .DriverName
	I1205 20:33:19.110900  308479 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:33:19.111358  308479 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:33:19.111421  308479 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:33:19.128051  308479 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40467
	I1205 20:33:19.128532  308479 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:33:19.129097  308479 main.go:141] libmachine: Using API Version  1
	I1205 20:33:19.129126  308479 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:33:19.129491  308479 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:33:19.129697  308479 main.go:141] libmachine: (functional-659667) Calling .DriverName
	I1205 20:33:19.169731  308479 out.go:177] * Using the kvm2 driver based on existing profile
	I1205 20:33:19.171022  308479 start.go:297] selected driver: kvm2
	I1205 20:33:19.171041  308479 start.go:901] validating driver "kvm2" against &{Name:functional-659667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-659667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.108 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:33:19.171219  308479 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:33:19.173696  308479 out.go:201] 
	W1205 20:33:19.175098  308479 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1205 20:33:19.176303  308479 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-659667 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.32s)
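
The non-zero exit above is the point of the test: even with --dry-run, minikube validates the requested memory against its 1800MB usable minimum and exits with status 23 (RSRC_INSUFFICIENT_REQ_MEMORY). A hedged sketch of the two probes, copied from this run:

    # Too little memory: validation fails even though nothing is created.
    out/minikube-linux-amd64 start -p functional-659667 --dry-run --memory 250MB \
      --alsologtostderr --driver=kvm2 --container-runtime=crio    # exit status 23
    # Without the memory override, the dry run succeeds against the existing profile.
    out/minikube-linux-amd64 start -p functional-659667 --dry-run --alsologtostderr -v=1 \
      --driver=kvm2 --container-runtime=crio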

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-659667 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-659667 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (155.809634ms)

                                                
                                                
-- stdout --
	* [functional-659667] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20053
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 20:33:19.397652  308546 out.go:345] Setting OutFile to fd 1 ...
	I1205 20:33:19.397813  308546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:33:19.397827  308546 out.go:358] Setting ErrFile to fd 2...
	I1205 20:33:19.397834  308546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 20:33:19.398169  308546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 20:33:19.398756  308546 out.go:352] Setting JSON to false
	I1205 20:33:19.399890  308546 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":11747,"bootTime":1733419052,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 20:33:19.400007  308546 start.go:139] virtualization: kvm guest
	I1205 20:33:19.402367  308546 out.go:177] * [functional-659667] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I1205 20:33:19.403848  308546 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 20:33:19.403915  308546 notify.go:220] Checking for updates...
	I1205 20:33:19.406552  308546 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 20:33:19.408146  308546 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 20:33:19.409755  308546 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 20:33:19.411219  308546 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 20:33:19.412707  308546 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 20:33:19.414576  308546 config.go:182] Loaded profile config "functional-659667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 20:33:19.414992  308546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:33:19.415056  308546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:33:19.430978  308546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46819
	I1205 20:33:19.431548  308546 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:33:19.432124  308546 main.go:141] libmachine: Using API Version  1
	I1205 20:33:19.432147  308546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:33:19.432526  308546 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:33:19.432716  308546 main.go:141] libmachine: (functional-659667) Calling .DriverName
	I1205 20:33:19.432956  308546 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 20:33:19.433249  308546 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 20:33:19.433300  308546 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 20:33:19.452027  308546 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I1205 20:33:19.452590  308546 main.go:141] libmachine: () Calling .GetVersion
	I1205 20:33:19.453172  308546 main.go:141] libmachine: Using API Version  1
	I1205 20:33:19.453206  308546 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 20:33:19.453558  308546 main.go:141] libmachine: () Calling .GetMachineName
	I1205 20:33:19.453747  308546 main.go:141] libmachine: (functional-659667) Calling .DriverName
	I1205 20:33:19.489426  308546 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I1205 20:33:19.490757  308546 start.go:297] selected driver: kvm2
	I1205 20:33:19.490775  308546 start.go:901] validating driver "kvm2" against &{Name:functional-659667 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19917/minikube-v1.34.0-1730913550-19917-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1730888964-19917@sha256:629a5748e3ec15a091fef12257eb3754b8ffc0c974ebcbb016451c65d1829615 Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-659667 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.108 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1205 20:33:19.490917  308546 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 20:33:19.492998  308546 out.go:201] 
	W1205 20:33:19.494384  308546 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1205 20:33:19.495613  308546 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

TestFunctional/parallel/StatusCmd (1.02s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.02s)
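Note: the three status invocations above can be replayed by hand against the same profile; a minimal sketch (assuming the functional-659667 profile from this run, with the template labels written out):
# default human-readable status
out/minikube-linux-amd64 -p functional-659667 status
# custom Go template over the status fields (Host, Kubelet, APIServer, Kubeconfig)
out/minikube-linux-amd64 -p functional-659667 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
# machine-readable output for scripting
out/minikube-linux-amd64 -p functional-659667 status -o json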

TestFunctional/parallel/ServiceCmdConnect (8.78s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-659667 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-659667 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-x7j72" [83a72fee-e96f-434e-ab72-38c8349561f6] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-x7j72" [83a72fee-e96f-434e-ab72-38c8349561f6] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.004874733s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.50.108:32202
functional_test.go:1675: http://192.168.50.108:32202: success! body:

Hostname: hello-node-connect-67bdd5bbb4-x7j72

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.50.108:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.50.108:32202
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.78s)
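Note: the NodePort round trip exercised above can be reproduced manually; a sketch assuming the same profile and image (curl stands in for the HTTP GET the test performs against the reported URL):
kubectl --context functional-659667 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
kubectl --context functional-659667 expose deployment hello-node-connect --type=NodePort --port=8080
# ask minikube for the reachable NodePort URL, then fetch it
URL=$(out/minikube-linux-amd64 -p functional-659667 service hello-node-connect --url)
curl "$URL"    # echoserver replies with the hostname/request/header dump shown above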

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/PersistentVolumeClaim (36.92s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [dfad5371-a4cf-4522-af48-3acc2e87ef0e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.019424654s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-659667 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-659667 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-659667 get pvc myclaim -o=json
I1205 20:33:36.039045  300765 retry.go:31] will retry after 1.833782279s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:425af71e-30d4-4797-bc2f-e28ece114d47 ResourceVersion:819 Generation:0 CreationTimestamp:2024-12-05 20:33:35 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0009c47e0 VolumeMode:0xc0009c47f0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-659667 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-659667 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [75de590d-d6b4-4d49-89fa-bd21851c7e2c] Pending
helpers_test.go:344: "sp-pod" [75de590d-d6b4-4d49-89fa-bd21851c7e2c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [75de590d-d6b4-4d49-89fa-bd21851c7e2c] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.007081458s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-659667 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-659667 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-659667 delete -f testdata/storage-provisioner/pod.yaml: (1.006420505s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-659667 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [4f9bff57-d282-4277-86f2-0e70b3c07b33] Pending
helpers_test.go:344: "sp-pod" [4f9bff57-d282-4277-86f2-0e70b3c07b33] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [4f9bff57-d282-4277-86f2-0e70b3c07b33] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.005599873s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-659667 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (36.92s)
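Note: the claim the test applies can be reconstructed from the last-applied-configuration echoed in the retry message above (ReadWriteOnce, 500Mi); a sketch of the same persistence check, leaving the pod manifest to testdata/storage-provisioner/pod.yaml as the test does:
kubectl --context functional-659667 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 500Mi
  volumeMode: Filesystem
EOF
kubectl --context functional-659667 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-659667 exec sp-pod -- touch /tmp/mount/foo    # write through the claim
kubectl --context functional-659667 delete -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-659667 apply -f testdata/storage-provisioner/pod.yaml
kubectl --context functional-659667 exec sp-pod -- ls /tmp/mount           # foo survives the pod recreation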

TestFunctional/parallel/SSHCmd (0.42s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.42s)

TestFunctional/parallel/CpCmd (1.64s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh -n functional-659667 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 cp functional-659667:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3993337924/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh -n functional-659667 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh -n functional-659667 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.64s)

TestFunctional/parallel/MySQL (24.04s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-659667 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-c847k" [7604a8cf-9e78-4158-bdb2-7b870f9c88b3] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-c847k" [7604a8cf-9e78-4158-bdb2-7b870f9c88b3] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.043086065s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-659667 exec mysql-6cdb49bbb-c847k -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-659667 exec mysql-6cdb49bbb-c847k -- mysql -ppassword -e "show databases;": exit status 1 (474.965097ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1205 20:33:38.751980  300765 retry.go:31] will retry after 1.238313605s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-659667 exec mysql-6cdb49bbb-c847k -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-659667 exec mysql-6cdb49bbb-c847k -- mysql -ppassword -e "show databases;": exit status 1 (160.779478ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1205 20:33:40.151904  300765 retry.go:31] will retry after 1.695327143s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-659667 exec mysql-6cdb49bbb-c847k -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.04s)
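Note: the two failed exec attempts above are expected; the pod reports Running before mysqld is actually accepting connections, so the test retries. A hand-rolled equivalent (assuming the mysql deployment from testdata/mysql.yaml used above, whose root password matches the -ppassword flag) is a plain retry loop:
until kubectl --context functional-659667 exec deploy/mysql -- mysql -ppassword -e "show databases;"; do
  sleep 2    # give mysqld a moment before the next attempt
done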

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/300765/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh "sudo cat /etc/test/nested/copy/300765/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)
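Note: FileSync verifies minikube's host-to-guest file sync: files placed under the host's ~/.minikube/files tree are copied into the VM at the mirrored path when the cluster starts (the 300765 in the path above appears to be this test run's process id). A rough sketch with a hypothetical file name:
# hypothetical path; the file shows up at /etc/myapp/conf inside the VM after the next start of the profile
mkdir -p ~/.minikube/files/etc/myapp
echo "hello" > ~/.minikube/files/etc/myapp/conf
out/minikube-linux-amd64 -p functional-659667 ssh "cat /etc/myapp/conf"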

TestFunctional/parallel/CertSync (1.62s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/300765.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh "sudo cat /etc/ssl/certs/300765.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/300765.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh "sudo cat /usr/share/ca-certificates/300765.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3007652.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh "sudo cat /etc/ssl/certs/3007652.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/3007652.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh "sudo cat /usr/share/ca-certificates/3007652.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.62s)
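Note: CertSync checks that a certificate dropped on the host is synced into the guest's trust locations, both as the named .pem copies and as the hash-named .0 alias that OpenSSL looks up. The same spot-check by hand, reusing the paths from this run (for your own CA the file would be the one placed under ~/.minikube/certs):
out/minikube-linux-amd64 -p functional-659667 ssh "sudo cat /etc/ssl/certs/300765.pem"
out/minikube-linux-amd64 -p functional-659667 ssh "sudo cat /usr/share/ca-certificates/300765.pem"
out/minikube-linux-amd64 -p functional-659667 ssh "sudo cat /etc/ssl/certs/51391683.0"    # hash-named alias checked by the test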

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-659667 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-659667 ssh "sudo systemctl is-active docker": exit status 1 (257.315409ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-659667 ssh "sudo systemctl is-active containerd": exit status 1 (254.663607ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.51s)
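Note: the non-zero exits above are the point of this test: with crio as the configured runtime, docker and containerd must not be active, and systemctl is-active exits with status 3 for an inactive unit (which ssh then propagates, as seen in the stderr blocks). A quick hand check:
out/minikube-linux-amd64 -p functional-659667 ssh "sudo systemctl is-active docker"        # expected: inactive, remote exit 3
out/minikube-linux-amd64 -p functional-659667 ssh "sudo systemctl is-active containerd"    # expected: inactive, remote exit 3
out/minikube-linux-amd64 -p functional-659667 ssh "sudo systemctl is-active crio"          # expected: active (assumption: crio is the runtime, per the profile config earlier in this log)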

TestFunctional/parallel/License (0.29s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.29s)

TestFunctional/parallel/ServiceCmd/DeployApp (10.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-659667 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-659667 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-z4kwf" [0992bb09-277d-4157-82f9-9b05e5bd4c6e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-z4kwf" [0992bb09-277d-4157-82f9-9b05e5bd4c6e] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.251342402s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.48s)

TestFunctional/parallel/Version/short (0.07s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 version --short
--- PASS: TestFunctional/parallel/Version/short (0.07s)

TestFunctional/parallel/Version/components (0.5s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.50s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-659667 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-659667
localhost/kicbase/echo-server:functional-659667
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-659667 image ls --format short --alsologtostderr:
I1205 20:33:52.612701  309960 out.go:345] Setting OutFile to fd 1 ...
I1205 20:33:52.614991  309960 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 20:33:52.615007  309960 out.go:358] Setting ErrFile to fd 2...
I1205 20:33:52.615014  309960 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 20:33:52.615449  309960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
I1205 20:33:52.616510  309960 config.go:182] Loaded profile config "functional-659667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 20:33:52.616666  309960 config.go:182] Loaded profile config "functional-659667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 20:33:52.617085  309960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 20:33:52.617138  309960 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 20:33:52.635442  309960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36599
I1205 20:33:52.635969  309960 main.go:141] libmachine: () Calling .GetVersion
I1205 20:33:52.636704  309960 main.go:141] libmachine: Using API Version  1
I1205 20:33:52.636747  309960 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 20:33:52.637255  309960 main.go:141] libmachine: () Calling .GetMachineName
I1205 20:33:52.637489  309960 main.go:141] libmachine: (functional-659667) Calling .GetState
I1205 20:33:52.640557  309960 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 20:33:52.640648  309960 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 20:33:52.657764  309960 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34279
I1205 20:33:52.658372  309960 main.go:141] libmachine: () Calling .GetVersion
I1205 20:33:52.658940  309960 main.go:141] libmachine: Using API Version  1
I1205 20:33:52.658977  309960 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 20:33:52.659370  309960 main.go:141] libmachine: () Calling .GetMachineName
I1205 20:33:52.659589  309960 main.go:141] libmachine: (functional-659667) Calling .DriverName
I1205 20:33:52.659827  309960 ssh_runner.go:195] Run: systemctl --version
I1205 20:33:52.659859  309960 main.go:141] libmachine: (functional-659667) Calling .GetSSHHostname
I1205 20:33:52.662928  309960 main.go:141] libmachine: (functional-659667) DBG | domain functional-659667 has defined MAC address 52:54:00:1c:e6:c6 in network mk-functional-659667
I1205 20:33:52.663524  309960 main.go:141] libmachine: (functional-659667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:e6:c6", ip: ""} in network mk-functional-659667: {Iface:virbr1 ExpiryTime:2024-12-05 21:31:12 +0000 UTC Type:0 Mac:52:54:00:1c:e6:c6 Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:functional-659667 Clientid:01:52:54:00:1c:e6:c6}
I1205 20:33:52.663557  309960 main.go:141] libmachine: (functional-659667) DBG | domain functional-659667 has defined IP address 192.168.50.108 and MAC address 52:54:00:1c:e6:c6 in network mk-functional-659667
I1205 20:33:52.663770  309960 main.go:141] libmachine: (functional-659667) Calling .GetSSHPort
I1205 20:33:52.664032  309960 main.go:141] libmachine: (functional-659667) Calling .GetSSHKeyPath
I1205 20:33:52.664231  309960 main.go:141] libmachine: (functional-659667) Calling .GetSSHUsername
I1205 20:33:52.664543  309960 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/functional-659667/id_rsa Username:docker}
I1205 20:33:52.749156  309960 ssh_runner.go:195] Run: sudo crictl images --output json
I1205 20:33:52.787546  309960 main.go:141] libmachine: Making call to close driver server
I1205 20:33:52.787566  309960 main.go:141] libmachine: (functional-659667) Calling .Close
I1205 20:33:52.787864  309960 main.go:141] libmachine: Successfully made call to close driver server
I1205 20:33:52.787883  309960 main.go:141] libmachine: Making call to close connection to plugin binary
I1205 20:33:52.787894  309960 main.go:141] libmachine: Making call to close driver server
I1205 20:33:52.787902  309960 main.go:141] libmachine: (functional-659667) Calling .Close
I1205 20:33:52.788161  309960 main.go:141] libmachine: Successfully made call to close driver server
I1205 20:33:52.788178  309960 main.go:141] libmachine: Making call to close connection to plugin binary
I1205 20:33:52.788203  309960 main.go:141] libmachine: (functional-659667) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.24s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-659667 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | 66f8bdd3810c9 | 196MB  |
| localhost/minikube-local-cache-test     | functional-659667  | cb6547f8b06f9 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 3a5bc24055c9e | 95MB   |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | 847c7bc1a5418 | 68.5MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/kube-apiserver          | v1.31.2            | 9499c9960544e | 95.3MB |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 0486b6c53a1b5 | 89.5MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 505d571f5fd56 | 92.8MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
| docker.io/library/nginx                 | alpine             | 91ca84b4f5779 | 54MB   |
| localhost/kicbase/echo-server           | functional-659667  | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-659667 image ls --format table --alsologtostderr:
I1205 20:33:53.162443  310132 out.go:345] Setting OutFile to fd 1 ...
I1205 20:33:53.162704  310132 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 20:33:53.162713  310132 out.go:358] Setting ErrFile to fd 2...
I1205 20:33:53.162719  310132 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 20:33:53.162930  310132 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
I1205 20:33:53.163583  310132 config.go:182] Loaded profile config "functional-659667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 20:33:53.163692  310132 config.go:182] Loaded profile config "functional-659667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 20:33:53.164041  310132 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 20:33:53.164087  310132 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 20:33:53.181048  310132 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41461
I1205 20:33:53.181621  310132 main.go:141] libmachine: () Calling .GetVersion
I1205 20:33:53.182346  310132 main.go:141] libmachine: Using API Version  1
I1205 20:33:53.182376  310132 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 20:33:53.182779  310132 main.go:141] libmachine: () Calling .GetMachineName
I1205 20:33:53.183002  310132 main.go:141] libmachine: (functional-659667) Calling .GetState
I1205 20:33:53.185009  310132 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 20:33:53.185058  310132 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 20:33:53.202628  310132 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36695
I1205 20:33:53.203230  310132 main.go:141] libmachine: () Calling .GetVersion
I1205 20:33:53.203875  310132 main.go:141] libmachine: Using API Version  1
I1205 20:33:53.203906  310132 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 20:33:53.204319  310132 main.go:141] libmachine: () Calling .GetMachineName
I1205 20:33:53.204546  310132 main.go:141] libmachine: (functional-659667) Calling .DriverName
I1205 20:33:53.204767  310132 ssh_runner.go:195] Run: systemctl --version
I1205 20:33:53.204800  310132 main.go:141] libmachine: (functional-659667) Calling .GetSSHHostname
I1205 20:33:53.207851  310132 main.go:141] libmachine: (functional-659667) DBG | domain functional-659667 has defined MAC address 52:54:00:1c:e6:c6 in network mk-functional-659667
I1205 20:33:53.208314  310132 main.go:141] libmachine: (functional-659667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:e6:c6", ip: ""} in network mk-functional-659667: {Iface:virbr1 ExpiryTime:2024-12-05 21:31:12 +0000 UTC Type:0 Mac:52:54:00:1c:e6:c6 Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:functional-659667 Clientid:01:52:54:00:1c:e6:c6}
I1205 20:33:53.208353  310132 main.go:141] libmachine: (functional-659667) DBG | domain functional-659667 has defined IP address 192.168.50.108 and MAC address 52:54:00:1c:e6:c6 in network mk-functional-659667
I1205 20:33:53.208468  310132 main.go:141] libmachine: (functional-659667) Calling .GetSSHPort
I1205 20:33:53.208668  310132 main.go:141] libmachine: (functional-659667) Calling .GetSSHKeyPath
I1205 20:33:53.208824  310132 main.go:141] libmachine: (functional-659667) Calling .GetSSHUsername
I1205 20:33:53.209014  310132 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/functional-659667/id_rsa Username:docker}
I1205 20:33:53.296379  310132 ssh_runner.go:195] Run: sudo crictl images --output json
I1205 20:33:53.354933  310132 main.go:141] libmachine: Making call to close driver server
I1205 20:33:53.354956  310132 main.go:141] libmachine: (functional-659667) Calling .Close
I1205 20:33:53.355352  310132 main.go:141] libmachine: Successfully made call to close driver server
I1205 20:33:53.355382  310132 main.go:141] libmachine: Making call to close connection to plugin binary
I1205 20:33:53.355395  310132 main.go:141] libmachine: Making call to close driver server
I1205 20:33:53.355405  310132 main.go:141] libmachine: (functional-659667) Calling .Close
I1205 20:33:53.355684  310132 main.go:141] libmachine: Successfully made call to close driver server
I1205 20:33:53.355706  310132 main.go:141] libmachine: Making call to close connection to plugin binary
I1205 20:33:53.355838  310132 main.go:141] libmachine: (functional-659667) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.25s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-659667 image ls --format json --alsologtostderr:
[{"id":"91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66","repoDigests":["docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4","docker.io/library/nginx@sha256:b1f7437a6d0398a47a5d74a1e178ea6fff3ea692c9e41d19c2b3f7ce52cdb371"],"repoTags":["docker.io/library/nginx:alpine"],"size":"53958631"},{"id":"0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c","registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"89474374"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c9
8fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"94965812"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["d
ocker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e","repoDigests":["docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42","docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.io/library/nginx:latest"],"size":"195919252"},{"id":"cb6547f8b06f9c2b55d41c86b1f4d68f3ee0416afb3bf4cc8fd03ef723f8c2ef","repoDigests":["localhost/minikube-local-cache-test@sha256:739b937d3d450e817343039ed4cc28e4c57ac255e514dcdd1bd1db6e241aaf8a"],"repoTags":["localhost/minikube-local-cache-test:functional-659667"],"size":"3330"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189
a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173","repoDigests":["registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfb
e0","registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"95274464"},{"id":"505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38","repoDigests":["registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b","registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"92783513"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scrap
er@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-659667"],"size":"4943877"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629af
b18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"68457798"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-659667 image ls --format json --alsologtostderr:
I1205 20:33:52.900436  310061 out.go:345] Setting OutFile to fd 1 ...
I1205 20:33:52.900546  310061 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 20:33:52.900556  310061 out.go:358] Setting ErrFile to fd 2...
I1205 20:33:52.900562  310061 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 20:33:52.900765  310061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
I1205 20:33:52.901405  310061 config.go:182] Loaded profile config "functional-659667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 20:33:52.901552  310061 config.go:182] Loaded profile config "functional-659667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 20:33:52.902001  310061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 20:33:52.902083  310061 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 20:33:52.917424  310061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45057
I1205 20:33:52.917941  310061 main.go:141] libmachine: () Calling .GetVersion
I1205 20:33:52.918658  310061 main.go:141] libmachine: Using API Version  1
I1205 20:33:52.918681  310061 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 20:33:52.919165  310061 main.go:141] libmachine: () Calling .GetMachineName
I1205 20:33:52.919505  310061 main.go:141] libmachine: (functional-659667) Calling .GetState
I1205 20:33:52.922340  310061 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 20:33:52.922452  310061 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 20:33:52.940503  310061 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46871
I1205 20:33:52.941048  310061 main.go:141] libmachine: () Calling .GetVersion
I1205 20:33:52.941588  310061 main.go:141] libmachine: Using API Version  1
I1205 20:33:52.941611  310061 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 20:33:52.941972  310061 main.go:141] libmachine: () Calling .GetMachineName
I1205 20:33:52.942148  310061 main.go:141] libmachine: (functional-659667) Calling .DriverName
I1205 20:33:52.942312  310061 ssh_runner.go:195] Run: systemctl --version
I1205 20:33:52.942345  310061 main.go:141] libmachine: (functional-659667) Calling .GetSSHHostname
I1205 20:33:52.945659  310061 main.go:141] libmachine: (functional-659667) DBG | domain functional-659667 has defined MAC address 52:54:00:1c:e6:c6 in network mk-functional-659667
I1205 20:33:52.946101  310061 main.go:141] libmachine: (functional-659667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:e6:c6", ip: ""} in network mk-functional-659667: {Iface:virbr1 ExpiryTime:2024-12-05 21:31:12 +0000 UTC Type:0 Mac:52:54:00:1c:e6:c6 Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:functional-659667 Clientid:01:52:54:00:1c:e6:c6}
I1205 20:33:52.946130  310061 main.go:141] libmachine: (functional-659667) DBG | domain functional-659667 has defined IP address 192.168.50.108 and MAC address 52:54:00:1c:e6:c6 in network mk-functional-659667
I1205 20:33:52.946301  310061 main.go:141] libmachine: (functional-659667) Calling .GetSSHPort
I1205 20:33:52.946502  310061 main.go:141] libmachine: (functional-659667) Calling .GetSSHKeyPath
I1205 20:33:52.946657  310061 main.go:141] libmachine: (functional-659667) Calling .GetSSHUsername
I1205 20:33:52.946793  310061 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/functional-659667/id_rsa Username:docker}
I1205 20:33:53.033515  310061 ssh_runner.go:195] Run: sudo crictl images --output json
I1205 20:33:53.104348  310061 main.go:141] libmachine: Making call to close driver server
I1205 20:33:53.104372  310061 main.go:141] libmachine: (functional-659667) Calling .Close
I1205 20:33:53.104612  310061 main.go:141] libmachine: Successfully made call to close driver server
I1205 20:33:53.104630  310061 main.go:141] libmachine: Making call to close connection to plugin binary
I1205 20:33:53.104645  310061 main.go:141] libmachine: Making call to close driver server
I1205 20:33:53.104653  310061 main.go:141] libmachine: (functional-659667) Calling .Close
I1205 20:33:53.104878  310061 main.go:141] libmachine: Successfully made call to close driver server
I1205 20:33:53.104894  310061 main.go:141] libmachine: (functional-659667) DBG | Closing plugin on server side
I1205 20:33:53.104896  310061 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.27s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-659667 image ls --format yaml --alsologtostderr:
- id: 3a5bc24055c9ebfdf31b23eef58eb4bb79b8e1772c483e6aebd2a99b41d99e52
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:e1b7077a015216fd2772941babf3d3204abecf98b97d82ecd149d00212c55fa7
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "94965812"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-659667
size: "4943877"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 0486b6c53a1b5af26f2ad2fb89a089e04c6baa6369f8545ab0854f9d62b44503
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:4ba16ce7d80945dc4bb8e85ac0794c6171bfa8a55c94fe5be415afb4c3eb938c
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "89474374"
- id: 847c7bc1a541865e150af08318f49d02d0e0cff4a0530fd4ffe369e294dd2856
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:a40aba236dfcd0fe9d1258dcb9d22a82d83e9ea6d35f7d3574949b12df928fe5
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "68457798"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 91ca84b4f57794f97f70443afccff26aed771e36bc48bad1e26c2ce66124ea66
repoDigests:
- docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4
- docker.io/library/nginx@sha256:b1f7437a6d0398a47a5d74a1e178ea6fff3ea692c9e41d19c2b3f7ce52cdb371
repoTags:
- docker.io/library/nginx:alpine
size: "53958631"
- id: cb6547f8b06f9c2b55d41c86b1f4d68f3ee0416afb3bf4cc8fd03ef723f8c2ef
repoDigests:
- localhost/minikube-local-cache-test@sha256:739b937d3d450e817343039ed4cc28e4c57ac255e514dcdd1bd1db6e241aaf8a
repoTags:
- localhost/minikube-local-cache-test:functional-659667
size: "3330"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 505d571f5fd56726488d27af0d9a8e02c6db58f5d62ea51dd10d47de7a0c2d38
repoDigests:
- registry.k8s.io/kube-proxy@sha256:22535649599e9f22b1b857afcbd9a8b36be238b2b3ea68e47f60bedcea48cd3b
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "92783513"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 66f8bdd3810c96dc5c28aec39583af731b34a2cd99471530f53c8794ed5b423e
repoDigests:
- docker.io/library/nginx@sha256:3d696e8357051647b844d8c7cf4a0aa71e84379999a4f6af9b8ca1f7919ade42
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "195919252"
- id: 9499c9960544e80a96c223cdc5d3059dd7c2cc37ea20e7138af4a6e415a49173
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
- registry.k8s.io/kube-apiserver@sha256:a4fdc0ebc2950d76f2859c5f38f2d05b760ed09fd8006d7a8d98dd9b30bc55da
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "95274464"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-659667 image ls --format yaml --alsologtostderr:
I1205 20:33:52.646293  309979 out.go:345] Setting OutFile to fd 1 ...
I1205 20:33:52.646403  309979 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 20:33:52.646416  309979 out.go:358] Setting ErrFile to fd 2...
I1205 20:33:52.646420  309979 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 20:33:52.646613  309979 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
I1205 20:33:52.647259  309979 config.go:182] Loaded profile config "functional-659667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 20:33:52.647387  309979 config.go:182] Loaded profile config "functional-659667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 20:33:52.647767  309979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 20:33:52.647807  309979 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 20:33:52.665249  309979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45209
I1205 20:33:52.665789  309979 main.go:141] libmachine: () Calling .GetVersion
I1205 20:33:52.666543  309979 main.go:141] libmachine: Using API Version  1
I1205 20:33:52.666569  309979 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 20:33:52.667066  309979 main.go:141] libmachine: () Calling .GetMachineName
I1205 20:33:52.667285  309979 main.go:141] libmachine: (functional-659667) Calling .GetState
I1205 20:33:52.669338  309979 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 20:33:52.669386  309979 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 20:33:52.686488  309979 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46411
I1205 20:33:52.687074  309979 main.go:141] libmachine: () Calling .GetVersion
I1205 20:33:52.687705  309979 main.go:141] libmachine: Using API Version  1
I1205 20:33:52.687737  309979 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 20:33:52.688460  309979 main.go:141] libmachine: () Calling .GetMachineName
I1205 20:33:52.688694  309979 main.go:141] libmachine: (functional-659667) Calling .DriverName
I1205 20:33:52.688920  309979 ssh_runner.go:195] Run: systemctl --version
I1205 20:33:52.688958  309979 main.go:141] libmachine: (functional-659667) Calling .GetSSHHostname
I1205 20:33:52.692618  309979 main.go:141] libmachine: (functional-659667) DBG | domain functional-659667 has defined MAC address 52:54:00:1c:e6:c6 in network mk-functional-659667
I1205 20:33:52.693041  309979 main.go:141] libmachine: (functional-659667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:e6:c6", ip: ""} in network mk-functional-659667: {Iface:virbr1 ExpiryTime:2024-12-05 21:31:12 +0000 UTC Type:0 Mac:52:54:00:1c:e6:c6 Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:functional-659667 Clientid:01:52:54:00:1c:e6:c6}
I1205 20:33:52.693065  309979 main.go:141] libmachine: (functional-659667) DBG | domain functional-659667 has defined IP address 192.168.50.108 and MAC address 52:54:00:1c:e6:c6 in network mk-functional-659667
I1205 20:33:52.693248  309979 main.go:141] libmachine: (functional-659667) Calling .GetSSHPort
I1205 20:33:52.693472  309979 main.go:141] libmachine: (functional-659667) Calling .GetSSHKeyPath
I1205 20:33:52.693637  309979 main.go:141] libmachine: (functional-659667) Calling .GetSSHUsername
I1205 20:33:52.693762  309979 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/functional-659667/id_rsa Username:docker}
I1205 20:33:52.793362  309979 ssh_runner.go:195] Run: sudo crictl images --output json
I1205 20:33:52.835382  309979 main.go:141] libmachine: Making call to close driver server
I1205 20:33:52.835400  309979 main.go:141] libmachine: (functional-659667) Calling .Close
I1205 20:33:52.835683  309979 main.go:141] libmachine: (functional-659667) DBG | Closing plugin on server side
I1205 20:33:52.835717  309979 main.go:141] libmachine: Successfully made call to close driver server
I1205 20:33:52.835732  309979 main.go:141] libmachine: Making call to close connection to plugin binary
I1205 20:33:52.835745  309979 main.go:141] libmachine: Making call to close driver server
I1205 20:33:52.835755  309979 main.go:141] libmachine: (functional-659667) Calling .Close
I1205 20:33:52.836061  309979 main.go:141] libmachine: (functional-659667) DBG | Closing plugin on server side
I1205 20:33:52.836126  309979 main.go:141] libmachine: Successfully made call to close driver server
I1205 20:33:52.836136  309979 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)
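
The YAML inventory above is what "image ls --format yaml" prints; as the --alsologtostderr trace shows, it is assembled from crictl on the node. A minimal sketch of reproducing the same listing by hand, using only commands visible in the trace:

# Same listing command the test ran:
out/minikube-linux-amd64 -p functional-659667 image ls --format yaml --alsologtostderr
# Or read the raw inventory straight from the node, which is what the command does under the hood:
out/minikube-linux-amd64 -p functional-659667 ssh "sudo crictl images --output json"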

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-659667 ssh pgrep buildkitd: exit status 1 (230.516915ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 image build -t localhost/my-image:functional-659667 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-659667 image build -t localhost/my-image:functional-659667 testdata/build --alsologtostderr: (3.004732381s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-659667 image build -t localhost/my-image:functional-659667 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 8fb363f5d27
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-659667
--> 0e1bfba3f47
Successfully tagged localhost/my-image:functional-659667
0e1bfba3f47906af1116a61a08bd9bcbb04235cb151571e556a3cb23c53939ba
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-659667 image build -t localhost/my-image:functional-659667 testdata/build --alsologtostderr:
I1205 20:33:53.080563  310114 out.go:345] Setting OutFile to fd 1 ...
I1205 20:33:53.080883  310114 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 20:33:53.080896  310114 out.go:358] Setting ErrFile to fd 2...
I1205 20:33:53.080903  310114 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1205 20:33:53.081148  310114 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
I1205 20:33:53.081882  310114 config.go:182] Loaded profile config "functional-659667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 20:33:53.082523  310114 config.go:182] Loaded profile config "functional-659667": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1205 20:33:53.082895  310114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 20:33:53.082937  310114 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 20:33:53.099128  310114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38507
I1205 20:33:53.099738  310114 main.go:141] libmachine: () Calling .GetVersion
I1205 20:33:53.100418  310114 main.go:141] libmachine: Using API Version  1
I1205 20:33:53.100444  310114 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 20:33:53.100771  310114 main.go:141] libmachine: () Calling .GetMachineName
I1205 20:33:53.100989  310114 main.go:141] libmachine: (functional-659667) Calling .GetState
I1205 20:33:53.103002  310114 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I1205 20:33:53.103054  310114 main.go:141] libmachine: Launching plugin server for driver kvm2
I1205 20:33:53.120289  310114 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33349
I1205 20:33:53.120876  310114 main.go:141] libmachine: () Calling .GetVersion
I1205 20:33:53.121425  310114 main.go:141] libmachine: Using API Version  1
I1205 20:33:53.121453  310114 main.go:141] libmachine: () Calling .SetConfigRaw
I1205 20:33:53.121780  310114 main.go:141] libmachine: () Calling .GetMachineName
I1205 20:33:53.122044  310114 main.go:141] libmachine: (functional-659667) Calling .DriverName
I1205 20:33:53.122216  310114 ssh_runner.go:195] Run: systemctl --version
I1205 20:33:53.122240  310114 main.go:141] libmachine: (functional-659667) Calling .GetSSHHostname
I1205 20:33:53.125320  310114 main.go:141] libmachine: (functional-659667) DBG | domain functional-659667 has defined MAC address 52:54:00:1c:e6:c6 in network mk-functional-659667
I1205 20:33:53.125712  310114 main.go:141] libmachine: (functional-659667) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:1c:e6:c6", ip: ""} in network mk-functional-659667: {Iface:virbr1 ExpiryTime:2024-12-05 21:31:12 +0000 UTC Type:0 Mac:52:54:00:1c:e6:c6 Iaid: IPaddr:192.168.50.108 Prefix:24 Hostname:functional-659667 Clientid:01:52:54:00:1c:e6:c6}
I1205 20:33:53.125750  310114 main.go:141] libmachine: (functional-659667) DBG | domain functional-659667 has defined IP address 192.168.50.108 and MAC address 52:54:00:1c:e6:c6 in network mk-functional-659667
I1205 20:33:53.125935  310114 main.go:141] libmachine: (functional-659667) Calling .GetSSHPort
I1205 20:33:53.126143  310114 main.go:141] libmachine: (functional-659667) Calling .GetSSHKeyPath
I1205 20:33:53.126349  310114 main.go:141] libmachine: (functional-659667) Calling .GetSSHUsername
I1205 20:33:53.126505  310114 sshutil.go:53] new ssh client: &{IP:192.168.50.108 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/functional-659667/id_rsa Username:docker}
I1205 20:33:53.209100  310114 build_images.go:161] Building image from path: /tmp/build.3604813776.tar
I1205 20:33:53.209164  310114 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1205 20:33:53.221110  310114 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3604813776.tar
I1205 20:33:53.230912  310114 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3604813776.tar: stat -c "%s %y" /var/lib/minikube/build/build.3604813776.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3604813776.tar': No such file or directory
I1205 20:33:53.230960  310114 ssh_runner.go:362] scp /tmp/build.3604813776.tar --> /var/lib/minikube/build/build.3604813776.tar (3072 bytes)
I1205 20:33:53.267626  310114 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3604813776
I1205 20:33:53.278754  310114 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3604813776 -xf /var/lib/minikube/build/build.3604813776.tar
I1205 20:33:53.289636  310114 crio.go:315] Building image: /var/lib/minikube/build/build.3604813776
I1205 20:33:53.289746  310114 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-659667 /var/lib/minikube/build/build.3604813776 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1205 20:33:55.977064  310114 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-659667 /var/lib/minikube/build/build.3604813776 --cgroup-manager=cgroupfs: (2.687272851s)
I1205 20:33:55.977193  310114 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3604813776
I1205 20:33:56.003250  310114 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3604813776.tar
I1205 20:33:56.023619  310114 build_images.go:217] Built localhost/my-image:functional-659667 from /tmp/build.3604813776.tar
I1205 20:33:56.023653  310114 build_images.go:133] succeeded building to: functional-659667
I1205 20:33:56.023660  310114 build_images.go:134] failed building to: 
I1205 20:33:56.023691  310114 main.go:141] libmachine: Making call to close driver server
I1205 20:33:56.023716  310114 main.go:141] libmachine: (functional-659667) Calling .Close
I1205 20:33:56.024068  310114 main.go:141] libmachine: Successfully made call to close driver server
I1205 20:33:56.024096  310114 main.go:141] libmachine: Making call to close connection to plugin binary
I1205 20:33:56.024115  310114 main.go:141] libmachine: Making call to close driver server
I1205 20:33:56.024118  310114 main.go:141] libmachine: (functional-659667) DBG | Closing plugin on server side
I1205 20:33:56.024125  310114 main.go:141] libmachine: (functional-659667) Calling .Close
I1205 20:33:56.024431  310114 main.go:141] libmachine: Successfully made call to close driver server
I1205 20:33:56.024447  310114 main.go:141] libmachine: Making call to close connection to plugin binary
I1205 20:33:56.024474  310114 main.go:141] libmachine: (functional-659667) DBG | Closing plugin on server side
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.48s)
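
The three STEP lines in the stdout above imply a Dockerfile of roughly this shape. The block below is a hypothetical reconstruction (the real contents of testdata/build and of content.txt may differ), followed by the same build and list commands the test used:

# Hypothetical stand-in for testdata/build, based on the STEP 1/3..3/3 lines above:
mkdir -p /tmp/build-sketch
printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/build-sketch/Dockerfile
echo "placeholder" > /tmp/build-sketch/content.txt
# Build inside the cluster's container runtime and confirm the tag, as the test does:
out/minikube-linux-amd64 -p functional-659667 image build -t localhost/my-image:functional-659667 /tmp/build-sketch --alsologtostderr
out/minikube-linux-amd64 -p functional-659667 image ls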

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (1.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.520483648s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-659667
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.54s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 image load --daemon kicbase/echo-server:functional-659667 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-659667 image load --daemon kicbase/echo-server:functional-659667 --alsologtostderr: (2.464011229s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.74s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 image load --daemon kicbase/echo-server:functional-659667 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.00s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-659667
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 image load --daemon kicbase/echo-server:functional-659667 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.08s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 image save kicbase/echo-server:functional-659667 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-659667 image save kicbase/echo-server:functional-659667 /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (3.05089669s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (3.05s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.94s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 service list -o json
functional_test.go:1489: (dbg) Done: out/minikube-linux-amd64 -p functional-659667 service list -o json: (1.189808409s)
functional_test.go:1494: Took "1.189968796s" to run "out/minikube-linux-amd64 -p functional-659667 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.19s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.72s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 image rm kicbase/echo-server:functional-659667 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.72s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-linux-amd64 -p functional-659667 image load /home/jenkins/workspace/KVM_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (4.494522764s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (4.76s)
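
Taken together, ImageSaveToFile, ImageRemove and ImageLoadFromFile exercise a save / remove / reload round-trip for a cached image. A condensed sketch using only flags that appear in the log (the tarball path is illustrative):

# Export the image from the cluster to a tarball, drop it, then restore it from the file:
out/minikube-linux-amd64 -p functional-659667 image save kicbase/echo-server:functional-659667 /tmp/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-659667 image rm kicbase/echo-server:functional-659667 --alsologtostderr
out/minikube-linux-amd64 -p functional-659667 image load /tmp/echo-server-save.tar --alsologtostderr
out/minikube-linux-amd64 -p functional-659667 image ls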

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.50.108:31304
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.50.108:31304
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.34s)
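
The ServiceCmd subtests resolve the same NodePort endpoint (192.168.50.108:31304 above) in list, JSON, HTTPS, template and plain-URL form. A short sketch of scripting against that output; the curl probe is an assumed follow-up, not something these subtests run:

# Capture the plain URL for the hello-node service and probe it:
URL=$(out/minikube-linux-amd64 -p functional-659667 service hello-node --url)
curl -fsS "$URL"
# Or extract just the node IP with the --format template seen in the log:
out/minikube-linux-amd64 -p functional-659667 service hello-node --url --format='{{.IP}}'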

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-659667
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 image save --daemon kicbase/echo-server:functional-659667 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-659667
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.89s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-659667 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-659667 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-659667 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 309212: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-659667 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.48s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-659667 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-659667 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [416b4d73-a736-4a68-bd5e-a8e3c13da534] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [416b4d73-a736-4a68-bd5e-a8e3c13da534] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 16.004549572s
I1205 20:33:51.223383  300765 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (16.40s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.36s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "284.61137ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "52.976154ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.34s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "297.312404ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "56.72642ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-659667 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.219.230 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-659667 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.12s)
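
The TunnelCmd serial group above amounts to: start minikube tunnel in the background, wait for the LoadBalancer service to receive an ingress IP, reach it directly, then tear the tunnel down. A hedged sketch of that flow (the curl probe and the explicit kill are illustrative; the test manages its own processes):

# Start the tunnel so LoadBalancer services get a reachable ingress IP:
out/minikube-linux-amd64 -p functional-659667 tunnel --alsologtostderr &
TUNNEL_PID=$!
# Read the ingress IP assigned to nginx-svc (same jsonpath the test queries), then hit it:
IP=$(kubectl --context functional-659667 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
curl -fsS "http://$IP"
# Tear the tunnel down again:
kill "$TUNNEL_PID"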

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (7.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-659667 /tmp/TestFunctionalparallelMountCmdany-port4222176365/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1733430831451201839" to /tmp/TestFunctionalparallelMountCmdany-port4222176365/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1733430831451201839" to /tmp/TestFunctionalparallelMountCmdany-port4222176365/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1733430831451201839" to /tmp/TestFunctionalparallelMountCmdany-port4222176365/001/test-1733430831451201839
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-659667 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (244.768305ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1205 20:33:51.696304  300765 retry.go:31] will retry after 539.259236ms: exit status 1
2024/12/05 20:33:51 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec  5 20:33 created-by-test
-rw-r--r-- 1 docker docker 24 Dec  5 20:33 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec  5 20:33 test-1733430831451201839
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh cat /mount-9p/test-1733430831451201839
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-659667 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f9cece87-68e3-4028-9097-e78f32fa5992] Pending
helpers_test.go:344: "busybox-mount" [f9cece87-68e3-4028-9097-e78f32fa5992] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f9cece87-68e3-4028-9097-e78f32fa5992] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [f9cece87-68e3-4028-9097-e78f32fa5992] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.005289936s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-659667 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-659667 /tmp/TestFunctionalparallelMountCmdany-port4222176365/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.66s)
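
The any-port subtest mounts a host temp directory into the guest at /mount-9p over 9p, verifies it from the node, and has a pod read and write files under it. A minimal sketch of the same flow (the host path is illustrative):

# Mount a host directory into the guest and verify the 9p mount, as the test does:
out/minikube-linux-amd64 mount -p functional-659667 /tmp/host-dir:/mount-9p --alsologtostderr -v=1 &
out/minikube-linux-amd64 -p functional-659667 ssh "findmnt -T /mount-9p | grep 9p"
out/minikube-linux-amd64 -p functional-659667 ssh -- ls -la /mount-9p
# Unmount when done:
out/minikube-linux-amd64 -p functional-659667 ssh "sudo umount -f /mount-9p"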

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (1.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-659667 /tmp/TestFunctionalparallelMountCmdspecific-port514139262/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-659667 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (211.91612ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1205 20:33:59.326894  300765 retry.go:31] will retry after 454.859224ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-659667 /tmp/TestFunctionalparallelMountCmdspecific-port514139262/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-659667 ssh "sudo umount -f /mount-9p": exit status 1 (221.227542ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-659667 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-659667 /tmp/TestFunctionalparallelMountCmdspecific-port514139262/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.74s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (0.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-659667 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1875894225/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-659667 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1875894225/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-659667 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1875894225/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-659667 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-659667 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-659667 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1875894225/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-659667 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1875894225/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-659667 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1875894225/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.76s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.03s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-659667
--- PASS: TestFunctional/delete_echo-server_images (0.03s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-659667
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-659667
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (198.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-689539 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1205 20:34:32.938353  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:36:49.077211  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:37:16.780777  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-689539 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m17.578842755s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (198.25s)
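
StartCluster brings up a three-control-plane (HA) cluster with the crio runtime in roughly 3m18s. A sketch of starting and inspecting such a cluster with the flags from the log; the kubectl check is an assumed follow-up rather than part of this subtest:

# Start an HA (multi-control-plane) cluster with the same flags the test used:
out/minikube-linux-amd64 start -p ha-689539 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2 --container-runtime=crio
# Check host/apiserver status across the nodes, then list them via kubectl:
out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr
kubectl --context ha-689539 get nodes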

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (6.28s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-689539 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-689539 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-689539 -- rollout status deployment/busybox: (4.011293793s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-689539 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-689539 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-689539 -- exec busybox-7dff88458-7ss94 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-689539 -- exec busybox-7dff88458-ns455 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-689539 -- exec busybox-7dff88458-qjqvr -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-689539 -- exec busybox-7dff88458-7ss94 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-689539 -- exec busybox-7dff88458-ns455 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-689539 -- exec busybox-7dff88458-qjqvr -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-689539 -- exec busybox-7dff88458-7ss94 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-689539 -- exec busybox-7dff88458-ns455 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-689539 -- exec busybox-7dff88458-qjqvr -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (6.28s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-689539 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-689539 -- exec busybox-7dff88458-7ss94 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-689539 -- exec busybox-7dff88458-7ss94 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-689539 -- exec busybox-7dff88458-ns455 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-689539 -- exec busybox-7dff88458-ns455 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-689539 -- exec busybox-7dff88458-qjqvr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-689539 -- exec busybox-7dff88458-qjqvr -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.25s)

                                                
                                    
x
+
TestMultiControlPlane/serial/AddWorkerNode (58.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-689539 -v=7 --alsologtostderr
E1205 20:38:16.320007  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:38:16.326478  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:38:16.337985  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:38:16.360185  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:38:16.401693  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:38:16.483238  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:38:16.644909  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:38:16.967063  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:38:17.608887  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:38:18.891192  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:38:21.452854  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:38:26.575197  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-689539 -v=7 --alsologtostderr: (57.372389703s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (58.25s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-689539 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.89s)

                                                
                                    
x
+
TestMultiControlPlane/serial/CopyFile (13.59s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 cp testdata/cp-test.txt ha-689539:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 cp ha-689539:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1989065978/001/cp-test_ha-689539.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 cp ha-689539:/home/docker/cp-test.txt ha-689539-m02:/home/docker/cp-test_ha-689539_ha-689539-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m02 "sudo cat /home/docker/cp-test_ha-689539_ha-689539-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 cp ha-689539:/home/docker/cp-test.txt ha-689539-m03:/home/docker/cp-test_ha-689539_ha-689539-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m03 "sudo cat /home/docker/cp-test_ha-689539_ha-689539-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 cp ha-689539:/home/docker/cp-test.txt ha-689539-m04:/home/docker/cp-test_ha-689539_ha-689539-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m04 "sudo cat /home/docker/cp-test_ha-689539_ha-689539-m04.txt"
E1205 20:38:36.817352  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 cp testdata/cp-test.txt ha-689539-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 cp ha-689539-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1989065978/001/cp-test_ha-689539-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 cp ha-689539-m02:/home/docker/cp-test.txt ha-689539:/home/docker/cp-test_ha-689539-m02_ha-689539.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539 "sudo cat /home/docker/cp-test_ha-689539-m02_ha-689539.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 cp ha-689539-m02:/home/docker/cp-test.txt ha-689539-m03:/home/docker/cp-test_ha-689539-m02_ha-689539-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m03 "sudo cat /home/docker/cp-test_ha-689539-m02_ha-689539-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 cp ha-689539-m02:/home/docker/cp-test.txt ha-689539-m04:/home/docker/cp-test_ha-689539-m02_ha-689539-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m04 "sudo cat /home/docker/cp-test_ha-689539-m02_ha-689539-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 cp testdata/cp-test.txt ha-689539-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 cp ha-689539-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1989065978/001/cp-test_ha-689539-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 cp ha-689539-m03:/home/docker/cp-test.txt ha-689539:/home/docker/cp-test_ha-689539-m03_ha-689539.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539 "sudo cat /home/docker/cp-test_ha-689539-m03_ha-689539.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 cp ha-689539-m03:/home/docker/cp-test.txt ha-689539-m02:/home/docker/cp-test_ha-689539-m03_ha-689539-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m02 "sudo cat /home/docker/cp-test_ha-689539-m03_ha-689539-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 cp ha-689539-m03:/home/docker/cp-test.txt ha-689539-m04:/home/docker/cp-test_ha-689539-m03_ha-689539-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m04 "sudo cat /home/docker/cp-test_ha-689539-m03_ha-689539-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 cp testdata/cp-test.txt ha-689539-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1989065978/001/cp-test_ha-689539-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt ha-689539:/home/docker/cp-test_ha-689539-m04_ha-689539.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539 "sudo cat /home/docker/cp-test_ha-689539-m04_ha-689539.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt ha-689539-m02:/home/docker/cp-test_ha-689539-m04_ha-689539-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m02 "sudo cat /home/docker/cp-test_ha-689539-m04_ha-689539-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 cp ha-689539-m04:/home/docker/cp-test.txt ha-689539-m03:/home/docker/cp-test_ha-689539-m04_ha-689539-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 ssh -n ha-689539-m03 "sudo cat /home/docker/cp-test_ha-689539-m04_ha-689539-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (13.59s)

TestMultiControlPlane/serial/DeleteSecondaryNode (16.8s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-689539 node delete m03 -v=7 --alsologtostderr: (16.035288818s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (16.80s)
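Note: the go-template passed to "kubectl get nodes" above walks every node in .items and prints the status of its Ready condition. The following is a minimal sketch only (not part of the test suite), running the same template shape with Go's text/template against simplified stand-in structs; the real kubectl call evaluates the lowercase JSON field names (.items, .status.conditions) against live API objects.
	// Sketch: same template shape as the Ready check above, run against stand-in structs.
	package main

	import (
		"os"
		"text/template"
	)

	type condition struct {
		Type   string
		Status string
	}

	type nodeStatus struct{ Conditions []condition }

	type node struct{ Status nodeStatus }

	type nodeList struct{ Items []node }

	func main() {
		tmpl := template.Must(template.New("ready").Parse(
			`{{range .Items}}{{range .Status.Conditions}}{{if eq .Type "Ready"}} {{.Status}}{{"\n"}}{{end}}{{end}}{{end}}`))

		list := nodeList{Items: []node{
			{Status: nodeStatus{Conditions: []condition{{Type: "Ready", Status: "True"}}}},
			{Status: nodeStatus{Conditions: []condition{{Type: "Ready", Status: "True"}}}},
		}}
		// Prints " True" on its own line for each node whose Ready condition is True,
		// which is the shape the test asserts on after deleting the secondary node.
		if err := tmpl.Execute(os.Stdout, list); err != nil {
			panic(err)
		}
	}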

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.67s)

TestMultiControlPlane/serial/RestartCluster (345.13s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 start -p ha-689539 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1205 20:51:49.076772  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:53:16.326016  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
E1205 20:54:39.390416  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 start -p ha-689539 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (5m44.33860381s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (345.13s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

TestMultiControlPlane/serial/AddSecondaryNode (79s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-689539 --control-plane -v=7 --alsologtostderr
E1205 20:56:49.076304  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 node add -p ha-689539 --control-plane -v=7 --alsologtostderr: (1m18.132341135s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-689539 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (79.00s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

TestJSONOutput/start/Command (55.86s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-222190 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E1205 20:58:16.320708  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-222190 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (55.860215948s)
--- PASS: TestJSONOutput/start/Command (55.86s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.71s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-222190 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.71s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.61s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-222190 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.61s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.35s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-222190 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-222190 --output=json --user=testUser: (7.346622075s)
--- PASS: TestJSONOutput/stop/Command (7.35s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-840437 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-840437 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (70.841245ms)
-- stdout --
	{"specversion":"1.0","id":"c3a18173-f03e-4bb2-b16c-0b2be7732a2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-840437] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"71f77631-0640-4904-aa8a-1a32c344c8c1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20053"}}
	{"specversion":"1.0","id":"7ee7b06b-398a-4d8a-b5ce-0a9292948ce0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"edcf1aae-88ba-4956-8d3f-259751fe6929","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig"}}
	{"specversion":"1.0","id":"0419f18f-1910-444f-82fe-cffcdfac9324","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube"}}
	{"specversion":"1.0","id":"dcaa50c5-c110-4253-ab19-fc493f7d5357","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"e985dbc7-9ffc-417c-a3bd-443ca6c3fa5e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"b498e84f-0248-48ec-a629-08093d7366f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-840437" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-840437
--- PASS: TestErrorJSONOutput (0.21s)
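For context, the stdout block above is the CloudEvents-style stream that --output=json produces: one JSON object per line, with the error event carrying exitcode, name, and message. A rough consumer sketch follows; it is not minikube code, and the struct only mirrors the fields visible in the captured output.
	// Sketch: decode minikube's --output=json lines (shape taken from the log above).
	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	type cloudEvent struct {
		SpecVersion string            `json:"specversion"`
		ID          string            `json:"id"`
		Source      string            `json:"source"`
		Type        string            `json:"type"`
		Data        map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var ev cloudEvent
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // ignore anything that is not a JSON event line
			}
			if ev.Type == "io.k8s.sigs.minikube.error" {
				// The failed-driver event above carries exitcode "56" and name "DRV_UNSUPPORTED_OS".
				fmt.Printf("error %s (exit code %s): %s\n",
					ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
			}
		}
	}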

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (87.13s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-659474 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-659474 --driver=kvm2  --container-runtime=crio: (41.783379372s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-675364 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-675364 --driver=kvm2  --container-runtime=crio: (42.100697903s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-659474
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-675364
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-675364" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-675364
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-675364: (1.057226728s)
helpers_test.go:175: Cleaning up "first-659474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-659474
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-659474: (1.028442419s)
--- PASS: TestMinikubeProfile (87.13s)

TestMountStart/serial/StartWithMountFirst (26.68s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-952337 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-952337 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (25.676847383s)
--- PASS: TestMountStart/serial/StartWithMountFirst (26.68s)

TestMountStart/serial/VerifyMountFirst (0.4s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-952337 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-952337 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)

TestMountStart/serial/StartWithMountSecond (34.63s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-971782 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-971782 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (33.625004729s)
--- PASS: TestMountStart/serial/StartWithMountSecond (34.63s)

TestMountStart/serial/VerifyMountSecond (0.4s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971782 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971782 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.40s)

TestMountStart/serial/DeleteFirst (0.75s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-952337 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.75s)

TestMountStart/serial/VerifyMountPostDelete (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971782 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971782 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.40s)

TestMountStart/serial/Stop (1.28s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-971782
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-971782: (1.282564719s)
--- PASS: TestMountStart/serial/Stop (1.28s)

TestMountStart/serial/RestartStopped (22.93s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-971782
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-971782: (21.93121746s)
--- PASS: TestMountStart/serial/RestartStopped (22.93s)

TestMountStart/serial/VerifyMountPostStop (0.4s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971782 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-971782 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.40s)

TestMultiNode/serial/FreshStart2Nodes (111.3s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-784478 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E1205 21:01:49.076323  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:03:16.319524  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-784478 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m50.86223566s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.30s)

TestMultiNode/serial/DeployApp2Nodes (6.14s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-784478 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-784478 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-784478 -- rollout status deployment/busybox: (4.505587929s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-784478 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-784478 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-784478 -- exec busybox-7dff88458-hkdvh -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-784478 -- exec busybox-7dff88458-tjfng -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-784478 -- exec busybox-7dff88458-hkdvh -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-784478 -- exec busybox-7dff88458-tjfng -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-784478 -- exec busybox-7dff88458-hkdvh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-784478 -- exec busybox-7dff88458-tjfng -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.14s)

TestMultiNode/serial/PingHostFrom2Pods (0.85s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-784478 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-784478 -- exec busybox-7dff88458-hkdvh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-784478 -- exec busybox-7dff88458-hkdvh -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-784478 -- exec busybox-7dff88458-tjfng -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-784478 -- exec busybox-7dff88458-tjfng -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.85s)

TestMultiNode/serial/AddNode (48.19s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-784478 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-784478 -v 3 --alsologtostderr: (47.607843603s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (48.19s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-784478 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.61s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.61s)

TestMultiNode/serial/CopyFile (7.53s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 cp testdata/cp-test.txt multinode-784478:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 ssh -n multinode-784478 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 cp multinode-784478:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile478551597/001/cp-test_multinode-784478.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 ssh -n multinode-784478 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 cp multinode-784478:/home/docker/cp-test.txt multinode-784478-m02:/home/docker/cp-test_multinode-784478_multinode-784478-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 ssh -n multinode-784478 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 ssh -n multinode-784478-m02 "sudo cat /home/docker/cp-test_multinode-784478_multinode-784478-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 cp multinode-784478:/home/docker/cp-test.txt multinode-784478-m03:/home/docker/cp-test_multinode-784478_multinode-784478-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 ssh -n multinode-784478 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 ssh -n multinode-784478-m03 "sudo cat /home/docker/cp-test_multinode-784478_multinode-784478-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 cp testdata/cp-test.txt multinode-784478-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 ssh -n multinode-784478-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 cp multinode-784478-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile478551597/001/cp-test_multinode-784478-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 ssh -n multinode-784478-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 cp multinode-784478-m02:/home/docker/cp-test.txt multinode-784478:/home/docker/cp-test_multinode-784478-m02_multinode-784478.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 ssh -n multinode-784478-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 ssh -n multinode-784478 "sudo cat /home/docker/cp-test_multinode-784478-m02_multinode-784478.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 cp multinode-784478-m02:/home/docker/cp-test.txt multinode-784478-m03:/home/docker/cp-test_multinode-784478-m02_multinode-784478-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 ssh -n multinode-784478-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 ssh -n multinode-784478-m03 "sudo cat /home/docker/cp-test_multinode-784478-m02_multinode-784478-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 cp testdata/cp-test.txt multinode-784478-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 ssh -n multinode-784478-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 cp multinode-784478-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile478551597/001/cp-test_multinode-784478-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 ssh -n multinode-784478-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 cp multinode-784478-m03:/home/docker/cp-test.txt multinode-784478:/home/docker/cp-test_multinode-784478-m03_multinode-784478.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 ssh -n multinode-784478-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 ssh -n multinode-784478 "sudo cat /home/docker/cp-test_multinode-784478-m03_multinode-784478.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 cp multinode-784478-m03:/home/docker/cp-test.txt multinode-784478-m02:/home/docker/cp-test_multinode-784478-m03_multinode-784478-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 ssh -n multinode-784478-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 ssh -n multinode-784478-m02 "sudo cat /home/docker/cp-test_multinode-784478-m03_multinode-784478-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.53s)

TestMultiNode/serial/StopNode (2.38s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-784478 node stop m03: (1.4626504s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-784478 status: exit status 7 (450.253492ms)
-- stdout --
	multinode-784478
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-784478-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-784478-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-784478 status --alsologtostderr: exit status 7 (464.331365ms)
-- stdout --
	multinode-784478
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-784478-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-784478-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1205 21:04:38.859553  327447 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:04:38.859679  327447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:04:38.859689  327447 out.go:358] Setting ErrFile to fd 2...
	I1205 21:04:38.859692  327447 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:04:38.859902  327447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 21:04:38.860086  327447 out.go:352] Setting JSON to false
	I1205 21:04:38.860125  327447 mustload.go:65] Loading cluster: multinode-784478
	I1205 21:04:38.860240  327447 notify.go:220] Checking for updates...
	I1205 21:04:38.860729  327447 config.go:182] Loaded profile config "multinode-784478": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:04:38.860758  327447 status.go:174] checking status of multinode-784478 ...
	I1205 21:04:38.861345  327447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:04:38.861442  327447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:04:38.890752  327447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43765
	I1205 21:04:38.891339  327447 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:04:38.892025  327447 main.go:141] libmachine: Using API Version  1
	I1205 21:04:38.892058  327447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:04:38.892569  327447 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:04:38.892799  327447 main.go:141] libmachine: (multinode-784478) Calling .GetState
	I1205 21:04:38.894795  327447 status.go:371] multinode-784478 host status = "Running" (err=<nil>)
	I1205 21:04:38.894826  327447 host.go:66] Checking if "multinode-784478" exists ...
	I1205 21:04:38.895141  327447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:04:38.895205  327447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:04:38.911795  327447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36567
	I1205 21:04:38.912281  327447 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:04:38.912842  327447 main.go:141] libmachine: Using API Version  1
	I1205 21:04:38.912874  327447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:04:38.913331  327447 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:04:38.913545  327447 main.go:141] libmachine: (multinode-784478) Calling .GetIP
	I1205 21:04:38.916972  327447 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:04:38.917472  327447 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:04:38.917504  327447 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:04:38.917696  327447 host.go:66] Checking if "multinode-784478" exists ...
	I1205 21:04:38.918030  327447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:04:38.918082  327447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:04:38.935694  327447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35179
	I1205 21:04:38.936241  327447 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:04:38.936856  327447 main.go:141] libmachine: Using API Version  1
	I1205 21:04:38.936887  327447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:04:38.937296  327447 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:04:38.937526  327447 main.go:141] libmachine: (multinode-784478) Calling .DriverName
	I1205 21:04:38.937784  327447 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 21:04:38.937808  327447 main.go:141] libmachine: (multinode-784478) Calling .GetSSHHostname
	I1205 21:04:38.941656  327447 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:04:38.942287  327447 main.go:141] libmachine: (multinode-784478) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:da:fc:d3", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:01:56 +0000 UTC Type:0 Mac:52:54:00:da:fc:d3 Iaid: IPaddr:192.168.39.221 Prefix:24 Hostname:multinode-784478 Clientid:01:52:54:00:da:fc:d3}
	I1205 21:04:38.942318  327447 main.go:141] libmachine: (multinode-784478) DBG | domain multinode-784478 has defined IP address 192.168.39.221 and MAC address 52:54:00:da:fc:d3 in network mk-multinode-784478
	I1205 21:04:38.942501  327447 main.go:141] libmachine: (multinode-784478) Calling .GetSSHPort
	I1205 21:04:38.942745  327447 main.go:141] libmachine: (multinode-784478) Calling .GetSSHKeyPath
	I1205 21:04:38.942944  327447 main.go:141] libmachine: (multinode-784478) Calling .GetSSHUsername
	I1205 21:04:38.943143  327447 sshutil.go:53] new ssh client: &{IP:192.168.39.221 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/multinode-784478/id_rsa Username:docker}
	I1205 21:04:39.021292  327447 ssh_runner.go:195] Run: systemctl --version
	I1205 21:04:39.028201  327447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:04:39.044616  327447 kubeconfig.go:125] found "multinode-784478" server: "https://192.168.39.221:8443"
	I1205 21:04:39.044671  327447 api_server.go:166] Checking apiserver status ...
	I1205 21:04:39.044737  327447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1205 21:04:39.059967  327447 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1053/cgroup
	W1205 21:04:39.071058  327447 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1053/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1205 21:04:39.071135  327447 ssh_runner.go:195] Run: ls
	I1205 21:04:39.075770  327447 api_server.go:253] Checking apiserver healthz at https://192.168.39.221:8443/healthz ...
	I1205 21:04:39.080244  327447 api_server.go:279] https://192.168.39.221:8443/healthz returned 200:
	ok
	I1205 21:04:39.080301  327447 status.go:463] multinode-784478 apiserver status = Running (err=<nil>)
	I1205 21:04:39.080315  327447 status.go:176] multinode-784478 status: &{Name:multinode-784478 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1205 21:04:39.080338  327447 status.go:174] checking status of multinode-784478-m02 ...
	I1205 21:04:39.080718  327447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:04:39.080772  327447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:04:39.097560  327447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42687
	I1205 21:04:39.098053  327447 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:04:39.098630  327447 main.go:141] libmachine: Using API Version  1
	I1205 21:04:39.098655  327447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:04:39.099026  327447 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:04:39.099238  327447 main.go:141] libmachine: (multinode-784478-m02) Calling .GetState
	I1205 21:04:39.100858  327447 status.go:371] multinode-784478-m02 host status = "Running" (err=<nil>)
	I1205 21:04:39.100876  327447 host.go:66] Checking if "multinode-784478-m02" exists ...
	I1205 21:04:39.101190  327447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:04:39.101231  327447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:04:39.117990  327447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35895
	I1205 21:04:39.118512  327447 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:04:39.119054  327447 main.go:141] libmachine: Using API Version  1
	I1205 21:04:39.119077  327447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:04:39.119455  327447 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:04:39.119722  327447 main.go:141] libmachine: (multinode-784478-m02) Calling .GetIP
	I1205 21:04:39.122624  327447 main.go:141] libmachine: (multinode-784478-m02) DBG | domain multinode-784478-m02 has defined MAC address 52:54:00:89:4b:0b in network mk-multinode-784478
	I1205 21:04:39.123086  327447 main.go:141] libmachine: (multinode-784478-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:4b:0b", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:03:00 +0000 UTC Type:0 Mac:52:54:00:89:4b:0b Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-784478-m02 Clientid:01:52:54:00:89:4b:0b}
	I1205 21:04:39.123129  327447 main.go:141] libmachine: (multinode-784478-m02) DBG | domain multinode-784478-m02 has defined IP address 192.168.39.213 and MAC address 52:54:00:89:4b:0b in network mk-multinode-784478
	I1205 21:04:39.123242  327447 host.go:66] Checking if "multinode-784478-m02" exists ...
	I1205 21:04:39.123681  327447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:04:39.123731  327447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:04:39.140914  327447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34327
	I1205 21:04:39.141501  327447 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:04:39.142084  327447 main.go:141] libmachine: Using API Version  1
	I1205 21:04:39.142113  327447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:04:39.142502  327447 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:04:39.142702  327447 main.go:141] libmachine: (multinode-784478-m02) Calling .DriverName
	I1205 21:04:39.143021  327447 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1205 21:04:39.143057  327447 main.go:141] libmachine: (multinode-784478-m02) Calling .GetSSHHostname
	I1205 21:04:39.146235  327447 main.go:141] libmachine: (multinode-784478-m02) DBG | domain multinode-784478-m02 has defined MAC address 52:54:00:89:4b:0b in network mk-multinode-784478
	I1205 21:04:39.146728  327447 main.go:141] libmachine: (multinode-784478-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:89:4b:0b", ip: ""} in network mk-multinode-784478: {Iface:virbr1 ExpiryTime:2024-12-05 22:03:00 +0000 UTC Type:0 Mac:52:54:00:89:4b:0b Iaid: IPaddr:192.168.39.213 Prefix:24 Hostname:multinode-784478-m02 Clientid:01:52:54:00:89:4b:0b}
	I1205 21:04:39.146763  327447 main.go:141] libmachine: (multinode-784478-m02) DBG | domain multinode-784478-m02 has defined IP address 192.168.39.213 and MAC address 52:54:00:89:4b:0b in network mk-multinode-784478
	I1205 21:04:39.146949  327447 main.go:141] libmachine: (multinode-784478-m02) Calling .GetSSHPort
	I1205 21:04:39.147134  327447 main.go:141] libmachine: (multinode-784478-m02) Calling .GetSSHKeyPath
	I1205 21:04:39.147277  327447 main.go:141] libmachine: (multinode-784478-m02) Calling .GetSSHUsername
	I1205 21:04:39.147416  327447 sshutil.go:53] new ssh client: &{IP:192.168.39.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/20053-293485/.minikube/machines/multinode-784478-m02/id_rsa Username:docker}
	I1205 21:04:39.233217  327447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1205 21:04:39.247418  327447 status.go:176] multinode-784478-m02 status: &{Name:multinode-784478-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1205 21:04:39.247461  327447 status.go:174] checking status of multinode-784478-m03 ...
	I1205 21:04:39.247859  327447 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I1205 21:04:39.247908  327447 main.go:141] libmachine: Launching plugin server for driver kvm2
	I1205 21:04:39.265458  327447 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39391
	I1205 21:04:39.266180  327447 main.go:141] libmachine: () Calling .GetVersion
	I1205 21:04:39.266750  327447 main.go:141] libmachine: Using API Version  1
	I1205 21:04:39.266772  327447 main.go:141] libmachine: () Calling .SetConfigRaw
	I1205 21:04:39.267189  327447 main.go:141] libmachine: () Calling .GetMachineName
	I1205 21:04:39.267401  327447 main.go:141] libmachine: (multinode-784478-m03) Calling .GetState
	I1205 21:04:39.269135  327447 status.go:371] multinode-784478-m03 host status = "Stopped" (err=<nil>)
	I1205 21:04:39.269156  327447 status.go:384] host is not running, skipping remaining checks
	I1205 21:04:39.269162  327447 status.go:176] multinode-784478-m03 status: &{Name:multinode-784478-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.38s)
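
The StopNode status pass above reduces to two shell probes over SSH: `systemctl is-active --quiet service kubelet` to decide whether Kubelet is Running or Stopped, and `df -h /var` for disk usage. The sketch below reproduces those two probes locally in Go; it assumes a Linux host with systemd and is illustrative only, not minikube's status implementation.

// kubeletprobe.go: a minimal local sketch of the two status probes logged
// above. Assumes a Linux host with systemd; illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// systemctl is-active --quiet exits 0 only when the unit is active,
	// which is the signal the status check above keys off.
	kubelet := "Running"
	if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
		kubelet = "Stopped"
	}
	fmt.Println("Kubelet:", kubelet)

	// The companion probe: percentage used on /var (column 5 of df -h).
	out, err := exec.Command("sh", "-c", "df -h /var | awk 'NR==2{print $5}'").Output()
	if err == nil {
		fmt.Print("/var usage: ", string(out))
	}
}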

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (39.01s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 node start m03 -v=7 --alsologtostderr
E1205 21:04:52.144525  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-784478 node start m03 -v=7 --alsologtostderr: (38.366943512s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (39.01s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (2.46s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-784478 node delete m03: (1.917401632s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.46s)
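
The go-template handed to kubectl above walks every node's conditions and prints the status of each condition whose type is Ready, one line per node. The sketch below runs the same template with Go's text/template against a hand-written two-node JSON document, so the expected " True" lines can be seen without a cluster; the sample data is invented for illustration and is not taken from the cluster above.

// readytemplate.go: the node-Ready go-template from the test, evaluated
// against a small hand-written stand-in for `kubectl get nodes -o json`.
package main

import (
	"encoding/json"
	"log"
	"os"
	"text/template"
)

// The template string from the test: print .status of every condition
// whose .type equals "Ready".
const readyTmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

// Illustrative two-node document; names and values are made up.
const nodesJSON = `{
  "items": [
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}},
    {"status": {"conditions": [{"type": "Ready", "status": "True"}]}}
  ]
}`

func main() {
	var nodes map[string]interface{}
	if err := json.Unmarshal([]byte(nodesJSON), &nodes); err != nil {
		log.Fatal(err)
	}
	t := template.Must(template.New("ready").Parse(readyTmpl))
	// Prints one " True" line per Ready condition, the output the test inspects.
	if err := t.Execute(os.Stdout, nodes); err != nil {
		log.Fatal(err)
	}
}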

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (184.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-784478 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-784478 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m4.339710398s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-784478 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (184.91s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (45.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-784478
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-784478-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-784478-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (82.292863ms)

                                                
                                                
-- stdout --
	* [multinode-784478-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20053
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-784478-m02' is duplicated with machine name 'multinode-784478-m02' in profile 'multinode-784478'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-784478-m03 --driver=kvm2  --container-runtime=crio
E1205 21:16:49.076192  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-784478-m03 --driver=kvm2  --container-runtime=crio: (43.916895325s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-784478
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-784478: exit status 80 (213.357461ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-784478 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-784478-m03 already exists in multinode-784478-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-784478-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-784478-m03: (1.045941994s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (45.31s)
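
Both failures above come from the same uniqueness rule: a new profile name may not collide with a machine name already owned by an existing profile, and a node cannot be added when its generated name is already taken by a standalone profile. The sketch below restates the first rule over a hard-coded profile map; the map contents and the function name are hypothetical, not read from the .minikube directory.

// nameconflict.go: an illustrative restatement of the profile-name
// uniqueness check rejected with MK_USAGE above. Data is hypothetical.
package main

import "fmt"

// existing maps a profile name to the machine names it owns
// (control plane plus workers).
var existing = map[string][]string{
	"multinode-784478": {"multinode-784478", "multinode-784478-m02", "multinode-784478-m03"},
}

func validateProfileName(name string) error {
	for profile, machines := range existing {
		for _, m := range machines {
			if m == name {
				return fmt.Errorf("profile name %q is duplicated with machine name %q in profile %q", name, m, profile)
			}
		}
	}
	return nil
}

func main() {
	fmt.Println(validateProfileName("multinode-784478-m02")) // rejected, as above
	fmt.Println(validateProfileName("fresh-profile"))        // <nil>, accepted
}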

                                                
                                    
x
+
TestScheduledStopUnix (116.24s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-923474 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-923474 --memory=2048 --driver=kvm2  --container-runtime=crio: (44.442277446s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-923474 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-923474 -n scheduled-stop-923474
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-923474 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1205 21:20:51.463139  300765 retry.go:31] will retry after 127.934µs: open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/scheduled-stop-923474/pid: no such file or directory
I1205 21:20:51.464292  300765 retry.go:31] will retry after 213.376µs: open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/scheduled-stop-923474/pid: no such file or directory
I1205 21:20:51.465472  300765 retry.go:31] will retry after 218.022µs: open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/scheduled-stop-923474/pid: no such file or directory
I1205 21:20:51.466606  300765 retry.go:31] will retry after 382.249µs: open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/scheduled-stop-923474/pid: no such file or directory
I1205 21:20:51.467724  300765 retry.go:31] will retry after 510.629µs: open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/scheduled-stop-923474/pid: no such file or directory
I1205 21:20:51.468935  300765 retry.go:31] will retry after 1.048216ms: open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/scheduled-stop-923474/pid: no such file or directory
I1205 21:20:51.470087  300765 retry.go:31] will retry after 1.036337ms: open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/scheduled-stop-923474/pid: no such file or directory
I1205 21:20:51.471236  300765 retry.go:31] will retry after 2.093801ms: open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/scheduled-stop-923474/pid: no such file or directory
I1205 21:20:51.473464  300765 retry.go:31] will retry after 3.55784ms: open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/scheduled-stop-923474/pid: no such file or directory
I1205 21:20:51.477705  300765 retry.go:31] will retry after 2.415319ms: open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/scheduled-stop-923474/pid: no such file or directory
I1205 21:20:51.480946  300765 retry.go:31] will retry after 5.477142ms: open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/scheduled-stop-923474/pid: no such file or directory
I1205 21:20:51.487204  300765 retry.go:31] will retry after 11.741771ms: open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/scheduled-stop-923474/pid: no such file or directory
I1205 21:20:51.499679  300765 retry.go:31] will retry after 15.8111ms: open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/scheduled-stop-923474/pid: no such file or directory
I1205 21:20:51.516160  300765 retry.go:31] will retry after 10.180288ms: open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/scheduled-stop-923474/pid: no such file or directory
I1205 21:20:51.527494  300765 retry.go:31] will retry after 16.290104ms: open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/scheduled-stop-923474/pid: no such file or directory
I1205 21:20:51.544793  300765 retry.go:31] will retry after 25.142321ms: open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/scheduled-stop-923474/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-923474 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-923474 -n scheduled-stop-923474
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-923474
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-923474 --schedule 15s
E1205 21:21:32.148362  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1205 21:21:49.076921  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-923474
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-923474: exit status 7 (75.763455ms)

                                                
                                                
-- stdout --
	scheduled-stop-923474
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-923474 -n scheduled-stop-923474
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-923474 -n scheduled-stop-923474: exit status 7 (78.271234ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-923474" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-923474
--- PASS: TestScheduledStopUnix (116.24s)
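
The "will retry after ..." lines above come from a poll-with-growing-backoff loop waiting for the scheduled-stop pid file to appear. A minimal sketch of that pattern follows, assuming a placeholder path and timeout; the real loop lives in retry.go and differs in detail.

// pidwait.go: a sketch of polling for a file with increasing backoff,
// mirroring the "will retry after" log lines above. Path, timeout, and the
// jitter scheme are placeholders.
package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

func waitForFile(path string, maxWait time.Duration) error {
	backoff := 100 * time.Microsecond
	deadline := time.Now().Add(maxWait)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		} else {
			fmt.Printf("will retry after %v: %v\n", backoff, err)
		}
		time.Sleep(backoff)
		// Roughly double the delay and add a little jitter each round,
		// as the intervals in the log above do.
		backoff = backoff*2 + time.Duration(rand.Intn(100))*time.Microsecond
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForFile("/tmp/scheduled-stop-example/pid", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}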

                                                
                                    
x
+
TestRunningBinaryUpgrade (179.84s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3655267382 start -p running-upgrade-797218 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3655267382 start -p running-upgrade-797218 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m29.95589162s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-797218 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-797218 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m27.980695575s)
helpers_test.go:175: Cleaning up "running-upgrade-797218" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-797218
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-797218: (1.396397442s)
--- PASS: TestRunningBinaryUpgrade (179.84s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-019732 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-019732 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (91.79634ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-019732] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20053
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (94.53s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-019732 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-019732 --driver=kvm2  --container-runtime=crio: (1m34.265063367s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-019732 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (94.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-279893 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-279893 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (120.64831ms)

                                                
                                                
-- stdout --
	* [false-279893] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=20053
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1205 21:22:06.303907  335702 out.go:345] Setting OutFile to fd 1 ...
	I1205 21:22:06.304076  335702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:22:06.304087  335702 out.go:358] Setting ErrFile to fd 2...
	I1205 21:22:06.304091  335702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1205 21:22:06.304286  335702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20053-293485/.minikube/bin
	I1205 21:22:06.304898  335702 out.go:352] Setting JSON to false
	I1205 21:22:06.306009  335702 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-2","uptime":14674,"bootTime":1733419052,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1071-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1205 21:22:06.306137  335702 start.go:139] virtualization: kvm guest
	I1205 21:22:06.308190  335702 out.go:177] * [false-279893] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I1205 21:22:06.309674  335702 out.go:177]   - MINIKUBE_LOCATION=20053
	I1205 21:22:06.309689  335702 notify.go:220] Checking for updates...
	I1205 21:22:06.312360  335702 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1205 21:22:06.313760  335702 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20053-293485/kubeconfig
	I1205 21:22:06.315419  335702 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20053-293485/.minikube
	I1205 21:22:06.316580  335702 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1205 21:22:06.317975  335702 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1205 21:22:06.319978  335702 config.go:182] Loaded profile config "NoKubernetes-019732": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:22:06.320145  335702 config.go:182] Loaded profile config "force-systemd-env-024419": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:22:06.320292  335702 config.go:182] Loaded profile config "offline-crio-939726": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1205 21:22:06.320421  335702 driver.go:394] Setting default libvirt URI to qemu:///system
	I1205 21:22:06.362684  335702 out.go:177] * Using the kvm2 driver based on user configuration
	I1205 21:22:06.365464  335702 start.go:297] selected driver: kvm2
	I1205 21:22:06.365501  335702 start.go:901] validating driver "kvm2" against <nil>
	I1205 21:22:06.365517  335702 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1205 21:22:06.367681  335702 out.go:201] 
	W1205 21:22:06.369061  335702 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1205 21:22:06.370302  335702 out.go:201] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-279893 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-279893

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-279893

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-279893

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-279893

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-279893

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-279893

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-279893

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-279893

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-279893

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-279893

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-279893

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-279893" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-279893" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-279893

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-279893"

                                                
                                                
----------------------- debugLogs end: false-279893 [took: 3.042642202s] --------------------------------
helpers_test.go:175: Cleaning up "false-279893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-279893
--- PASS: TestNetworkPlugins/group/false (3.33s)
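
The MK_USAGE rejection above encodes a simple precondition: the crio runtime needs a CNI, so --cni=false is refused before any VM is created. A sketch of that rule follows; the function name and accepted values are illustrative, not minikube's actual validation code.

// cnicheck.go: an illustrative restatement of the "crio requires CNI" rule
// that fails the false-CNI start above.
package main

import "fmt"

func validateCNI(runtime, cni string) error {
	if runtime == "crio" && cni == "false" {
		return fmt.Errorf("the %q container runtime requires CNI", runtime)
	}
	return nil
}

func main() {
	fmt.Println(validateCNI("crio", "false"))   // rejected, as in the run above
	fmt.Println(validateCNI("crio", "kindnet")) // accepted
}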

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (65.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-019732 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-019732 --no-kubernetes --driver=kvm2  --container-runtime=crio: (1m3.84973201s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-019732 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-019732 status -o json: exit status 2 (264.658522ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-019732","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-019732
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-019732: (1.105424929s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (65.22s)
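
The exit-status-2 JSON above is the machine-readable profile status: the host is running while the kubelet and API server are stopped, because the profile was restarted with --no-kubernetes. Below is a small sketch of consuming that document in Go; the struct mirrors only the keys shown in the output above and is a convenience type for this report, not a published minikube API.

// statusjson.go: parse the `minikube status -o json` document shown above.
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

// profileStatus mirrors the keys in the status output above, nothing more.
type profileStatus struct {
	Name       string
	Host       string
	Kubelet    string
	APIServer  string
	Kubeconfig string
	Worker     bool
}

func main() {
	raw := `{"Name":"NoKubernetes-019732","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`
	var st profileStatus
	if err := json.Unmarshal([]byte(raw), &st); err != nil {
		log.Fatal(err)
	}
	// Exit status 2 above corresponds to this shape: running host,
	// kubelet and API server intentionally stopped.
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
}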

                                                
                                    
x
+
TestNoKubernetes/serial/Start (48.05s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-019732 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-019732 --no-kubernetes --driver=kvm2  --container-runtime=crio: (48.052068201s)
--- PASS: TestNoKubernetes/serial/Start (48.05s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-019732 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-019732 "sudo systemctl is-active --quiet service kubelet": exit status 1 (218.319839ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:169: (dbg) Done: out/minikube-linux-amd64 profile list: (1.050870584s)
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.94s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-019732
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-019732: (1.289003903s)
--- PASS: TestNoKubernetes/serial/Stop (1.29s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (22.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-019732 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-019732 --driver=kvm2  --container-runtime=crio: (22.22343955s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (22.22s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-019732 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-019732 "sudo systemctl is-active --quiet service kubelet": exit status 1 (205.447626ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.56s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.56s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (100.13s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3826954583 start -p stopped-upgrade-262847 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3826954583 start -p stopped-upgrade-262847 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (50.27926388s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3826954583 -p stopped-upgrade-262847 stop
E1205 21:26:49.076258  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3826954583 -p stopped-upgrade-262847 stop: (1.441747138s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-262847 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-262847 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (48.406693748s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (100.13s)
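
The upgrade path exercised above is three commands in sequence: start the profile with the old release binary, stop it, then start the same profile with the binary under test so it adopts and upgrades the stopped cluster. The sketch below only illustrates that ordering; the binary paths, profile name, and flags are placeholders.

// upgradeflow.go: the stopped-binary upgrade sequence from the test above,
// driven via os/exec. Paths and the profile name are placeholders.
package main

import (
	"log"
	"os"
	"os/exec"
)

func run(bin string, args ...string) {
	cmd := exec.Command(bin, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("%s %v: %v", bin, args, err)
	}
}

func main() {
	oldBin, newBin, profile := "/tmp/minikube-old", "./minikube", "upgrade-demo"
	run(oldBin, "start", "-p", profile, "--driver=kvm2", "--container-runtime=crio")
	run(oldBin, "-p", profile, "stop")
	// The new binary picks up the stopped profile and upgrades it in place.
	run(newBin, "start", "-p", profile, "--driver=kvm2", "--container-runtime=crio")
}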

                                                
                                    
x
+
TestPause/serial/Start (63.82s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-068873 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-068873 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m3.817312642s)
--- PASS: TestPause/serial/Start (63.82s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (70.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-279893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-279893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m10.714908047s)
--- PASS: TestNetworkPlugins/group/auto/Start (70.72s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.84s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-262847
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.84s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (94.91s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-279893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
E1205 21:27:59.394411  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:28:16.319879  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-279893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m34.910377129s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (94.91s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-279893 "pgrep -a kubelet"
I1205 21:28:46.466504  300765 config.go:182] Loaded profile config "auto-279893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-279893 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-7j2gb" [bc96c329-e7f2-4ea6-86c3-190804bf1479] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-7j2gb" [bc96c329-e7f2-4ea6-86c3-190804bf1479] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005568246s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.36s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-279893 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-279893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)
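Note: Localhost checks that the pod can reach a port on its own loopback interface; the netcat pod is expected to be listening on 8080, so the probe is a single zero-I/O connect:

    kubectl --context auto-279893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"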

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-279893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.13s)
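Note: HairPin is the hairpin-NAT case: the pod dials its own Service name (netcat), and the connection must be routed back to the very pod that initiated it. The probe is the same nc connect as Localhost, just pointed at the service DNS name instead of loopback:

    kubectl --context auto-279893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"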

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (94.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-279893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-279893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m34.316465125s)
--- PASS: TestNetworkPlugins/group/calico/Start (94.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-jrhgp" [b726cba5-e0b6-4787-9c47-e0d3b5a92ff5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005910136s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-279893 "pgrep -a kubelet"
I1205 21:29:21.397422  300765 config.go:182] Loaded profile config "kindnet-279893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-279893 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-42nv9" [f5ad209a-afed-476c-b2c6-9fcba075a1a4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-42nv9" [f5ad209a-afed-476c-b2c6-9fcba075a1a4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.005260301s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-279893 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-279893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-279893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (95.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-279893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-279893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m35.219897746s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (95.22s)
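Note: unlike the named CNIs above, this variant hands --cni a manifest path, so minikube applies testdata/kube-flannel.yaml itself instead of using one of its built-in CNI configurations:

    out/minikube-linux-amd64 start -p custom-flannel-279893 --memory=3072 \
      --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=crio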

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (99.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-279893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-279893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m39.743225119s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (99.74s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-w6rw8" [abadbfb2-8173-4e2b-a5ba-8876045a0b76] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006171367s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
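Note: ControllerPod waits for the CNI's own daemon pod rather than a workload; for calico that is the calico-node DaemonSet pod, selected by the k8s-app=calico-node label shown above. An equivalent manual check (illustrative only):

    kubectl --context calico-279893 -n kube-system get pods -l k8s-app=calico-node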

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-279893 "pgrep -a kubelet"
I1205 21:30:53.874888  300765 config.go:182] Loaded profile config "calico-279893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-279893 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-dstm8" [0207bdab-9971-4e4e-9152-54ae60bbefe2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-dstm8" [0207bdab-9971-4e4e-9152-54ae60bbefe2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.003979848s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (76.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-279893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-279893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m16.688520947s)
--- PASS: TestNetworkPlugins/group/flannel/Start (76.69s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-279893 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.20s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-279893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-279893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-279893 "pgrep -a kubelet"
I1205 21:31:10.335154  300765 config.go:182] Loaded profile config "custom-flannel-279893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.25s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-279893 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-l6lfb" [41d4b2ff-02f1-432a-a8bf-6d43dc0f806f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-l6lfb" [41d4b2ff-02f1-432a-a8bf-6d43dc0f806f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.005366772s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-279893 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-279893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-279893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (66.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-279893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-279893 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m6.337119211s)
--- PASS: TestNetworkPlugins/group/bridge/Start (66.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-279893 "pgrep -a kubelet"
I1205 21:31:29.517590  300765 config.go:182] Loaded profile config "enable-default-cni-279893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-279893 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-5s2vv" [3e221625-2b92-4ee5-ab04-0053b2ef91aa] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-5s2vv" [3e221625-2b92-4ee5-ab04-0053b2ef91aa] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.005644751s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-279893 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-279893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-279893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (86.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-425614 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-425614 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m26.249086596s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (86.25s)
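Note: --embed-certs makes minikube embed the client certificate and key directly in the generated kubeconfig entry instead of pointing at files under the profile directory. An illustrative way to confirm (assuming the default kubeconfig; not part of the test) is to look for client-certificate-data in the minified config for this context:

    kubectl config view --raw --minify --context embed-certs-425614 | grep client-certificate-data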

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-dq5ct" [729825ad-3131-4a7e-bc20-b77626544a75] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00482437s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-279893 "pgrep -a kubelet"
I1205 21:32:26.138017  300765 config.go:182] Loaded profile config "flannel-279893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-279893 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-mznhv" [888cf5a0-55d5-4d73-893b-b1088a9e236c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-mznhv" [888cf5a0-55d5-4d73-893b-b1088a9e236c] Running
I1205 21:32:31.711525  300765 config.go:182] Loaded profile config "bridge-279893": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.31.2
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.00531783s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-279893 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.42s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (13.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-279893 replace --force -f testdata/netcat-deployment.yaml
net_test.go:149: (dbg) Done: kubectl --context bridge-279893 replace --force -f testdata/netcat-deployment.yaml: (1.230437888s)
I1205 21:32:32.957456  300765 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1205 21:32:33.641439  300765 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-4cjwj" [169caa0f-f9ac-4699-924a-d83b709841fc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-4cjwj" [169caa0f-f9ac-4699-924a-d83b709841fc] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004865378s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (13.08s)
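Note: the kapi.go lines above are the harness waiting for the Deployment's status to catch up with its spec: metadata.generation is bumped whenever the spec changes, and the deployment controller sets status.observedGeneration once it has processed that revision. The same two fields can be read directly, for example:

    kubectl --context bridge-279893 get deploy netcat \
      -o jsonpath='{.metadata.generation} {.status.observedGeneration}'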

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-279893 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-279893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-279893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-279893 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-279893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-279893 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.13s)
E1205 22:02:19.896721  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (76.05s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-500648 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-500648 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m16.050759842s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (76.05s)
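Note: --preload=false disables minikube's preloaded image tarball for this Kubernetes version, so the component images are pulled individually at start time rather than being extracted into the container runtime up front (hence the dedicated no-preload group). The command is otherwise a standard start:

    out/minikube-linux-amd64 start -p no-preload-500648 --memory=2200 --preload=false \
      --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.31.2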

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.10s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-751353 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1205 21:33:16.319600  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-751353 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (1m14.102626856s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (74.10s)
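Note: the default-k8s-diff-port group starts the API server on 8444 instead of minikube's default 8443 via --apiserver-port. Once the start finishes, the context's server URL should show the non-default port; a quick illustrative check:

    kubectl --context default-k8s-diff-port-751353 cluster-info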

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.32s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-425614 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dcb099cf-71ce-4934-b4a9-70adc514e10f] Pending
helpers_test.go:344: "busybox" [dcb099cf-71ce-4934-b4a9-70adc514e10f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dcb099cf-71ce-4934-b4a9-70adc514e10f] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00384237s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-425614 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.32s)
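Note: DeployApp creates a plain busybox pod from testdata/busybox.yaml, waits for it to reach Running, and then execs a trivial shell command (ulimit -n) in it, presumably as an end-to-end exec smoke check:

    kubectl --context embed-certs-425614 create -f testdata/busybox.yaml
    kubectl --context embed-certs-425614 exec busybox -- /bin/sh -c "ulimit -n"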

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-425614 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-425614 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.045723422s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-425614 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.13s)
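Note: EnableAddonWhileActive enables the metrics-server addon against the running cluster, with --images and --registries redirecting the addon to the echoserver image on fake.domain, and then describes the resulting deployment. The manual equivalent is the two Run lines above:

    out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-425614 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context embed-certs-425614 describe deploy/metrics-server -n kube-system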

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-500648 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [bc23d9cb-0d08-43e9-8bea-32e36edfe599] Pending
helpers_test.go:344: "busybox" [bc23d9cb-0d08-43e9-8bea-32e36edfe599] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1205 21:34:15.167203  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:34:15.173485  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [bc23d9cb-0d08-43e9-8bea-32e36edfe599] Running
E1205 21:34:15.185124  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:34:15.207110  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:34:15.248590  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:34:15.330774  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:34:15.492386  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:34:15.814226  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:34:16.456178  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004718251s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-500648 exec busybox -- /bin/sh -c "ulimit -n"
E1205 21:34:20.300186  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.28s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-751353 create -f testdata/busybox.yaml
E1205 21:34:17.738076  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7f734192-b575-49f2-8488-2e08e14d83e5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7f734192-b575-49f2-8488-2e08e14d83e5] Running
E1205 21:34:25.422083  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/kindnet-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:34:27.781927  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/auto-279893/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004497915s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-751353 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.27s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-500648 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-500648 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.98s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-751353 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-751353 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (672.68s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-425614 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1205 21:36:08.106016  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:10.574057  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:10.580609  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:10.592160  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:10.613696  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:10.655264  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:10.736784  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:10.898132  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:11.220117  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:11.861881  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:13.144203  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:15.706380  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:36:20.828394  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-425614 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (11m12.412652687s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-425614 -n embed-certs-425614
E1205 21:47:19.896474  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (672.68s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (616.55s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-500648 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-500648 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (10m16.270294918s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-500648 -n no-preload-500648
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (616.55s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (546.17s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-751353 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1205 21:37:09.550060  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/calico-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:10.739233  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:19.896198  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:19.902664  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:19.914097  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:19.935626  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:19.977153  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:20.058710  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:20.220383  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:20.542717  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:21.184325  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:22.466024  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:25.029034  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:30.150552  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:32.514606  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/custom-flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:32.944518  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:32.950969  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:32.962446  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:32.983993  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:33.025523  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:33.107109  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:33.268810  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:33.590749  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:34.233000  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:35.514841  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:38.077090  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:40.392021  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:43.199247  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:51.701298  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/enable-default-cni-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:37:53.440642  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:38:00.874115  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/flannel-279893/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-751353 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (9m5.890990476s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-751353 -n default-k8s-diff-port-751353
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (546.17s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (5.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-601806 --alsologtostderr -v=3
E1205 21:38:12.149969  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
E1205 21:38:13.922607  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-601806 --alsologtostderr -v=3: (5.31204286s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (5.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-601806 -n old-k8s-version-601806
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-601806 -n old-k8s-version-601806: exit status 7 (77.673042ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-601806 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
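Note: EnableAddonAfterStop first confirms the profile is down (minikube status prints Stopped and exits with status 7, which the test tolerates as "may be ok") and then enables the dashboard addon against the stopped profile. Enabling an addon while stopped appears to only update the profile's addon configuration, so it takes effect on the next start. Roughly:

    out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-601806 -n old-k8s-version-601806
    out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-601806 \
      --images=MetricsScraper=registry.k8s.io/echoserver:1.4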

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (45.85s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-185514 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
E1205 22:01:49.076429  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/addons-523528/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-185514 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (45.854467601s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (45.85s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-185514 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1205 22:02:32.944868  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/bridge-279893/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-185514 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.11920583s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.12s)
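
For reference, the override syntax used above: --images and --registries substitute the image name and registry an addon deploys, which presumably lets the test exercise the enable path without pulling the real metrics-server image:

	# enable metrics-server with a stand-in image pointed at an unreachable registry
	out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-185514 \
	  --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain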

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (7.38s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-185514 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-185514 --alsologtostderr -v=3: (7.384762623s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (7.38s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-185514 -n newest-cni-185514
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-185514 -n newest-cni-185514: exit status 7 (78.175348ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-185514 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (35.05s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-185514 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-185514 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.31.2: (34.746510982s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-185514 -n newest-cni-185514
E1205 22:03:16.319659  300765 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20053-293485/.minikube/profiles/functional-659667/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (35.05s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-185514 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.66s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-185514 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-185514 -n newest-cni-185514
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-185514 -n newest-cni-185514: exit status 2 (256.833542ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-185514 -n newest-cni-185514
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-185514 -n newest-cni-185514: exit status 2 (254.157545ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-185514 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-185514 -n newest-cni-185514
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-185514 -n newest-cni-185514
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.66s)
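
For reference, the pause/unpause round trip this step performs (commands copied from the log above); while paused, "status" exits 2 and reports the apiserver as Paused and the kubelet as Stopped, which the test treats as acceptable:

	out/minikube-linux-amd64 pause -p newest-cni-185514 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-185514 -n newest-cni-185514    # "Paused", exit 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-185514 -n newest-cni-185514      # "Stopped", exit 2
	out/minikube-linux-amd64 unpause -p newest-cni-185514 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-185514 -n newest-cni-185514    # no error once unpaused
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-185514 -n newest-cni-185514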

                                                
                                    

Test skip (34/315)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/Volcano (0.35s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-amd64 -p addons-523528 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.35s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-279893 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-279893

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-279893

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-279893

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-279893

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-279893

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-279893

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-279893

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-279893

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-279893

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-279893

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-279893

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-279893" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-279893" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-279893

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-279893"

                                                
                                                
----------------------- debugLogs end: kubenet-279893 [took: 3.216027857s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-279893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-279893
--- SKIP: TestNetworkPlugins/group/kubenet (3.38s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-279893 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-279893

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-279893

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-279893

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-279893

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-279893

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-279893

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-279893

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-279893

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-279893

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-279893

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-279893

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-279893" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-279893

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-279893

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-279893" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-279893

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-279893

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-279893" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-279893" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-279893" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-279893" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-279893" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

>>> host: kubelet daemon config:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

>>> k8s: kubelet logs:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-279893

>>> host: docker daemon status:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

>>> host: docker daemon config:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

>>> host: docker system info:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

>>> host: cri-docker daemon status:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

>>> host: cri-docker daemon config:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

>>> host: cri-dockerd version:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

>>> host: containerd daemon status:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

>>> host: containerd daemon config:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

>>> host: containerd config dump:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

>>> host: crio daemon status:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

>>> host: crio daemon config:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

>>> host: /etc/crio:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

>>> host: crio config:
* Profile "cilium-279893" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-279893"

----------------------- debugLogs end: cilium-279893 [took: 3.437784612s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-279893" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-279893
--- SKIP: TestNetworkPlugins/group/cilium (3.61s)
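Note on the repeated errors above: every collector in this debugLogs dump fails the same way because the "cilium-279893" profile was never started, so no context for it was ever written to the kubeconfig; the ">>> k8s: kubectl config" section shows clusters, contexts and users all null. The following is a minimal sketch of that lookup, assuming client-go is on the module path; the context name is taken from the log and everything else is illustrative, not the harness's actual code.

package main

import (
	"fmt"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config), the same file the kubectl
	// collectors above consult.
	cfg, err := clientcmd.LoadFromFile(clientcmd.RecommendedHomeFile)
	if err != nil {
		fmt.Println("could not load kubeconfig:", err)
		return
	}
	// With an empty kubeconfig this lookup fails, which is why each kubectl-based
	// collector prints the same "does not exist" error.
	if _, ok := cfg.Contexts["cilium-279893"]; !ok {
		fmt.Println(`error: context "cilium-279893" does not exist`)
	}
}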

x
+
TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-234383" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-234383
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)
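The skip logged at start_stop_delete_test.go:103 is driver-gated: the group only exercises --disable-driver-mounts on the virtualbox driver, so on this KVM run it exits before any cluster is created. A rough sketch of such a gate is below; the driver is read from an assumed environment variable purely for illustration, not from the harness's real flags.

package sketch

import (
	"os"
	"testing"
)

// testDriver stands in for however the real harness exposes the driver under
// test; an environment variable is assumed here only to keep the sketch runnable.
func testDriver() string {
	return os.Getenv("TEST_DRIVER")
}

func TestDisableDriverMountsSketch(t *testing.T) {
	if testDriver() != "virtualbox" {
		t.Skipf("skipping %s - only runs on virtualbox", t.Name())
	}
	// The real test would start a profile with --disable-driver-mounts here
	// and assert that the driver's default host mounts are absent in the VM.
}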